November 4, 2020 at 3:59 pm
I have some full-text catalogs and, for some time now, query times have really slowed down. The only solution I have found so far is to rebuild the catalogs, sometimes more than once a day, with this command:
ALTER FULLTEXT CATALOG TABLE1 REBUILD
Why is it necessary to rebuild the catalogs? Are there other solutions to this problem?
After running the REBUILD command the query is instantaneous; when it slows down, it can take several seconds or even a minute.
My catalogs were created with this code:
DECLARE @catalog VARCHAR(255)
DECLARE @pkCatalog VARCHAR(255)
SET @catalog = 'TABLE1'
SET @pkCatalog = 'PK_' + @catalog
EXEC sp_fulltext_database 'enable'
EXEC sp_fulltext_catalog @catalog, 'create'
EXEC sp_fulltext_table @catalog, 'create', @catalog, @pkCatalog
----------------------------------- BEGIN CURSOR: cursorColumn
DECLARE cursorColumn CURSOR
FOR
select name from syscolumns where id = object_id(@catalog) and xtype = 167 -- 167 = varchar
FOR READ ONLY
OPEN cursorColumn
DECLARE @colName SYSNAME
FETCH NEXT FROM cursorColumn INTO @colName
WHILE (@@fetch_status <> -1)
BEGIN
EXEC sp_fulltext_column @catalog,@colName,'add'
FETCH NEXT FROM cursorColumn INTO @colName
END
CLOSE cursorColumn
DEALLOCATE cursorColumn
------------------------------------- END CURSOR: cursorColumn
EXEC sp_fulltext_table @catalog,'activate'
EXEC sp_fulltext_table @catalog, 'Start_change_tracking'
EXEC sp_fulltext_table @catalog, 'Start_background_updateindex'
GO
November 4, 2020 at 5:34 pm
I guess my question would be... why are you creating a full text catalog for EVERY VARCHAR in the table?
I'll also ask why you're using code that has been deprecated for a very long time instead of using the new stuff?
--Jeff Moden
Change is inevitable... Change for the better is not.
November 4, 2020 at 9:22 pm
Sounds like the volume has increased and they need to be rebuilt more often.
For better, quicker answers, click on the following...
http://www.sqlservercentral.com/articles/Best+Practices/61537/
For better answers on performance questions, click on the following...
http://www.sqlservercentral.com/articles/SQLServerCentral/66909/
November 4, 2020 at 11:20 pm
For some reason (perhaps someone clicked on report instead of reply by accident), my previous post was reported as spam, so trying again...
I have a couple of questions...
Why does your code setup full text catalogues on EVERY VARCHAR column in the table? Is that really necessary? Depending on the schema of your table, that could be a major part of the problem you're experiencing.
Why are you using sp_fulltext_table? It has been deprecated for quite a while now. Here's the warning about it from BOL...
Important
This feature is in maintenance mode and may be removed in a future version of Microsoft SQL Server. Avoid using this feature in new development work, and plan to modify applications that currently use this feature. Use CREATE FULLTEXT INDEX, ALTER FULLTEXT INDEX, and DROP FULLTEXT INDEX instead.
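For reference, a rough sketch of what the non-deprecated DDL equivalent might look like (the catalog and key-index names follow your script's naming convention; the column name is hypothetical):

```sql
-- Modern replacement for sp_fulltext_catalog / sp_fulltext_table / sp_fulltext_column:
CREATE FULLTEXT CATALOG TABLE1;
GO
CREATE FULLTEXT INDEX ON dbo.TABLE1
    (SomeVarcharColumn)            -- hypothetical column; list only the columns you need
    KEY INDEX PK_TABLE1            -- the unique key index, per your PK_ naming convention
    ON TABLE1                      -- the catalog created above
    WITH CHANGE_TRACKING AUTO;     -- replaces Start_change_tracking / Start_background_updateindex
GO
```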
--Jeff Moden
Change is inevitable... Change for the better is not.
November 5, 2020 at 1:14 pm
Jeff Moden wrote:For some reason (perhaps someone clicked on report instead of reply by accident), my previous post was reported as spam, so trying again...
Saw that. Fixed it.
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
November 5, 2020 at 1:24 pm
Jeff Moden wrote:Why does your code setup full text catalogues on EVERY VARCHAR column in the table? Is that really necessary? Depending on the schema of your table, that could be a major part of the problem you're experiencing.
Well, I need to full-text search in all columns.
Jeff Moden wrote:Why are you using sp_fulltext_table? It has been deprecated for quite a while now. Here's the warning about it from BOL...
Thanks a lot for this information. Anyway, I think this is not what is degrading the performance, or could it be?
Thanks & regards,
Rui
November 5, 2020 at 1:54 pm
Jeff Moden wrote:Why does your code setup full text catalogues on EVERY VARCHAR column in the table? Is that really necessary? Depending on the schema of your table, that could be a major part of the problem you're experiencing.
Well, I need to full-text search in all columns.
Jeff Moden wrote:Why are you using sp_fulltext_table? It has been deprecated for quite a while now. Here's the warning about it from BOL...
Thanks a lot for this information. Anyway, I think this is not what is degrading the performance, or could it be?
Thanks & regards,
Rui
I'm trying to begin determining what is degrading your performance because you've not provided much information, especially about the table in question. Can you post the CREATE TABLE statement for it? Please include the Insert/Update rate and pattern for this table. If possible, a per-column breakdown would work.
And, yep... I know the old stuff probably isn't the problem but, if you didn't know that information, it could mean that you could have a serious problem coming up in the future.
--Jeff Moden
Change is inevitable... Change for the better is not.
November 5, 2020 at 3:58 pm
Must admit I've not come across a problem like this, but then I do not have large catalogs.
However, one problem I have found with performance is trying to do it all at once, i.e. searching a catalog together with table joins; sometimes the query optimizer will produce a poor plan.
Now I do the full-text query first to get the IDs into a temp table, and then do the table joins.
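As a sketch of that two-step approach (using the ENTIDADE table posted later in this thread; the search term is a placeholder):

```sql
-- Step 1: run the full-text query on its own and capture the keys.
SELECT ft.[KEY] AS COD_ENTD_ID, ft.RANK
INTO #Hits
FROM CONTAINSTABLE(dbo.ENTIDADE, *, '"searchterm*"') AS ft;

-- Step 2: join the small key set back to the base table(s);
-- the optimizer now only has to join a handful of rows.
SELECT e.*, h.RANK
FROM #Hits AS h
INNER JOIN dbo.ENTIDADE AS e
        ON e.COD_ENTD_ID = h.COD_ENTD_ID
ORDER BY h.RANK DESC;
```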
Far away is close at hand in the images of elsewhere.
Anon.
November 5, 2020 at 4:39 pm
Here is the table:
CREATE TABLE [dbo].[ENTIDADE](
[COD_ENTD_ID] [decimal](18, 0) IDENTITY(0,1) NOT NULL,
[COD_ENTD] [varchar](20) NULL,
[COD_TIPO_ENTD_ID] [decimal](18, 0) NOT NULL,
[DES_SIGL] [varchar](100) NOT NULL,
[DES_DESG_SOCL] [varchar](100) NULL,
[DES_MORD] [varchar](100) NULL,
[DES_LOCL] [varchar](100) NULL,
[DES_CODG_POST] [varchar](50) NULL,
[COD_PAIS_ID] [decimal](18, 0) NULL,
[COD_DIST_ID] [decimal](18, 0) NULL,
[DES_TELF] [varchar](30) NULL,
[DES_FAX] [varchar](30) NULL,
[DES_EMAL] [varchar](50) NULL,
[DES_WWW_SITE] [varchar](100) NULL,
[DES_NUM_CONT] [varchar](20) NULL,
[DES_OBSR_ENTD] [varchar](255) NULL,
[DTH_INSR] [datetime] NOT NULL,
[COD_UTLZ_ID] [decimal](18, 0) NOT NULL,
[COD_UTLZ_ID_ALTR] [decimal](18, 0) NOT NULL,
[DTH_ALTR] [datetime] NOT NULL,
[ENM_LIXO] [bit] NOT NULL,
[ENM_ACTV] [bit] NOT NULL,
[DES_CONC] [varchar](50) NULL,
[DES_FRGS] [varchar](50) NULL,
[DES_REGM_JURD] [varchar](50) NULL,
[DES_PRPR122] [varchar](32) NULL,
CONSTRAINT [PK_ENTIDADE] PRIMARY KEY CLUSTERED
(
[COD_ENTD_ID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 90) ON [PRIMARY],
CONSTRAINT [IX_ENTIDADE] UNIQUE NONCLUSTERED
(
[DES_SIGL] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 90) ON [PRIMARY]
) ON [PRIMARY]
GO
ALTER TABLE [dbo].[ENTIDADE] ADD CONSTRAINT [DF_ENTIDADE__ENM_LIXO] DEFAULT (0) FOR [ENM_LIXO]
GO
ALTER TABLE [dbo].[ENTIDADE] ADD CONSTRAINT [DF_ENTIDADE__ENM_ACTV] DEFAULT (1) FOR [ENM_ACTV]
GO
ALTER TABLE [dbo].[ENTIDADE] WITH CHECK ADD CONSTRAINT [FK_ENTIDADE_PAIS] FOREIGN KEY([COD_PAIS_ID])
REFERENCES [dbo].[PAIS] ([COD_PAIS_ID])
GO
ALTER TABLE [dbo].[ENTIDADE] CHECK CONSTRAINT [FK_ENTIDADE_PAIS]
GO
ALTER TABLE [dbo].[ENTIDADE] WITH CHECK ADD CONSTRAINT [FK_ENTIDADE_PAIS_DISTRITO] FOREIGN KEY([COD_DIST_ID])
REFERENCES [dbo].[PAIS_DISTRITO] ([COD_DIST_ID])
GO
ALTER TABLE [dbo].[ENTIDADE] CHECK CONSTRAINT [FK_ENTIDADE_PAIS_DISTRITO]
GO
ALTER TABLE [dbo].[ENTIDADE] WITH CHECK ADD CONSTRAINT [FK_ENTIDADE_TIPO_ENTIDADE] FOREIGN KEY([COD_TIPO_ENTD_ID])
REFERENCES [dbo].[TIPO_ENTIDADE] ([COD_TIPO_ENTD_ID])
GO
ALTER TABLE [dbo].[ENTIDADE] CHECK CONSTRAINT [FK_ENTIDADE_TIPO_ENTIDADE]
GO
ALTER TABLE [dbo].[ENTIDADE] WITH CHECK ADD CONSTRAINT [fk1_entidade_utilizador] FOREIGN KEY([COD_UTLZ_ID])
REFERENCES [dbo].[UTILIZADOR] ([COD_UTLZ_ID])
GO
ALTER TABLE [dbo].[ENTIDADE] CHECK CONSTRAINT [fk1_entidade_utilizador]
GO
ALTER TABLE [dbo].[ENTIDADE] WITH CHECK ADD CONSTRAINT [fk2_entidade_utilizador] FOREIGN KEY([COD_UTLZ_ID_ALTR])
REFERENCES [dbo].[UTILIZADOR] ([COD_UTLZ_ID])
GO
ALTER TABLE [dbo].[ENTIDADE] CHECK CONSTRAINT [fk2_entidade_utilizador]
GO
November 6, 2020 at 2:27 pm
To be honest, the column names are a bit cryptic but making some guesses about the column names, I'd say that the DES_OBSR_ENTD is likely the only column that would really benefit from Full Text Searches (FTS), not because it's the widest but because it seems likely to be the only place where separate words might be useful in searches.
With that said (and I implied it before), you're seriously "over-indexed" for FTS, with little or no benefit on most of the VARCHAR columns you have included. I would strongly recommend that someone sit down and actually figure out which columns can take advantage of FTS functionality, and limit it to those one or two columns.
--Jeff Moden
Change is inevitable... Change for the better is not.
November 6, 2020 at 2:36 pm
The purpose of including all columns in the full-text catalog is the ability to search directly in all columns and find all the records of the table where at least one column matches a search criterion. Basically, to enable a “google”-type search based on all columns.
If I don't add all the columns in the catalog, how can I do this?
Thanks,
Rui
November 6, 2020 at 3:52 pm
To be honest, I've never seen anyone use it in the manner you've just described and that was a misunderstanding on my part because I've only used it for columns that contain multiple "words" in a given column per row and then only for single column searches. That, unfortunately, also means that I don't know a way around your particular problem. It IS something that I'm going to add to my to do list of things to study and so I thank you for that.
--Jeff Moden
Change is inevitable... Change for the better is not.
November 6, 2020 at 4:16 pm
Personally, I would concatenate the varchar columns into a single column and, depending on how volatile the data is, use either a computed column or a separate table, and full-text index that column. The advantage would be searching one column instead of many.
Performance would play a big part in which method you use.
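A rough sketch of the computed-column variant (only a few of the columns shown; the column name DES_SEARCH_ALL is made up, and the NULL handling is an assumption):

```sql
-- One persisted computed column concatenating the searchable varchars...
ALTER TABLE dbo.ENTIDADE
    ADD DES_SEARCH_ALL AS
        (ISNULL(DES_SIGL, '')      + ' ' +
         ISNULL(DES_DESG_SOCL, '') + ' ' +
         ISNULL(DES_LOCL, '')      + ' ' +
         ISNULL(DES_OBSR_ENTD, '')) PERSISTED;
GO
-- ...and a full-text index on just that one column
-- (a table can only have one full-text index, so this
-- replaces the existing all-columns index).
CREATE FULLTEXT INDEX ON dbo.ENTIDADE (DES_SEARCH_ALL)
    KEY INDEX PK_ENTIDADE
    WITH CHANGE_TRACKING AUTO;
GO
```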
Far away is close at hand in the images of elsewhere.
Anon.
November 7, 2020 at 9:00 am
If I concatenate the varchar columns into a single column, I will (almost) double the table size. That doesn't look so good either, and I don't like the idea very much.
On the other hand, will I really get better performance?
With the concatenated column, or with separate columns (as it is now), I will have the same number of terms in the catalog. And if I have the same number of terms in the catalog, won't the query have the same performance?
Thanks & regards,
Rui
November 8, 2020 at 4:07 pm
Rui wrote:If I concatenate the varchar columns into a single column, I will (almost) double the table size. That doesn't look so good either, and I don't like the idea very much.
On the other hand, will I really get better performance?
With the concatenated column, or with separate columns (as it is now), I will have the same number of terms in the catalog. And if I have the same number of terms in the catalog, won't the query have the same performance?
Thanks & regards,
Rui
Sometimes it's worth increasing disk space usage for performance. That's exactly what indexes do. When you get right down to it, Non-Clustered Indexes are nothing more than a duplication of data with a different sort order dictated by different keys.
However, I have to agree that the concatenated column idea would likely have a pretty low ROI in the disk space vs performance trade off.
Although it would be a surprise to me if the concatenated column substantially improved performance, I've been surprised by things SQL Server does before. One good test result is certainly worth a thousand supposed expert opinions and, despite my personal misgivings on the subject, I'd likely set up a smaller test system and give it a try.
More importantly, though, I'd be looking at the use-case that requires the "Google-like" functionality to first see if it's actually a necessity. In that bit of a study, it may lend ideas as to how to do things a little differently for a higher ROI and lower maintenance woes.
--Jeff Moden
Change is inevitable... Change for the better is not.