March 20, 2009 at 10:06 am
I set up a server last year to hold data from event logs on different servers. The basic process is that I use LogParser to dump the Security and Application logs into a staging table on the server. The next morning a job imports the data from the staging table into the reporting table and truncates the staging table.
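For context, the morning job is just a copy-and-truncate, something like this (table and column names here are made up for illustration):

BEGIN TRANSACTION;

-- Copy the day's rows from staging into the reporting table.
INSERT INTO dbo.EventReporting (TimeGenerated, EventID, SourceName, Message)
SELECT TimeGenerated, EventID, SourceName, Message
FROM dbo.EventStaging;

-- Clear the staging table for the next load.
TRUNCATE TABLE dbo.EventStaging;

COMMIT TRANSACTION;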
I was in a hurry and didn't have time to set up full-text search properly, so when I had some performance issues during the import I just wrote two quick stored procedures: one to disable full-text indexing and one to enable it, on all the tables. The disable runs first, then the imports, then the enable. I'm revisiting this now to improve query performance.
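The two quick procedures amount to something like this per table (again a made-up table name):

-- Suspend the full-text index before the import.
ALTER FULLTEXT INDEX ON dbo.EventReporting DISABLE;

-- ... imports run here ...

-- Re-activate the full-text index after the import.
ALTER FULLTEXT INDEX ON dbo.EventReporting ENABLE;
-- Depending on the change-tracking setting, a population may also need
-- to be kicked off manually after re-enabling, e.g.:
-- ALTER FULLTEXT INDEX ON dbo.EventReporting START FULL POPULATION;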
Over the last two weeks I went back and created timestamp columns on all the reporting tables that have full-text indexes, plus a non-clustered index on each one. Then I set up manual change tracking, with each job running once an hour, and changed the process so it no longer disables the full-text indexes when the imports run.
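The new setup boils down to roughly this (same made-up names as above). With manual change tracking the hourly job step is START UPDATE POPULATION; if the jobs run timestamp-based incremental populations instead, the step is START INCREMENTAL POPULATION, which is what the timestamp column is required for:

-- Add the timestamp (rowversion) column and index it.
ALTER TABLE dbo.EventReporting ADD RowVer timestamp NOT NULL;

CREATE NONCLUSTERED INDEX IX_EventReporting_RowVer
    ON dbo.EventReporting (RowVer);

-- Switch the full-text index to manual change tracking.
ALTER FULLTEXT INDEX ON dbo.EventReporting SET CHANGE_TRACKING MANUAL;

-- Hourly SQL Agent job step: push tracked changes into the index.
ALTER FULLTEXT INDEX ON dbo.EventReporting START UPDATE POPULATION;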
Now my problem is that the import time went from 15 minutes to several hours. I don't see any blocking, except when an incremental update job tries to run while that server's table is receiving new data. The imports alone just take much longer now.
Every server has two tables: a staging table to receive data from the server, and a real table that people access when running reports.
March 23, 2009 at 11:28 am
You mention a timestamp column and a non-clustered index. Do you have a primary key defined? Do you have a clustered index defined? (A quick way to check is sketched below the link.)
Here is a link that provides some guidance about clustered indexes versus heaps:
http://www.sql-server-performance.com/tips/clustered_indexes_p1.aspx
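One quick way to check, sketched against a made-up table name (type_desc shows HEAP when there is no clustered index):

-- List the indexes on the table, flagging the primary key.
SELECT i.name, i.type_desc, i.is_primary_key, i.is_unique
FROM sys.indexes AS i
WHERE i.object_id = OBJECT_ID('dbo.EventReporting');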
March 23, 2009 at 11:30 am
Yes, my PK is a generic identity/bigint column, and my clustered index is on the TimeGenerated column of the event. The reason I'm clustering on a datetime column is that my query looks 3 days back, and I had issues where too much data resulted in an index scan.
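So the layout is roughly this (column names made up for illustration): a non-clustered PK on the identity column, with the clustered index on TimeGenerated so the 3-day lookback can seek rather than scan:

CREATE TABLE dbo.EventReporting (
    EventRowID    bigint IDENTITY(1,1) NOT NULL,
    TimeGenerated datetime NOT NULL,
    EventID       int NULL,
    SourceName    nvarchar(256) NULL,
    Message       nvarchar(max) NULL,
    RowVer        timestamp NOT NULL,
    CONSTRAINT PK_EventReporting PRIMARY KEY NONCLUSTERED (EventRowID)
);

CREATE CLUSTERED INDEX CIX_EventReporting_TimeGenerated
    ON dbo.EventReporting (TimeGenerated);

-- The 3-day lookback, which can seek on the clustered index:
SELECT EventRowID, TimeGenerated, EventID, Message
FROM dbo.EventReporting
WHERE TimeGenerated >= DATEADD(day, -3, GETDATE());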