July 28, 2015 at 10:29 am
I have a small set of tables whose stats are becoming stale, roughly three times a day. One table is about 40 million rows, and because we are constantly deleting and adding rows, the net row count barely grows. We update statistics with FULLSCAN every night; when stats become outdated during business hours we run an update with a 10% sample. What would be the easiest way to resolve this without setting up a job to update stats every 4 hours?
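For reference, this is the kind of maintenance we run today — a sketch only, with a hypothetical table name dbo.ClientData standing in for ours (sys.dm_db_stats_properties requires SQL Server 2008 R2 SP2 / 2012 SP1 or later):

```sql
-- Nightly: full scan for maximum histogram accuracy
UPDATE STATISTICS dbo.ClientData WITH FULLSCAN;

-- During business hours: cheaper 10 percent sample when stats go stale
UPDATE STATISTICS dbo.ClientData WITH SAMPLE 10 PERCENT;

-- Check how stale the stats actually are:
-- compare the modification counter against the row count
SELECT s.name,
       sp.last_updated,
       sp.rows,
       sp.modification_counter
FROM sys.stats AS s
CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp
WHERE s.object_id = OBJECT_ID(N'dbo.ClientData');
```

The DMV query is how we spot the problem: the modification counter climbs fast from the constant delete/insert churn even though sp.rows stays roughly flat.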
I have an import process that deletes all data from the last 30 days and then imports the updated data. This runs every hour for roughly 140 clients. Sometimes the incoming data is larger than what was deleted, sometimes smaller, so the table never grows quickly enough to trigger an automatic statistics update. We have toyed with adding filtered indexes to reduce the time it takes to update stats, but that is only a short-term solution.
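The filtered-index idea would look roughly like this (hypothetical column names; note that a filtered predicate must be deterministic, so it has to be a literal date rather than GETDATE(), which is part of why it's only a short-term fix — the object needs periodic recreation as the window moves):

```sql
-- Hypothetical filtered index covering only the hot 30-day window
CREATE NONCLUSTERED INDEX IX_ClientData_Recent
    ON dbo.ClientData (ClientID, ImportDate)
    WHERE ImportDate >= '20150628';

-- Lighter-weight alternative: filtered statistics alone,
-- without the write-amplification cost of maintaining an index
CREATE STATISTICS st_ClientData_Recent
    ON dbo.ClientData (ImportDate)
    WHERE ImportDate >= '20150628';
```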
July 28, 2015 at 12:13 pm
Cole.Mietzner (7/28/2015)
I have a small set of tables whose stats are becoming stale, roughly three times a day. One table is about 40 million rows, and because we are constantly deleting and adding rows, the net row count barely grows. We update statistics with FULLSCAN every night; when stats become outdated during business hours we run an update with a 10% sample. What would be the easiest way to resolve this without setting up a job to update stats every 4 hours? I have an import process that deletes all data from the last 30 days and then imports the updated data. This runs every hour for roughly 140 clients. Sometimes the incoming data is larger than what was deleted, sometimes smaller, so the table never grows quickly enough to trigger an automatic statistics update. We have toyed with adding filtered indexes to reduce the time it takes to update stats, but that is only a short-term solution.
How about using partitions to segregate the data by some portion of the date and/or time? I don't know exactly how stats work when a table is partitioned, but if each partition had its own stats, this approach might help: the hourly churn would be confined to a few recent partitions, and only those would need refreshing. The number of partitions might not need to be all that large, depending on just how far back the data in this table goes. Hopefully, someone with more experience with partitioned data can chime in and maybe I learn something...
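For what it's worth, per-partition statistics do exist as "incremental statistics" from SQL Server 2014 onward — a hedged sketch only, assuming the same hypothetical dbo.ClientData table partitioned on a date column, with partition numbers chosen for illustration:

```sql
-- Requires SQL Server 2014 or later
ALTER DATABASE CURRENT
    SET AUTO_CREATE_STATISTICS ON (INCREMENTAL = ON);

-- Statistics object that tracks modifications per partition
CREATE STATISTICS st_ClientData_Date
    ON dbo.ClientData (ImportDate)
    WITH INCREMENTAL = ON;

-- Refresh only the partitions that churned,
-- instead of rescanning all 40 million rows
UPDATE STATISTICS dbo.ClientData (st_ClientData_Date)
    WITH RESAMPLE ON PARTITIONS (3, 4);
```

One caveat I've read: the merged histogram is still table-wide, so the main win is cheaper, more frequent refreshes rather than better per-partition estimates.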
Steve (aka sgmunson) 🙂 🙂 🙂