July 30, 2015 at 3:02 pm
We have a very big table, over 300 GB with more than half a billion rows. The table is used for audit purposes by all critical applications, and a lot of inserts happen every second. Due to heavy blocking, we can't purge data while the database is online.
My boss says that partitioning and compressing this table will help with disk space and performance. He comes from an OLAP/data warehouse background, but I highly doubt this will work for OLTP. How do I convince him it won't work? Any articles?
July 30, 2015 at 4:08 pm
You should be able to purge/archive older data without any blocking, unless someone is constantly reading old data too.
What is the table clustered on?
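One way to purge without blocking the inserts is to delete in small batches, so each delete only holds its locks briefly. A minimal sketch, assuming the table is dbo.AuditLog with an AuditDate column (both names are placeholders for whatever you actually have):

-- Sketch only: purge audit rows older than 12 months in small batches
-- so locks stay short-lived and don't escalate to a table lock.
DECLARE @BatchSize int = 5000;
DECLARE @Cutoff datetime2 = DATEADD(MONTH, -12, SYSDATETIME());

WHILE 1 = 1
BEGIN
    DELETE TOP (@BatchSize)
    FROM dbo.AuditLog
    WHERE AuditDate < @Cutoff;

    IF @@ROWCOUNT = 0
        BREAK;                    -- nothing older than the cutoff remains

    WAITFOR DELAY '00:00:01';     -- give concurrent inserts room to breathe
END;

If AuditDate is the leading column of the clustered index, each batch is a cheap range seek; if it isn't, every batch has to scan to find its rows and the blocking problem tends to stay.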
SQL DBA,SQL Server MVP(07, 08, 09) "It's a dog-eat-dog world, and I'm wearing Milk-Bone underwear." "Norm", on "Cheers". Also from "Cheers", from "Carla": "You need to know 3 things about Tortelli men: Tortelli men draw women like flies; Tortelli men treat women like flies; Tortelli men's brains are in their flies".
July 31, 2015 at 4:58 am
Partitioning very rarely helps performance in an OLTP situation. However, if you're looking at a situation where you're rolling off old data, partitioning becomes a pretty viable approach; in fact, that's really what it's built for.
As Scott asked, where you have your clustered index is really going to matter, especially with partitioning. Also, with regard to your existing queries and processes, will you be able to ensure that the queries always have the partitioning key as part of the filter? If not, you may want to avoid partitioning entirely, because a scan across partitions is very costly, more so than a scan of a single table.
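For illustration only, here's a rough sketch of the sliding-window pattern this is describing, assuming a dbo.AuditLog table partitioned monthly on an AuditDate column (all object names and boundary dates below are made up):

-- Monthly partition function and scheme (sketch; real boundaries would be maintained over time).
CREATE PARTITION FUNCTION pfAuditByMonth (datetime2)
AS RANGE RIGHT FOR VALUES ('2015-01-01', '2015-02-01', '2015-03-01');

CREATE PARTITION SCHEME psAuditByMonth
AS PARTITION pfAuditByMonth ALL TO ([PRIMARY]);

-- Cluster on the partitioning column (plus an identity as a uniquifier)
-- so the clustered index is aligned with the partition scheme.
CREATE TABLE dbo.AuditLog
(
    AuditId   bigint IDENTITY(1,1) NOT NULL,
    AuditDate datetime2 NOT NULL,
    Payload   nvarchar(4000) NULL,
    CONSTRAINT PK_AuditLog PRIMARY KEY CLUSTERED (AuditDate, AuditId)
) ON psAuditByMonth (AuditDate);

-- Roll off the oldest month: switch partition 1 into an empty archive table
-- (assumed to already exist with identical structure, on the same filegroup),
-- then archive or truncate that table at leisure.
ALTER TABLE dbo.AuditLog SWITCH PARTITION 1 TO dbo.AuditLog_Archive;

-- Partition elimination only happens when the query filters on AuditDate:
SELECT COUNT(*)
FROM dbo.AuditLog
WHERE AuditDate >= '2015-03-01' AND AuditDate < '2015-04-01';

The SWITCH makes rolling off a month essentially a metadata-only operation, which is why partitioning pays off for purging even when it does little for day-to-day OLTP query performance. But any query that doesn't filter on AuditDate still has to touch every partition.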
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning