April 3, 2011 at 8:25 am
I will have to respectfully disagree with the notion that partitioning is foremost a management feature rather than a performance feature.
There are certainly good management benefits to be had from partitioning a VLDB, not least of which is improving the options for backup and recoverability.
But I believe that partitioning is equally, if not more, important as a performance feature.
The Database Engine Tuning Advisor, for example, is a tool devoted to improving the performance of individual queries and workloads; it is not used for improving manageability.
Yet this tool will happily suggest a partitioning strategy for you in order to improve performance.
I do not believe that the various partitioning options available in the DETA would be there if the engineers behind SQL Server did not see partitioning as a performance feature.
Here is a very good Microsoft white paper on partitioning written by Kimberly Tripp. In the very first sentence of the paper she mentions performance.
http://msdn.microsoft.com/en-us/library/ms345146(v=sql.90).aspx
And another Microsoft article here entitled "Designing Partitions to Improve Query Performance".
http://msdn.microsoft.com/en-us/library/ms177411.aspx
In fact, a quick Google search on "sql server partitioning performance" yields a wealth of similar material, written in some cases by very well-respected authors.
From my own humble experience, partitioning a large table where queries are predicated on the partition key can improve query response time by a whole order of magnitude.
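To illustrate the point: when a query's predicate is on the partitioning column, the engine can skip every partition that cannot contain matching rows (partition elimination). A minimal sketch, assuming a hypothetical dbo.Sales table partitioned on a SaleYear column (names are illustrative, not from this thread):

```sql
-- Hypothetical: with dbo.Sales partitioned on SaleYear, this
-- predicate lets the engine read only the 2011 partition rather
-- than scanning the whole table (partition elimination).
SELECT SaleID, Amount
FROM dbo.Sales
WHERE SaleYear = 2011;
```

You can confirm elimination is happening by checking the Actual Partition Count in the query's execution plan.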
It's certainly possible to make mistakes when implementing partitioning. But with a development system to try things out on and some sample queries representative of the kind of thing being thrown at the production database, I think you have a pretty good chance of getting things right - or at least right enough.
If a typical query takes 2 minutes to run before partitioning, and 10 seconds to run afterwards, then you are clearly doing something right. That kind of performance gain at next to no cost is a real win for any organisation.
Perhaps an expensive consultant with an expert knowledge of partitioning, indexing and T-SQL could spend a few days getting that same response time down to 6 seconds, but for many organisations that small amount of gain at such a substantial cost is just not a worthwhile proposition.
April 3, 2011 at 8:45 am
Jako,
From what you've said, it sounds like your ideal partition key would be a combination of year and week. Unfortunately, multiple columns in a partition key are not allowed.
However one way round that is to create a persisted computed column combining the two columns and then partition on that.
Is there any mileage in doing this with year and week?
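As a rough sketch of that approach, assuming a hypothetical sales table with SaleYear and SaleWeek columns (all names here are made up for illustration):

```sql
-- Encode year and week into a single int, e.g. 2011 * 100 + 5 = 201105,
-- so a single-column partition function can be used.
CREATE PARTITION FUNCTION pfYearWeek (int)
AS RANGE RIGHT FOR VALUES (201101, 201102, 201103); -- one boundary per week

CREATE PARTITION SCHEME psYearWeek
AS PARTITION pfYearWeek ALL TO ([PRIMARY]);

CREATE TABLE dbo.Sales
(
    SaleID   int   NOT NULL,
    SaleYear int   NOT NULL,
    SaleWeek int   NOT NULL,
    Amount   money NOT NULL,
    -- Persisted computed column combining the two parts;
    -- it must be deterministic and PERSISTED to partition on it.
    YearWeek AS (SaleYear * 100 + SaleWeek) PERSISTED NOT NULL
) ON psYearWeek (YearWeek);
```

Queries would then need to filter on YearWeek (or on an expression the optimizer can match to it) to get partition elimination.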
April 4, 2011 at 1:12 am
Hi Andy
Well, the system holds sales records going back to 2003, split into year + week.
Most queries run only on the current and previous week, but we also have queries that compare year against year, etc.
So my first thought was to partition on year at least. I also have a filtered index on the week for 2011, so that should be sufficient for now, I guess.
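For anyone following along, a filtered index of that kind might look like the sketch below, assuming a sales table with SaleYear and SaleWeek columns (hypothetical names, not taken from Jako's actual schema):

```sql
-- Hypothetical filtered index: covers only the current year's rows,
-- so it stays small and cheap to maintain while serving the frequent
-- current-week / last-week queries.
CREATE NONCLUSTERED INDEX ix_Sales_SaleWeek_2011
ON dbo.Sales (SaleWeek)
INCLUDE (Amount)
WHERE SaleYear = 2011;
```

The trade-off is that the filter predicate is fixed, so an index like this needs to be recreated (or a new one added) when the year rolls over.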
Query response times have improved, for now I think it is good enough.
Thanks for all the help, learned a lot about partitioning the last week or so.
August 14, 2017 at 1:47 pm
Hi. I am also exploring partitioning of a large, very active database. A basic question about filegroups: is there a best practice for their organisation? For example, I was thinking one filegroup should represent one table and contain all the partition files for that table unless, of course, we are spanning disks.
Thanks in advance.
Paul
August 14, 2017 at 2:52 pm
This is a 6-year-old thread. Please post new questions in a new thread and give as much detail as possible.
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability