August 24, 2016 at 3:53 am
I have a database with around 120K records, file size just over 500MB. I keep reading about partitioning and have been considering trying it, but it seems a fair amount of work. Is it even worth considering for a database this size? It will continue to grow, but not in any extreme fashion - it has gone from 50K records to the current 120K over a period of about eight years, and there is no reason to expect any major growth spurts in the future. There is no strict archiving activity in it - anything recorded at any time during its life may be accessed by anyone at any time. The only somewhat reasonable feature by which to partition would be a catalog index letter, giving around 20 segments of hugely varying size. One segment would have almost 40K records; a few would have less than 100. Users tend to concentrate their work in one or several catalog groups - one user may have several catalogs, so I would group all the catalogs belonging to one user into a single partition (if that's possible).
Does this sound like it's even worth messing with? Current response time for most queries is sub-second, simultaneous users are generally no more than two or three, and even that is rare.
August 24, 2016 at 3:58 am
For me partitioning is about manageability first and performance second.
Your MDF is 500MB with 8 years of growth? I'm not so sure you need to go down this route... don't forget you need Enterprise Edition too.
August 24, 2016 at 4:03 am
I agree with you. Partitioning's main purpose is manageability of the database, not performance.
Thanks,
Durga Prasad.
August 24, 2016 at 4:05 am
BLOB_EATER (8/24/2016)
For me partitioning is about manageability first and performance second. Your MDF is 500MB with 8 years of growth? I'm not so sure you need to go down this route... don't forget you need Enterprise Edition too.
Aha, thanks. That settles it - I don't have Enterprise edition.
I was more thinking in terms of learning something potentially useful, and maybe getting some performance benefit in the process. But this is completely manageable as a whole, so I guess there would be little point in it even if I did have the Enterprise version.
Appreciate the input.
August 24, 2016 at 6:19 am
The biggest driving factor for seeing any kind of performance benefit (and, more frequently, for avoiding a performance degradation) is the ability to absolutely ensure that the overwhelming majority of the queries against a partitioned table filter on the partitioning key. If they don't, performance is extremely bad. Even before worrying about Enterprise (although that is clearly an issue), you need to ensure that you can meet this requirement. Many times this isn't possible, making partitioning a very poor choice when it comes to performance.
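As a rough illustration (the table, columns, and boundary values here are made up, not anything from your database), the difference looks something like this:

-- Hypothetical setup: partition a catalog table by its index letter.
CREATE PARTITION FUNCTION pf_CatalogLetter (char(1))
    AS RANGE RIGHT FOR VALUES ('E', 'J', 'O', 'T');  -- 5 partitions

CREATE PARTITION SCHEME ps_CatalogLetter
    AS PARTITION pf_CatalogLetter ALL TO ([PRIMARY]);

CREATE TABLE dbo.CatalogRecord
(
    RecordID      int          NOT NULL,
    CatalogLetter char(1)      NOT NULL,
    Description   varchar(200) NULL,
    CONSTRAINT PK_CatalogRecord
        PRIMARY KEY CLUSTERED (CatalogLetter, RecordID)
) ON ps_CatalogLetter (CatalogLetter);

-- Partition elimination: the filter includes the partitioning key,
-- so only one partition is touched.
SELECT RecordID, Description
FROM dbo.CatalogRecord
WHERE CatalogLetter = 'M' AND RecordID = 12345;

-- No elimination: the filter omits the partitioning key,
-- so every partition has to be searched.
SELECT RecordID, Description
FROM dbo.CatalogRecord
WHERE RecordID = 12345;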
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
August 24, 2016 at 7:41 am
pdanes (8/24/2016)
BLOB_EATER (8/24/2016)
For me partitioning is about manageability first and performance second. Your MDF is 500MB with 8 years of growth? I'm not so sure you need to go down this route... don't forget you need Enterprise Edition too.
Aha, thanks. That settles it - I don't have Enterprise edition.
I was more thinking in terms of learning something potentially useful, and maybe getting some performance benefit in the process. But this is completely manageable as a whole, so I guess there would be little point in it even if I did have the Enterprise version.
Appreciate the input.
Don't give up on the learning aspect of things, though. If you don't have one already, grab a copy of Developer Edition (same features as Enterprise) and if your employer approves, set it up on your work PC and grab a copy of your database to use for testing. Putter around with setting up partitioning against it to get a sense of what's involved, grab queries that are run against your production DB and run them against your sample (both pre- and post-partitioned) and see what impact that has on query plans and such.
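For instance, once the test copy is partitioned, a quick check like this (the table name is just a placeholder) shows how the rows actually landed across the partitions - handy when comparing the pre- and post-partitioning query plans:

-- Row counts per partition on the test copy (placeholder table name).
SELECT p.partition_number,
       p.rows
FROM sys.partitions AS p
WHERE p.object_id = OBJECT_ID('dbo.CatalogRecord')
  AND p.index_id IN (0, 1)   -- heap or clustered index only
ORDER BY p.partition_number;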
Never stop learning!
😀
August 24, 2016 at 7:50 am
What partitioning brings to the table are things like partition switching, partial backups, partition-level reindexing/compression, etc.
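Just to give a flavour of those operations (this assumes a table that is already partitioned on a partition scheme, as in the earlier sketch; all object names here are placeholders):

-- Rebuild and compress a single partition instead of the whole index.
ALTER INDEX PK_CatalogRecord ON dbo.CatalogRecord
    REBUILD PARTITION = 3
    WITH (DATA_COMPRESSION = PAGE);

-- Switch one partition out to an identically structured staging table
-- on the same filegroup (a metadata-only operation, no data movement).
ALTER TABLE dbo.CatalogRecord
    SWITCH PARTITION 3 TO dbo.CatalogRecord_Staging;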
But 100K rows and 500MB is small; you could fit that in an Excel spreadsheet.
"Do not seek to follow in the footsteps of the wise. Instead, seek what they sought." - Matsuo Basho
August 24, 2016 at 8:07 am
pdanes (8/24/2016)
BLOB_EATER (8/24/2016)
For me partitioning is about manageability first and performance second. Your MDF is 500MB with 8 years of growth? I'm not so sure you need to go down this route... don't forget you need Enterprise Edition too.
Aha, thanks. That settles it - I don't have Enterprise edition.
I was more thinking in terms of learning something potentially useful, and maybe getting some performance benefit in the process. But this is completely manageable as a whole, so I guess there would be little point in it even if I did have the Enterprise version.
Appreciate the input.
Just to weigh in a bit.
First, I agree. A monolithic table of 100K+ rows and 500MB of storage isn't worth breaking up, even when it comes to backups, restores, and index maintenance, especially since the performance of a correctly indexed monolithic table is usually better than anything you could get with Partitioned Tables or Partitioned Views.
On that latter part, and just as an FYI... you don't need the Enterprise Edition to do some great partitioning using Partitioned Views. Yes, it does have some caveats (as does Table Partitioning, especially when it comes to foreign keys pointing at it), but you can effectively and easily do the equivalent of "SWITCHing" large amounts of data in and out, with the added benefit of not having to concern yourself with "aligned" indexes, and that can be really important. Older, static partitions frequently benefit from different indexing than "current month" or "previous month" partitions (for example). You can't do that with Partitioned Tables without breaking things like "SWITCH", but you can with Partitioned Views (available in both Standard and the "Expensive" Editions). There is another caveat with IDENTITY columns in the Standard Edition, but there's a fairly easy workaround using an INSTEAD OF trigger on the view to handle that.
There are several other advantages to Partitioned Views over Partitioned Tables, IMHO, and I'm considering undoing Table Partitioning on a 500GB table (a WORM audit table, to be sure) in favor of Partitioned Views. I have a bit more testing to do before I go that route, though. I drank the Kool-Aid once (which led me to Table Partitioning) and don't want to make a similar mistake.
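Here's a bare-bones sketch of what I mean by a Partitioned View (the table names, date ranges, and columns are made up purely for illustration):

-- Member tables, one per range, each with a CHECK constraint on the
-- partitioning column (which must be part of the primary key).
CREATE TABLE dbo.Audit_2015
(
    AuditID   int          NOT NULL,
    AuditDate datetime     NOT NULL
        CONSTRAINT CK_Audit_2015
        CHECK (AuditDate >= '20150101' AND AuditDate < '20160101'),
    Detail    varchar(500) NULL,
    CONSTRAINT PK_Audit_2015 PRIMARY KEY CLUSTERED (AuditDate, AuditID)
);

CREATE TABLE dbo.Audit_2016
(
    AuditID   int          NOT NULL,
    AuditDate datetime     NOT NULL
        CONSTRAINT CK_Audit_2016
        CHECK (AuditDate >= '20160101' AND AuditDate < '20170101'),
    Detail    varchar(500) NULL,
    CONSTRAINT PK_Audit_2016 PRIMARY KEY CLUSTERED (AuditDate, AuditID)
);
GO

-- The view simply unions the members; the CHECK constraints let the
-- optimizer skip member tables that can't contain the requested range,
-- and each member can carry whatever indexes suit its own access pattern.
CREATE VIEW dbo.Audit
AS
SELECT AuditID, AuditDate, Detail FROM dbo.Audit_2015
UNION ALL
SELECT AuditID, AuditDate, Detail FROM dbo.Audit_2016;
GO

-- "Switching" data in or out is just adding or dropping a member table
-- and altering the view definition.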
--Jeff Moden
Change is inevitable... Change for the better is not.
August 24, 2016 at 8:12 am
BTW... if the table isn't critical to a DR restore, you might want to simply move it to another database and add a synonym for it. That would allow it to be backed up on a different schedule than the rest of the original database and allow restores to a Dev box to occur more quickly. If you keep an empty copy of the table in the original database as a standby, you could restore the original database, repoint the synonym at the empty table, and Bob's your uncle. I AM in the process of doing just that with some of my larger audit tables, which make up about 70% of the total volume of my main "money maker". This will allow for much quicker "get back in business" DR restores for me.
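A rough sketch of the idea (the database, table, and synonym names are just placeholders):

-- The audit table physically lives in its own database with its own
-- backup schedule; the main database sees it through a synonym.
CREATE SYNONYM dbo.AuditLog FOR AuditDB.dbo.AuditLog;

-- After a "get back in business" restore of just the main database,
-- repoint the synonym at an empty standby copy kept locally.
DROP SYNONYM dbo.AuditLog;
CREATE SYNONYM dbo.AuditLog FOR dbo.AuditLog_EmptyStandby;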
--Jeff Moden
Change is inevitable... Change for the better is not.
August 24, 2016 at 9:54 am
Jeff Moden (8/24/2016)
BTW... if the table isn't critical to a DR restore, you might want to simply move it to another database and add a synonym for it. That would allow it to be backed up on a different schedule than the rest of the original database and allow restores to a Dev box to occur more quickly. If you keep an empty copy of the table in the original database as a standby, you could restore the original database, repoint the synonym at the empty table, and Bob's your uncle. I AM in the process of doing just that with some of my larger audit tables, which make up about 70% of the total volume of my main "money maker". This will allow for much quicker "get back in business" DR restores for me.
A total restore of the entire database (and yes, I do test it regularly) takes on the order of 30 seconds. I do a full backup every night, a differential backup every hour, and transaction log backups at :15, :30 and :45. I've never had a failure yet, and I regularly 'restore' the most recent backup under another name to use as my development DB. This setup is not time-critical. One time a disk drive borked just as some of us were leaving town for several days for a conference. The solution was to shut down the server and deal with it when we came back. The remaining users simply did other work (of which there is plenty) for the few days I was gone. I came back, plugged in a new drive, let the RAID array rebuild itself overnight, and the next morning everything was like new. This is a very low-key environment. Correctness of data and having me available for quickies far outweigh any concern over things like percentage of uptime.
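For anyone curious, the 'restore under another name' part is along these lines (database, logical file, and path names are all placeholders):

-- Restore last night's full backup as a separate development copy.
RESTORE DATABASE SourceDB_Dev
FROM DISK = N'D:\Backups\SourceDB_Full.bak'
WITH MOVE 'SourceDB'     TO N'D:\Data\SourceDB_Dev.mdf',
     MOVE 'SourceDB_log' TO N'D:\Data\SourceDB_Dev.ldf',
     REPLACE,
     RECOVERY;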