March 11, 2013 at 2:59 pm
We have a lookup table named GeographicLookup with a primary key called gisGeorgraphicLookup.
It now has 2 million records, with about 550,000 rows per year ranging from 2010 to 2014.
Could this many records reduce performance or slow down lookups?
If we only use the most recent couple of years of data, should we archive the other records into another table?
Or do you have any other recommendations? We are using SQL Server 2008 Standard Edition.
March 11, 2013 at 3:38 pm
Yes - many records can impact performance.
If your primary activity is only against the most recent couple of years, then I would look to archive the data into a different table or database.
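One common pattern for that kind of archiving is an insert-then-delete inside a single transaction. A minimal sketch, assuming a year column named LookupYear and an archive table with the same shape (both names are assumptions, not from the original post):

```sql
-- Hedged sketch: assumes GeographicLookup has a year column named LookupYear
-- and that GeographicLookup_Archive already exists with a matching schema.
BEGIN TRANSACTION;

INSERT INTO dbo.GeographicLookup_Archive (gisGeorgraphicLookup, LookupYear /* , other columns */)
SELECT gisGeorgraphicLookup, LookupYear /* , other columns */
FROM   dbo.GeographicLookup
WHERE  LookupYear < 2012;

DELETE FROM dbo.GeographicLookup
WHERE  LookupYear < 2012;

COMMIT TRANSACTION;
```

For a table this size you may want to delete in batches rather than one big transaction, to keep the log and locking under control.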
Jason...AKA CirqueDeSQLeil
_______________________________________________
I have given a name to my pain...MCM SQL Server, MVP
SQL RNNR
Posting Performance Based Questions - Gail Shaw
Learn Extended Events
March 12, 2013 at 10:08 am
In general terms, a couple of million rows is small, and with efficient indexes there would be no reason to do anything. Generally I'd probably partition, but I don't think that's an option for Standard Edition.
If you're having performance issues then you might need to do some work. I work with tables with several hundreds of millions of rows - get your indexes right and there are no worries.
If you don't actually want the data any more, then either remove it or put it in another table/database as previously suggested.
The GrumpyOldDBA
www.grumpyolddba.co.uk
http://sqlblogcasts.com/blogs/grumpyolddba/
March 12, 2013 at 10:44 am
It is Enterprise only, though you can still use a multi-table setup (such as a partitioned view) instead of partitioning a big table... though I can only see that being worse if the current DBA quits and the next one has to figure it all out.
Here are two good links for more information.
http://www.mssqltips.com/sqlservertip/1914/sql-server-database-partitioning-myths-and-truths/
http://www.brentozar.com/archive/2008/06/sql-server-partitioning-not-the-answer-to-everything/
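For reference, the multi-table setup mentioned above is usually built as a partitioned view on Standard Edition. A rough sketch, with table and column names that are assumptions based on the original post:

```sql
-- Hedged sketch of a partitioned view: one physical table per year,
-- each with a CHECK constraint so the optimizer can skip tables
-- that cannot match the year predicate.
CREATE TABLE dbo.GeographicLookup_2013 (
    gisGeorgraphicLookup INT NOT NULL PRIMARY KEY,
    LookupYear           INT NOT NULL CHECK (LookupYear = 2013)
    -- , other columns
);

CREATE TABLE dbo.GeographicLookup_2014 (
    gisGeorgraphicLookup INT NOT NULL PRIMARY KEY,
    LookupYear           INT NOT NULL CHECK (LookupYear = 2014)
    -- , other columns
);
GO

CREATE VIEW dbo.GeographicLookupAll
AS
SELECT * FROM dbo.GeographicLookup_2013
UNION ALL
SELECT * FROM dbo.GeographicLookup_2014;
```

Queries against the view that filter on LookupYear will only touch the tables whose CHECK constraints allow the value, which is why the constraints matter.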
March 14, 2013 at 1:29 pm
Not sure if it would increase performance or not, but another option is a filtered index (possibly one for each year?). You would definitely need to test in DEV and QA before moving it into production, but it's another option to archiving the data off to another table (same DB or otherwise).
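A filtered index along those lines might look like this. This is a sketch only; the LookupYear column name is an assumption about the poster's schema:

```sql
-- Hedged sketch: a filtered index covering only the most recent year.
-- Filtered indexes are available in SQL Server 2008, including Standard Edition.
CREATE NONCLUSTERED INDEX IX_GeographicLookup_2014
ON dbo.GeographicLookup (gisGeorgraphicLookup)
WHERE LookupYear = 2014;
```

Note that the query's WHERE clause has to match the filter predicate for the optimizer to use the index.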
For performance Issues see how we like them posted here: How to Post Performance Problems - Gail Shaw
Need to Split some strings? Jeff Moden's DelimitedSplit8K
Jeff Moden's Cross tab and Pivots Part 1
Jeff Moden's Cross tab and Pivots Part 2
March 14, 2013 at 10:13 pm
If you have bad code, partitioning won't help performance at all and it won't help much even if you have good code. If you have bad code, archiving data will help but the best thing to do would be to make sure your code follows all the best practices, uses SARGable criteria so that indexes can be used, and that proper indexing isn't only present but is also maintained along with the related statistics.
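To illustrate the SARGable point: a predicate that wraps the column in a function forces a scan, while a range on the bare column allows an index seek. The LookupDate column below is hypothetical, used only for illustration:

```sql
-- Non-SARGable: the function on the column hides it from the index,
-- so the optimizer must scan every row.
SELECT gisGeorgraphicLookup
FROM   dbo.GeographicLookup
WHERE  YEAR(LookupDate) = 2014;

-- SARGable rewrite: a half-open range on the bare column
-- lets the optimizer seek on an index over LookupDate.
SELECT gisGeorgraphicLookup
FROM   dbo.GeographicLookup
WHERE  LookupDate >= '20140101'
  AND  LookupDate <  '20150101';
```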
If you do that, then it's not going to matter how big the table gets. The real purpose of partitioning has nothing to do with query performance. It has to do with how long nightly maintenance of the index takes.
The only good thing about archiving is that it makes backups and restores shorter should anything ever go wrong.
Performance is in the code... if it's not there, there's not much you can do. You might be able to double the speed of bad code by archiving. You might be able to get the bad code to run 60 times faster if you fix it.
--Jeff Moden
Change is inevitable... Change for the better is not.