September 24, 2010 at 10:57 am
Hi,
I have a database that is losing performance very quickly when the db size grows large. If I run my insertion benchmark on a clean database, I can get 80 insertions per second. However, if I run it on a db that has about 25 GB of data already in it, it will start off at around 70 insertions per second, drop to 50 per second after 15 minutes, 40 per second after 30 minutes, and 20 per second after about 1.5 hours. The DB grew approx 2GB in the 1.5 hour period.
Thanks!
September 24, 2010 at 11:09 am
Firstly, don't rely on autogrow. If you know the DB is growing, manually increase the size of the files to a sensible value.
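For example, a one-off manual grow might look like this. The database and logical file names here are placeholders; check your own with `SELECT name, size/128 AS size_mb FROM sys.database_files;` and substitute accordingly:

```sql
-- Grow the data file once, up front, instead of paying for many small
-- autogrow events during the insert workload.
-- "MyDatabase" and "MyDatabase_Data" are hypothetical names.
ALTER DATABASE MyDatabase
MODIFY FILE (NAME = MyDatabase_Data, SIZE = 40960MB);  -- e.g. 40 GB
```

The target size is workload-dependent; the point is to size the file once to cover expected growth rather than letting autogrow interrupt inserts repeatedly.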
These may help in finding the problems.
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
September 27, 2010 at 5:15 pm
Yes, make sure you are not waiting on the file to grow. Create the file at a size that will accommodate all of your data and allow some room for new data. The same goes for the transaction log: make sure it is large enough to handle the activity taking place between transaction log backups (full recovery model).
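To see whether the log is sized appropriately for the activity between log backups, you can watch log space usage (a quick check, run against your own instance):

```sql
-- Reports log file size and percent of log space in use for every database.
-- If "Log Space Used (%)" climbs toward 100 between log backups, the log is
-- too small for the workload (or log backups are too infrequent).
DBCC SQLPERF(LOGSPACE);
```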
You don't mention indexes or index maintenance. If you are inserting into tables that have clustered and non-clustered indexes, you might need to think about leaving room for the new inserts using padding (fill factor), and frequently rebuilding the indexes to reduce fragmentation and restore that free space for new rows.
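A sketch of that kind of rebuild, with placeholder index and table names. Note that a lowered fill factor mainly helps when inserts land in the middle of the index; strictly ascending inserts at the end of a clustered index don't cause page splits and won't benefit:

```sql
-- Rebuild the index leaving ~10% free space on each leaf page so
-- out-of-order inserts have room and cause fewer page splits.
-- "IX_MyIndex" and "dbo.MyTable" are hypothetical names.
ALTER INDEX IX_MyIndex ON dbo.MyTable
REBUILD WITH (FILLFACTOR = 90);
```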
September 28, 2010 at 3:59 am
If you *are* relying on autogrow, the default setting (increase data file size by 1 MB every time) is ludicrously small for pretty much any non-trivial database...increasing that (once you've got the database at a suitable size for the number of transactions occurring) would help.
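Changing the growth increment is a one-line alter; again, the database and logical file names are placeholders. You can first inspect the current settings with `SELECT name, growth, is_percent_growth FROM sys.master_files;`:

```sql
-- Replace the tiny default autogrow increment with a fixed, larger step,
-- as a safety net behind manual pre-sizing.
-- "MyDatabase" and "MyDatabase_Data" are hypothetical names.
ALTER DATABASE MyDatabase
MODIFY FILE (NAME = MyDatabase_Data, FILEGROWTH = 512MB);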
October 5, 2010 at 11:52 am
October 6, 2010 at 7:20 am
Jake50515 (9/24/2010)
I have a database that is losing performance very quickly when the db size grows large. If I run my insertion benchmark on a clean database, I can get 80 insertions per second. However, if I run it on a db that has about 25 GB of data already in it, it will start off at around 70 insertions per second, drop to 50 per second after 15 minutes, 40 per second after 30 minutes, and 20 per second after about 1.5 hours. The DB grew approx 2GB in the 1.5 hour period.
Tell us more about your "insertion benchmark" table.
Do you have a clustered index on your "insertion benchmark" table?
Is the "insertion benchmark" process inserting rows out of order (relative to the clustered index column)?
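One way to see whether out-of-order inserts are fragmenting the table's indexes is the physical-stats DMV (the table name here is a placeholder for whatever the benchmark inserts into):

```sql
-- Fragmentation of every index on the benchmark table; a steadily rising
-- avg_fragmentation_in_percent during the run points to page splits from
-- out-of-order inserts. "dbo.MyBenchmarkTable" is a hypothetical name.
SELECT i.name,
       ips.index_type_desc,
       ips.avg_fragmentation_in_percent,
       ips.page_count
FROM sys.dm_db_index_physical_stats(
         DB_ID(), OBJECT_ID('dbo.MyBenchmarkTable'),
         NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id
 AND i.index_id  = ips.index_id;
```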
_____________________________________
Pablo (Paul) Berzukov
Author of Understanding Database Administration available at Amazon and other bookstores.
Disclaimer: Advice is provided to the best of my knowledge but no implicit or explicit warranties are provided. Since the advisor explicitly encourages testing any and all suggestions in a non-production test environment, the advisor should not be held liable or responsible for any actions taken based on the given advice.