January 31, 2005 at 3:11 pm
Hi,
I'm currently using SQL Server 2000 and running the following test.
I have a table, let's just call it Performance, with about 350,000 records in it.
I'm creating a DTS package that has two items: a connection and an Execute SQL task.
The SQL task runs a delete statement that removes any records older than 7 days.
I'm also testing a variation of this delete that uses a loop to delete 1,000 records at a time until all the qualifying records are gone.
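Roughly, the two variations look like this (just a sketch; I'm making up the date column name, call it RecordDate):

-- Plain version: one statement deletes everything older than 7 days
DELETE FROM dbo.Performance
WHERE RecordDate < DATEADD(day, -7, GETDATE())

-- Looped version: SET ROWCOUNT caps each DELETE at 1000 rows
-- (SQL Server 2000 doesn't allow TOP in a DELETE)
SET ROWCOUNT 1000
WHILE 1 = 1
BEGIN
    DELETE FROM dbo.Performance
    WHERE RecordDate < DATEADD(day, -7, GETDATE())
    IF @@ROWCOUNT = 0 BREAK
END
SET ROWCOUNT 0 -- reset the cap for anything that runs afterwards

The looped version keeps each transaction small, so the log doesn't grow as much during a big delete.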
Here is where my problem lies. I'm using a desktop machine with very little activity. I need to time how long it takes to delete these records with different variations of the SQL. After I delete the records, I put them back by transferring them from a "backup" table I've created.
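(In plain T-SQL, the transfer back amounts to something like the following, assuming Performance_backup has the same column layout as Performance:

INSERT INTO dbo.Performance
SELECT * FROM dbo.Performance_backup

I actually do it through the DTS Wizard, but the effect is the same.)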
Well, after I transfer the records back and then run the delete again, the records are deleted faster, like, WAY faster, even though the SQL doesn't change.
For example, I run the DTS package to delete these records out of Performance and it takes 7 minutes (my machine is VERY slow/old). Then I transfer the records back through the DTS Wizard from the Performance_backup table.
Then I run the DTS package to delete again, and this time it only takes 2 minutes! The next time it takes only 20 seconds?
What's going on here? I can only assume this is some kind of inherent performance optimization within SQL Server, but it's a tad frustrating, as it's not really letting me test my scenario.
January 31, 2005 at 3:46 pm
Read the BOL topic on DBCC DROPCLEANBUFFERS.
On the first test, SQL Server likely has significant disk I/O to perform to bring the pages into memory. On the second and subsequent tests, the pages are already in memory as a result of the restore from the backup table. Use the above DBCC command to simulate a cold start with nothing cached.
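For example, before each timed run:

CHECKPOINT            -- write dirty pages to disk so the buffer drop is complete
DBCC DROPCLEANBUFFERS -- empty the buffer cache, forcing reads from disk
DBCC FREEPROCCACHE    -- optional: also clear cached query plans

(The CHECKPOINT and DBCC FREEPROCCACHE lines are additions on my part; DROPCLEANBUFFERS only removes clean buffers, so issue a CHECKPOINT first.)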
January 31, 2005 at 3:52 pm
Thank you very much. I will check BOL for more info.