April 4, 2017 at 11:59 am
Hi all,
I have read a lot about DELETE performance (deleting only the top N rows, deleting using a unique key). My problem at the moment: when I run a DELETE on a large table it takes 20 minutes, but if I run an insert and then delete again with the same number of rows, the DELETE runs much faster, around 10 minutes. I don't understand why the delete performance changes. Is it the SQL cache?
April 4, 2017 at 12:35 pm
I would venture a guess that statistics were updated so the delete was easier or that the values you were deleting were indexed better the second time. But there could be locks, blocking, waits, other DB activity, other server activity, other disk activity, etc. There are a LOT of factors. I've run the exact same select query 10 times and had it range from 1 second to 15 minutes due to blocking and server/disk activity.
When you say that you ran the insert, I am interpreting it to mean that you delete the top 10,000 records (I'm just estimating) and it takes 20 minutes, then you re-insert those exact same 10,000 records and delete them again, correct?
If so, I would be willing to bet that you have auto-update statistics turned on and that insert was large enough that it caused the statistics to update which should cause all of your queries against that table to perform faster.
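If you want to check that theory, a minimal sketch (dbo.YourLargeTable is a placeholder - swap in your real table name) that shows each statistics object on the table, when it was last updated and, on versions that expose sys.dm_db_stats_properties, how many modifications have accumulated since:

-- List statistics on the table with their last update time and modification counter
SELECT  s.name                               AS stats_name,
        STATS_DATE(s.object_id, s.stats_id)  AS last_updated,
        sp.rows,
        sp.rows_sampled,
        sp.modification_counter
FROM    sys.stats AS s
CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp
WHERE   s.object_id = OBJECT_ID('dbo.YourLargeTable');

If last_updated jumps to just after your big insert, auto-update statistics is the likely explanation.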
What you could try instead of doing your method (delete, insert, delete) would be:
SET STATISTICS IO ON
SET STATISTICS TIME ON
BEGIN TRANSACTION
<DELETE COMMAND>
ROLLBACK TRANSACTION
SET STATISTICS IO OFF
SET STATISTICS TIME OFF
No data changes that way, and you can see the IO and timing of things. Then update your statistics and see if you get similar results or faster.
You should also turn on the actual execution plan and compare it in both examples.
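Roughly, that step could look like the following sketch (again with the placeholder table name, and reusing the <DELETE COMMAND> placeholder from above). SET STATISTICS XML ON is the T-SQL way to capture the actual plan if you are not using the SSMS toolbar button:

-- Manually refresh statistics; FULLSCAN is optional but removes sampling as a variable
UPDATE STATISTICS dbo.YourLargeTable WITH FULLSCAN;

-- Capture the actual execution plan for the delete without keeping the change
SET STATISTICS XML ON;
BEGIN TRANSACTION;
    <DELETE COMMAND>
ROLLBACK TRANSACTION;
SET STATISTICS XML OFF;

Compare the plan and the IO/time output from this run against the first slow run.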
The above is all just my opinion on what you should do.
As with all advice you find on a random internet forum - you shouldn't blindly follow it. Always test on a test server to see if there are negative side effects before making changes to live!
I recommend you NEVER run "random code" you found online on any system you care about UNLESS you understand and can verify the code OR you don't care if the code trashes your system.
April 4, 2017 at 3:06 pm
If you have sufficient memory and poor IO then buffer pool cache could be some of the reason.
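One rough way to see how much of the database is sitting in the buffer pool is the sketch below (note that scanning this DMV can itself be slow on servers with a lot of memory):

-- Pages of the current database cached in the buffer pool, and the rough size in MB
SELECT  COUNT(*)             AS cached_pages,
        COUNT(*) * 8 / 1024  AS cached_mb
FROM    sys.dm_os_buffer_descriptors
WHERE   database_id = DB_ID();

Run it before the first delete and before the second; a big difference suggests the second run was served mostly from memory.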
Best is to repeatedly run sp_whoisactive during any long-running processes to see what is happening with the query and the amount of work it is doing, blocking, etc.
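sp_WhoIsActive is Adam Machanic's free stored procedure (whoisactive.com); once installed, a minimal call from another session while the delete is running looks like this (parameter names are as documented for the procedure - check the version you install):

-- Snapshot of active sessions, their waits, blocking and current statement
EXEC sp_WhoIsActive;

-- Same, but also pull the query plan for each active request
EXEC sp_WhoIsActive @get_plans = 1;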
Best,
Kevin G. Boles
SQL Server Consultant
SQL MVP 2007-2012
TheSQLGuru on googles mail service
April 5, 2017 at 7:14 am
Thanks for your suggestions, I will try bmg002's suggestion.
In my case I didn't have waits, deadlocks and so on. I have a development server (10 GB RAM) where I restored the database, all my tests were executed on the same server and the same database, and only I was connected to that database. When I said that I ran the insert again, I was talking about 1.4 million records; this is a reporting table that is rebuilt every day.