October 3, 2003 at 12:03 pm
(SQL Server 2000)
Using a Profiler trace, I can see that my DELETE is taking the vast majority of the time in my entire sequence of operations. Basically, via ADO.NET, I check for the presence of a record in a table, then I run "DELETE FROM User WHERE id=123", which cascade-deletes all the related data through 4 other tables (around 40 records total for each delete).
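For context, a minimal sketch of the kind of schema involved (only one child table shown, and the column names are simplified stand-ins for my real ones):

    -- Parent table; in reality 4 child tables hang off it
    CREATE TABLE [User] (
        id INT NOT NULL PRIMARY KEY
    )

    -- One child table with a cascading foreign key
    CREATE TABLE UserDetail (
        detailId INT NOT NULL PRIMARY KEY,
        userId   INT NOT NULL REFERENCES [User] (id) ON DELETE CASCADE
    )

    -- Each of the 18000 deletes looks like this and fans out to the children
    DELETE FROM [User] WHERE id = 123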
Each delete is doing over 5000 reads and taking about 500ms. Unfortunately, I am deleting 18000 records this way, one by one. I MUST do it in this fashion - one at a time - so that can't change.
What I need to do is decrease the amount of time each DELETE is taking. I've tried some index changes, but to no avail.
Thinking I might do better, I wrote my own cascade-delete stored proc, removed the ON DELETE clauses from all the constraints, and tried again. SQL Server's built-in cascade still did better - and even that is not good enough.
Any ideas on how to speed up a cascade delete? Indexes? Some guru trick? Or am I out of luck?
October 6, 2003 at 6:46 am
Just a couple of basic questions:
- Are all the indexes participating in your primary-foreign key pairs in place and well defined? (See the sketch after this list.)
- Do you have triggers defined on the participating tables?
If your answers to those questions are OK, then maybe you should post the DDL of your tables here, including the triggers, indexes, and dependencies, and mark the table where you initiate the original delete.
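As an illustration (table and column names are hypothetical), every foreign key column on the child side should have a supporting index; without one, each cascade delete has to scan the child table to find the referencing rows:

    -- Supporting index on the child's foreign key column
    CREATE INDEX IX_UserDetail_userId ON UserDetail (userId)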
Bye
Gabor
October 13, 2003 at 9:52 am
Looks like your tables do internal restructuring after the delete. What are your indexes and their fillfactors? I would experiment with different values for the fillfactor on the clustered index. Maybe you can avoid data reshuffling after each single delete and allow more "holes" in the table.
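On SQL Server 2000 you can rebuild an index with a new fillfactor using DBCC DBREINDEX, something like this (the table and index names are placeholders):

    -- Rebuild the clustered index, leaving 30% free space on each page
    DBCC DBREINDEX ('UserDetail', 'PK_UserDetail', 70)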
Another idea: put tables which could be deleted in parallel on different physical devices.
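For example (the database, filegroup, and path names here are invented), you could place one of the child tables on a filegroup backed by a separate disk:

    -- Add a filegroup with a file on another physical disk
    ALTER DATABASE MyDb ADD FILEGROUP FG2
    ALTER DATABASE MyDb ADD FILE
        (NAME = MyDb_FG2, FILENAME = 'E:\Data\MyDb_FG2.ndf')
        TO FILEGROUP FG2

    -- Create the child table on that filegroup
    CREATE TABLE UserHistory (
        historyId INT NOT NULL PRIMARY KEY,
        userId    INT NOT NULL
    ) ON FG2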
If nothing helps, mark the entries as "invalid" and delete them in an overnight batch job.
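Something like this (the isValid column is an invented example):

    -- During the day: a cheap flag update instead of a cascade delete
    UPDATE [User] SET isValid = 0 WHERE id = 123

    -- Overnight batch: one set-based delete cascades all flagged rows at once
    DELETE FROM [User] WHERE isValid = 0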
Kay
October 14, 2003 at 9:46 am
I experimented with the indexes, then I tried the "fill factor" suggestion. I needed to do a little research to see what the side effects of changing it from the default would be; the SQL Server docs mention some implications. I tried some low values and got performance that was actually a little better, but nothing groundbreaking. Good suggestion, however! I have a web service reading the data, and reads were a bit slower after my changes. Not too bad, though.
Anyway, the last suggestion is what I figured I'd have to do anyway. Basically, within my transaction, I mark the main record as "invalid", do my inserts, and mark the new main record as "valid". Done. Repeat thousands of times.
Then I do only one big DELETE at the end for the records marked as invalid. It is fifty times faster, which is expected of course: I'm doing one big set-based delete rather than a few thousand cascaded deletes on the fly. A better design this way - not quite as elegant, but actually more solid. Makes more sense.
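Roughly what it looks like now (simplified, with invented names and example ids):

    -- Repeated thousands of times, each inside its own transaction
    DECLARE @oldId INT, @newId INT
    SELECT @oldId = 123, @newId = 456

    BEGIN TRAN
        UPDATE [User] SET isValid = 0 WHERE id = @oldId
        -- ... inserts for the replacement record go here ...
        UPDATE [User] SET isValid = 1 WHERE id = @newId
    COMMIT TRAN

    -- Done once at the very end: one set-based delete, cascades and all
    DELETE FROM [User] WHERE isValid = 0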
Thanks for the suggestions. I've learned quite a lot regarding performance tuning in just the past week...