June 1, 2012 at 4:18 am
Hi All,
I have 229 million records in a table. We have a procedure that updates the data in the table; there are 4 UPDATE statements running inside the procedure. It takes around 3 days to complete. How can I optimize the script?
1. I have a clustered index on the table
2. There are 3 non-clustered indexes on the table
3. I used database page compression
There are no missing indexes on the table. I tried another approach, i.e. a batch update using a WHILE loop, but that also takes a long time. I am not allowed to partition the table. Is there any other approach that would get the procedure down to one or two hours?
Regards,
Varun R
June 1, 2012 at 5:01 am
Assuming these updates are based on a particular period of time, say the last 6 months of data, but the query is scanning older data as well, then partitioning is probably the way to go.
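A rough sketch of what that could look like (pfMonth, psMonth and LoadDate are just placeholder names, and the boundary dates are examples only):

-- Hypothetical monthly range partitioning so updates only touch recent partitions
CREATE PARTITION FUNCTION pfMonth (date)
AS RANGE RIGHT FOR VALUES ('2012-01-01', '2012-02-01', '2012-03-01');

CREATE PARTITION SCHEME psMonth
AS PARTITION pfMonth ALL TO ([PRIMARY]);

-- The table's clustered index would then be rebuilt on psMonth(LoadDate).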
Also, in most cases I've seen, batch updates are driven by a WHERE clause that runs for hours selecting the rows that need updating. If that's the case here, it would be a good idea to check how many rows it is fetching and whether it does an index scan.
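For example, something like this (dbo.BigTable and the predicate are placeholders — substitute the real WHERE clause from the update):

-- Run the update's predicate as a plain SELECT to see row counts and reads
SET STATISTICS IO ON;
SET STATISTICS TIME ON;

SELECT COUNT(*)
FROM dbo.BigTable
WHERE SomeDate >= '2012-01-01';   -- same predicate the UPDATE uses

-- Then check the actual execution plan for a scan vs. a seek.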
Cheers
Sat
Cheer Satish 🙂
June 1, 2012 at 5:25 am
Partitioning is not for performance.
Please post query, table definitions, index definitions and execution plan, as per http://www.sqlservercentral.com/articles/SQLServerCentral/66909/
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
June 1, 2012 at 5:31 am
This is a kind of data warehouse table; we create reports after running this procedure. It is doing an index scan too. There is an auto-increment column named ID, and I did the batch update (100,000 rows at a time) by putting a WHERE condition on the ID column. In the execution plan it is updating the clustered index, and that's also taking time.
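Roughly what the loop looks like, with the table and SET clause reduced to placeholders:

-- Sketch of the batched update; dbo.BigTable and SomeCol are made-up names
DECLARE @BatchSize int = 100000;
DECLARE @MinID int, @MaxID int;

SELECT @MinID = MIN(ID), @MaxID = MAX(ID) FROM dbo.BigTable;

WHILE @MinID <= @MaxID
BEGIN
    UPDATE dbo.BigTable
    SET SomeCol = 'new value'            -- the real update logic goes here
    WHERE ID >= @MinID
      AND ID <  @MinID + @BatchSize;

    SET @MinID = @MinID + @BatchSize;    -- move to the next ID range
END;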
June 1, 2012 at 7:02 am
Have you tried reducing the 4 updates to a single one, possibly even using the MERGE statement to do any needed insert/delete actions in one single pass too?
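Something along these lines, untested and with made-up table and column names:

-- Hypothetical: fold the separate passes into one MERGE against the source data
MERGE dbo.BigTable AS tgt
USING dbo.SourceData AS src
    ON tgt.ID = src.ID
WHEN MATCHED THEN
    UPDATE SET tgt.Col1 = src.Col1,
               tgt.Col2 = src.Col2,
               tgt.Col3 = src.Col3,
               tgt.Col4 = src.Col4
WHEN NOT MATCHED BY TARGET THEN
    INSERT (ID, Col1, Col2, Col3, Col4)
    VALUES (src.ID, src.Col1, src.Col2, src.Col3, src.Col4)
WHEN NOT MATCHED BY SOURCE THEN
    DELETE;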
And what sort of updates are you doing? Does your server need to grow your database or tempdb to perform your updates? If so, you could pre-allocate some more space to the database(s) to make things speedier.
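For instance (database name, logical file name and size are all placeholders):

-- Grow the data file up front instead of letting autogrow interrupt the update
ALTER DATABASE MyWarehouseDb
MODIFY FILE (NAME = N'MyWarehouseDb_Data', SIZE = 500GB);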
Removing all indexes (but leaving the primary key) from the table(s) before running the update and recreating them afterwards might help a great deal too.
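For example, per non-clustered index (names here are placeholders) — just don't disable the clustered index, or the table becomes inaccessible:

-- Hypothetical names; repeat for each non-clustered index
ALTER INDEX IX_BigTable_SomeCol ON dbo.BigTable DISABLE;

-- ... run the big update here ...

ALTER INDEX IX_BigTable_SomeCol ON dbo.BigTable REBUILD;   -- rebuild re-enables it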
June 1, 2012 at 7:31 am
Per Gail's request above, please post the query, table definitions, index definitions and execution plan, as per http://www.sqlservercentral.com/articles/SQLServerCentral/66909/
Please understand that we can't see from here what you see there, unless you want to grant us VPN access and sys admin rights to your server (which I'm pretty sure you can't and won't). You have to help us help you.
June 1, 2012 at 3:40 pm
Varun R (6/1/2012)
Hi All, I have 229 million records in a table. We have a procedure that updates the data in the table; there are 4 UPDATE statements running inside the procedure. It takes around 3 days to complete. How can I optimize the script?
1. I have a clustered index on the table
2. There are 3 non-clustered indexes on the table
3. I used database page compression
There are no missing indexes on the table. I tried another approach, i.e. a batch update using a WHILE loop, but that also takes a long time. I am not allowed to partition the table. Is there any other approach that would get the procedure down to one or two hours?
Regards,
Varun R
How many rows are you trying to update with each UPDATE statement?
--Jeff Moden
Change is inevitable... Change for the better is not.
June 3, 2012 at 7:50 pm
Thanks for all your answers. I will try to give more information in the coming days.
June 19, 2012 at 5:59 pm
Varun R (6/3/2012)
Thanks for all your answers. I will try to give more information in the coming days.
Just checking. You all set?
--Jeff Moden
Change is inevitable... Change for the better is not.
June 20, 2012 at 1:12 am
No Jeff... still working on it.