September 12, 2011 at 7:50 am
Ninja's_RGR'us (9/12/2011)
My method will be far more precise than this. That code keeps a history of previous work, so after a few runs you can check the history to see the average run time for that particular index and then adjust for the current page count.
Assuming no changes in resources, you should get an estimate accurate to within a few seconds.
Then you wouldn't blow the window by more than a few seconds (if at all).
Also, you can sort the work order and work only on the most critical / fragmented tables.
Whoops, sorry Ninja, I was replying to Gianluca Sartori originally and lost track of who had answered my reply. :ermm:
Rich
September 12, 2011 at 7:52 am
Still good content to read! 🙂
September 12, 2011 at 7:57 am
rmechaber (9/12/2011)
Gianluca Sartori (9/9/2011)
You can start the job with sp_start_job and wait for its completion in a loop. If the job exceeds the timeout you set, stop it with sp_stop_job.
But don't you have to wait for a rollback if it's part-way completed then?
Rich
I believe this is the case. I'm not sure how much of a rollback occurs; it depends on where the process is. If it's still allocating the new index, it should be able to just drop the new pages.
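For anyone wanting to try the approach Gianluca describes, here's a minimal sketch. The job name 'IndexMaintenance' and the 30-minute window are illustrative assumptions, not from this thread; the loop polls msdb.dbo.sysjobactivity to see whether the job is still running and stops it if the deadline passes.

```sql
-- Assumed job name and timeout; adjust for your environment.
DECLARE @deadline datetime = DATEADD(MINUTE, 30, GETDATE());
DECLARE @running  int      = 1;

EXEC msdb.dbo.sp_start_job @job_name = N'IndexMaintenance';

WHILE @running = 1 AND GETDATE() < @deadline
BEGIN
    WAITFOR DELAY '00:00:30';  -- poll every 30 seconds

    -- A started-but-not-stopped row in sysjobactivity means the job is still executing.
    SELECT @running = CASE WHEN EXISTS (
        SELECT 1
        FROM msdb.dbo.sysjobactivity ja
        JOIN msdb.dbo.sysjobs j ON j.job_id = ja.job_id
        WHERE j.name = N'IndexMaintenance'
          AND ja.start_execution_date IS NOT NULL
          AND ja.stop_execution_date IS NULL
    ) THEN 1 ELSE 0 END;
END;

IF @running = 1
    -- Note: stopping mid-rebuild triggers the rollback discussed above.
    EXEC msdb.dbo.sp_stop_job @job_name = N'IndexMaintenance';
```

As noted in the replies here, sp_stop_job on a partially completed rebuild will roll back the in-flight work, so the rollback time should be factored into the window.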