April 23, 2012 at 8:54 am
Hello all
Actually, all I did was create a loop that reads the data 10,000 records at a time. That allows me to use one of the main indexes, so it was much faster than a single select (especially since I was creating really big logs because of that...)
Do you need me to post some code?
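In case it helps while the actual code isn't shown in this part of the thread, a minimal sketch of that kind of batched read could look like the following; the table, key column, and target table names (dbo.BigTable, ID, dbo.BigTable_Extract) are hypothetical placeholders, not taken from Ricardo's system.

-- Sketch only: walk the table in 10,000-row slices, seeking on the key
-- column each pass instead of scanning everything in one statement.
DECLARE @BatchSize INT    = 10000;
DECLARE @LastID    BIGINT = 0;

WHILE 1 = 1
BEGIN
    INSERT INTO dbo.BigTable_Extract (ID, SomeColumn)
    SELECT TOP (@BatchSize) ID, SomeColumn
    FROM   dbo.BigTable
    WHERE  ID > @LastID          -- lets the optimizer seek the index on ID
    ORDER  BY ID;

    IF @@ROWCOUNT = 0 BREAK;     -- nothing left to copy

    -- advance the watermark to the last key copied so far
    SELECT @LastID = MAX(ID) FROM dbo.BigTable_Extract;
END;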
April 23, 2012 at 10:20 am
ricardo_chicas (4/23/2012)
Hello all
Actually, all I did was create a loop that reads the data 10,000 records at a time. That allows me to use one of the main indexes, so it was much faster than a single select (especially since I was creating really big logs because of that...)
Do you need me to post some code?
If you can make the time, it would be helpful to others who may read this thread. There's nothing like getting such a solution straight from the horse's mouth. Thanks.
--Jeff Moden
Change is inevitable... Change for the better is not.
April 23, 2012 at 11:29 am
Hi Ricardo, have you used WAITFOR DELAY to try to avoid concurrency locking issues?
We are a BIG DATA company as well and have similar requests that pull from large volumes of data.
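Just to illustrate the idea, the delay would sit between batches, something like the sketch below; the table, the Processed column, and the two-second pause are made-up placeholders, not code from this thread.

-- Sketch only: pause between batches so concurrent sessions can get
-- their locks in; names and the delay length are placeholders.
DECLARE @BatchSize INT = 10000;

WHILE 1 = 1
BEGIN
    UPDATE TOP (@BatchSize) dbo.BigTable
    SET    Processed = 1
    WHERE  Processed = 0;

    IF @@ROWCOUNT = 0 BREAK;

    WAITFOR DELAY '00:00:02';    -- hh:mm:ss; pauses this session only
END;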
Cheers
Hui
April 24, 2012 at 8:26 am
ricardo_chicas (4/23/2012)
Hello all
Actually, all I did was create a loop that reads the data 10,000 records at a time. That allows me to use one of the main indexes, so it was much faster than a single select (especially since I was creating really big logs because of that...)
Do you need me to post some code?
This is a VERY common mechanism for processing (selecting or deleting) large volumes of data. A thousand 10K-row index-seek/bookmark-lookup hits can be WAY faster, with less memory use, less locking, etc., than trying to access 10M rows at once with what could be massive hash joins that wind up spilling to tempdb.
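For the delete flavor of the same pattern, a sketch might look like this; the table, the CreatedDate column, and the cutoff date are hypothetical, and it assumes a supporting index on CreatedDate.

-- Sketch only: delete in small slices so each batch stays a small,
-- index-driven transaction instead of one huge logged operation.
DECLARE @BatchSize INT = 10000;

WHILE 1 = 1
BEGIN
    DELETE TOP (@BatchSize)
    FROM   dbo.BigTable
    WHERE  CreatedDate < '20110101';   -- assumes an index on CreatedDate

    IF @@ROWCOUNT = 0 BREAK;
END;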
Best,
Kevin G. Boles
SQL Server Consultant
SQL MVP 2007-2012
TheSQLGuru on googles mail service