Best way to select a large amount of data?

  • Hello all

    actually all I did was to create a loop that reads the data 10,000 records at a time. That allowed me to use one of the main indexes, so it was much faster than a single select (especially since the single select was creating really big logs...). A rough sketch of the idea is shown below.

    Do you need me to post some code?
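
    A minimal sketch of the kind of keyset-paging loop described above, assuming a hypothetical table dbo.BigTable with an indexed integer key ID and a Payload column (the names and the 10,000-row batch size are stand-ins, not the poster's actual code):

        CREATE TABLE #Batch (ID INT PRIMARY KEY, Payload VARCHAR(100));

        DECLARE @LastID INT = 0,
                @Rows   INT = 1;

        WHILE @Rows > 0
        BEGIN
            TRUNCATE TABLE #Batch;

            -- TOP + ORDER BY on the key lets the optimizer seek the index
            -- for just the next slice instead of scanning the whole table
            INSERT INTO #Batch (ID, Payload)
            SELECT TOP (10000) ID, Payload
            FROM dbo.BigTable
            WHERE ID > @LastID
            ORDER BY ID;

            SET @Rows = @@ROWCOUNT;

            IF @Rows > 0
            BEGIN
                -- process the slice here (export it, copy it, etc.)
                SELECT @LastID = MAX(ID) FROM #Batch;  -- remember where this slice ended
            END
        END

        DROP TABLE #Batch;

    Because each pass seeks on ID > @LastID and reads only the next 10,000 keys in order, every iteration is a cheap index seek rather than one giant scan.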

  • ricardo_chicas (4/23/2012)


    Hello all

    actually all I did was to create a loop that reads the data 10,000 records at a time. That allowed me to use one of the main indexes, so it was much faster than a single select (especially since the single select was creating really big logs...).

    Do you need me to post some code?

    If you can make the time, it would be helpful to others who may read this thread. There's nothing like getting such a solution straight from the horse's mouth. Thanks.

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
        Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • Hi ricardo, have you tried using WAITFOR DELAY to avoid concurrency locking issues? (There's a sketch of the idea after this post.)

    We are a BIG DATA company as well and have similar requirements for pulling from large volumes of data.

    Cheers

    Hui
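
    A minimal sketch of that suggestion, under the same hypothetical dbo.BigTable schema as above (the Processed flag column and the two-second delay are also assumptions): pausing between slices gives concurrent sessions a window to acquire their locks.

        DECLARE @Rows INT = 1;

        WHILE @Rows > 0
        BEGIN
            -- mark one slice of rows as handled (hypothetical Processed column)
            UPDATE TOP (10000) dbo.BigTable
            SET Processed = 1
            WHERE Processed = 0;

            SET @Rows = @@ROWCOUNT;

            IF @Rows > 0
                WAITFOR DELAY '00:00:02';  -- yield between slices; skip after the last one
        END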

  • ricardo_chicas (4/23/2012)


    Hello all

    actually all I did was to create a loop that reads the data 10,000 records at a time. That allowed me to use one of the main indexes, so it was much faster than a single select (especially since the single select was creating really big logs...).

    Do you need me to post some code?

    This is a VERY common mechanism for processing (selecting or deleting) large volumes of data. A thousand batches of 10K-row index seeks plus bookmark lookups can be WAY faster, with less memory use and less locking, than trying to access 10M rows at once with what could be massive hash joins that wind up spilling to tempdb. A delete-flavored sketch follows below.

    Best,
    Kevin G. Boles
    SQL Server Consultant
    SQL MVP 2007-2012
    TheSQLGuru on googles mail service
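
    For the delete side of that, the same slicing keeps each transaction small, which is also what keeps the log small. A sketch under the same hypothetical schema (the CreatedDate purge condition is an assumption):

        DECLARE @Rows INT = 1;

        WHILE @Rows > 0
        BEGIN
            BEGIN TRANSACTION;

            DELETE TOP (10000)
            FROM dbo.BigTable
            WHERE CreatedDate < '20110101';  -- hypothetical purge condition

            SET @Rows = @@ROWCOUNT;

            COMMIT TRANSACTION;  -- commit each slice so log space can be reused
        END

    Under the SIMPLE recovery model each commit allows the log space behind it to be reused; under FULL you would still need log backups between slices to get the same effect.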
