Speed problems with SSIS 2008 OLE DB destination package

  • For the data flow containing the largest transformation (i.e. the table with 900,000 records), DefaultBufferMaxRows is 10,000 and DefaultBufferSize is 3145728. I've set Rows per batch to 1000 and Maximum commit size to 1000 as well on this big table. Are the buffer sizes about right? Should I therefore remove the values from Rows per batch and Maximum commit size? And do the same on each data flow and table destination? (A rough sizing check is sketched below the quote.)

    HowardW (2/22/2012)


    Check the settings on your OLE DB destination. Remove any value from Rows Per Batch and make sure Maximum commit size is zero.

    Also, it might be worth looking at the properties tab of the data flow as a whole to make sure the DefaultBufferMaxRows and DefaultBufferSize parameters are set high enough.
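
    One rough way to sanity-check those numbers is to measure the average row width of the big table and compare it with DefaultBufferSize. The sketch below is only an illustration: dbo.BigTable and the column names are placeholders for your actual table and columns.

    ```sql
    -- Estimate the average row width (in bytes) of the 900,000-row table.
    -- Substitute your real table name and add one DATALENGTH term per column.
    SELECT AVG(CAST(
             ISNULL(DATALENGTH(Col1), 0)
           + ISNULL(DATALENGTH(Col2), 0)
           + ISNULL(DATALENGTH(Col3), 0) AS BIGINT)) AS AvgRowBytes
    FROM dbo.BigTable;
    ```

    Dividing DefaultBufferSize by that number tells you roughly how many rows fit in one buffer, which is the figure DefaultBufferMaxRows should be judged against.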

  • imho 1000 is way too low...depending on available resources, it should be closer to 50,000 or even 100,000.

    You will have to play around with these to get it right. Look at the posted links...there's some really good info there.
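
    To see why the two data flow settings have to move together, here is a back-of-the-envelope check. The 300-byte row width is an assumption; plug in whatever AvgRowBytes you actually measure.

    ```sql
    -- With the 3 MB DefaultBufferSize quoted earlier and ~300-byte rows, only about
    -- 10,000 rows fit in one buffer, so setting DefaultBufferMaxRows to 50,000-100,000
    -- has no effect unless DefaultBufferSize is raised as well.
    DECLARE @AvgRowBytes       INT = 300;      -- assumed; replace with your measured value
    DECLARE @DefaultBufferSize INT = 3145728;  -- 3 MB, the value from the earlier post
    SELECT @DefaultBufferSize / @AvgRowBytes AS RowsThatFitInOneBuffer;  -- roughly 10,485
    ```

    If I remember right, SSIS 2008 caps DefaultBufferSize at around 100 MB, so there is plenty of headroom above 3 MB if the server has the memory.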

  • Would that just be for the large tables (i.e. the 250,000, 500,000 and 900,000-record tables), or all of them, would you say? I'll have a good read of those articles now.

    Martin Schoombee (2/22/2012)


    imho 1000 is way too low...depending on available resources, it should be closer to 50,000 or even 100,000.

    You will have to play around with these to get it right. Look at the posted links...there's some really good info there.

  • Griffster (2/22/2012)


    Would that just be for the large tables (i.e. the 250,000, 500,000 and 900,000-record tables), or all of them, would you say? I'll have a good read of those articles now.

    Martin Schoombee (2/22/2012)


    imho 1000 is way too low...depending on available resources, it should be closer to 50,000 or even 100,000.

    You will have to play around with these to get it right. Look at the posted links...there's some really good info there.

    Not just for large tables...let me give you an example: If I give you a bucket that can take 100 gallons of water at a time, and I ask you to fetch 1,000 gallons of water...why would you only fill the bucket with 10 gallons each time, and ultimately take more trips?

    The buffers are the same. Depending on available resources, they can handle a certain amount of data. Not filling the buffer all the way means more round trips between your source and destination. There is no magic number here, and it depends very much on the two systems (source and destination)...but with the memory available these days, 1,000 records (unless there are many fields) is very little compared to the space you have available in the buffer.

    I usually start big (say 100,000 records) and work my way down, testing whether performance increases or decreases. There is another side to this too...deciding how many records to insert into the destination at once (max commit size). Again some testing and experimentation is required...a value of 0 means it will attempt to insert all the records at once, or as much as it can keep in memory at a time. That's usually a good starting point.

    Hope this makes sense. 🙂
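
    To make the commit-size idea concrete, here is a loose T-SQL analogue. SSIS issues its own bulk load through the OLE DB provider, so this is only an illustration of the semantics; the file path and table name are made up.

    ```sql
    -- BATCHSIZE plays the role of "Maximum insert commit size": each batch of rows is
    -- committed as its own transaction. Leaving it out (like a commit size of 0) lets
    -- the whole load go through as a single batch. ROWS_PER_BATCH is just a hint to the
    -- optimizer about how many rows are expected in total.
    BULK INSERT dbo.BigTable
    FROM 'C:\staging\bigtable.dat'
    WITH (
        BATCHSIZE      = 100000,   -- commit every 100,000 rows
        ROWS_PER_BATCH = 900000,   -- hint: total rows expected in the load
        TABLOCK                    -- table lock, typical for bulk loads
    );
    ```

    Smaller batches mean more commits (and more log flushes); one huge batch means fewer commits but a bigger hit if something fails near the end. That trade-off is what the testing described above is really about.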

  • That's great advice Martin. I'll experiment and see how I get on.
