June 26, 2008 at 8:20 am
We have a transactional database that encounters data loads almost daily. These data loads are initiated by end users, and cannot really be scheduled for off-hours.
The problem is, the data load monopolizes server resources, and other "regular" users of the OLTP system experience timeouts and very slow performance at best.
Is there any way to throttle back (in SQL 2005) the resources utilized by a single process or user?
thanks!
June 26, 2008 at 8:36 am
There's a new Resource Governor in SQL Server 2008, but I don't think there is anything available in 2005.
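For anyone on 2008, a minimal Resource Governor setup might look like the sketch below. The pool, group, and login names (LoadPool, LoadGroup, LoadUser) are illustrative assumptions, not anything from this thread; the classifier function must be created in master.

```sql
-- Cap the data-load workload at a share of CPU and memory (names are hypothetical).
CREATE RESOURCE POOL LoadPool
    WITH (MAX_CPU_PERCENT = 25, MAX_MEMORY_PERCENT = 25);
GO
CREATE WORKLOAD GROUP LoadGroup
    USING LoadPool;
GO
-- Classifier routes sessions from the data-load login into the throttled group.
CREATE FUNCTION dbo.rgClassifier() RETURNS SYSNAME
WITH SCHEMABINDING
AS
BEGIN
    IF SUSER_SNAME() = N'LoadUser'
        RETURN N'LoadGroup';
    RETURN N'default';
END;
GO
ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.rgClassifier);
ALTER RESOURCE GOVERNOR RECONFIGURE;
```

Note the caps only kick in under CPU/memory contention; idle capacity is still available to the load.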
Gethyn Ellis
www.gethynellis.com
June 26, 2008 at 8:44 am
Create a database SNAPSHOT. That should let the other users query the database for SELECTs only.
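A snapshot is a one-liner; the database and file names below are made up for illustration. Readers would point their SELECTs at the snapshot, which is read-only and frozen at creation time, so the load on the source database no longer blocks them (though the snapshot won't show rows loaded after it was created).

```sql
-- Point-in-time, read-only copy of SalesDB (all names are hypothetical).
-- One ON entry is needed per data file in the source database.
CREATE DATABASE SalesDB_Snap
ON ( NAME = SalesDB_Data,                    -- logical name of the source data file
     FILENAME = 'D:\Snapshots\SalesDB_Data.ss' )
AS SNAPSHOT OF SalesDB;
```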
Maninder
www.dbanation.com
June 26, 2008 at 11:17 am
How are the data loads being done? Java front end app with a JDBC connector?
We had issues with a datastore that is also used for reporting: some horrible code used to do inserts (looping through millions of records to find duplicates for each insert) was taking huge amounts of time to process, and the reporting was, as you say, experiencing timeouts and connectivity problems.
Find out what is beating up your server: CPU or I/O.
If it's CPU, chances are there is some code that needs to be tweaked or some indexes to be added.
If it's I/O, you might want to think about adding a staging table on a separate set of disks and inserting the data into the real tables off hours.
A combination of code rewrites and additional indexes brought the load times from 24+ hours down to 4 hours. Be wary that more indexes can slow down updates, but this should get you on the right track.
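The staging-table-on-separate-disks idea above can be sketched roughly like this; the table, column, and filegroup names are assumptions for illustration only:

```sql
-- Staging table placed on its own filegroup (STAGING_FG), which sits on
-- separate physical disks so the bulk writes don't contend with OLTP I/O.
CREATE TABLE dbo.Orders_Staging (
    OrderID  INT           NOT NULL,
    Amount   DECIMAL(10,2) NOT NULL,
    LoadDate DATETIME      NOT NULL DEFAULT GETDATE()
) ON [STAGING_FG];
```

Users load into dbo.Orders_Staging during the day; a scheduled job moves the rows into the real tables during a quiet window.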
_______________________________________________________________________
Work smarter not harder.
June 26, 2008 at 8:17 pm
The real problem is that you're using the same table for OLTP and batch processing. Load the data into a separate table (staging table), process it, then transfer the final results to your OLTP table.
We had the same problem where I work... code ran for 30 minutes, 4 times a day, and caused 10-minute-long server-wide "blackouts" each time. Using the method I've described above, the code ran in 3.91 seconds...
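A minimal sketch of the final transfer step, assuming hypothetical Orders / Orders_Staging tables keyed on OrderID (none of these names come from the thread):

```sql
-- After the staging data has been cleaned up and de-duped,
-- move only rows that aren't already in the OLTP table.
INSERT INTO dbo.Orders (OrderID, Amount)
SELECT s.OrderID, s.Amount
FROM dbo.Orders_Staging AS s
WHERE NOT EXISTS (SELECT 1
                  FROM dbo.Orders AS o
                  WHERE o.OrderID = s.OrderID);

TRUNCATE TABLE dbo.Orders_Staging;
```

All the expensive scrubbing happens against the staging table, so the OLTP table is touched only by this one short set-based insert.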
--Jeff Moden
Change is inevitable... Change for the better is not.
February 24, 2010 at 2:16 am
It's also worth making sure that you're using transactions properly when loading the data. We have to deal with data loads every half hour, so we need to keep physical table writes (and therefore table locks) to a minimum.
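One common way to keep locks short is to commit in small batches rather than one giant transaction. A sketch, assuming the same hypothetical staging/OLTP table names as above (DELETE TOP and OUTPUT...INTO both work on SQL 2005):

```sql
-- Move rows in small committed batches so no single transaction
-- holds locks on the live table for very long.
DECLARE @rows INT;
SET @rows = 1;                       -- separate DECLARE/SET for SQL 2005

WHILE @rows > 0
BEGIN
    BEGIN TRANSACTION;

    DELETE TOP (5000)
    FROM dbo.Orders_Staging
    OUTPUT deleted.OrderID, deleted.Amount
    INTO dbo.Orders (OrderID, Amount);

    SET @rows = @@ROWCOUNT;

    COMMIT TRANSACTION;              -- locks released every 5000 rows
END;
```

The batch size of 5000 is an arbitrary starting point; tune it against your own lock and log behavior.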