September 27, 2004 at 9:43 am
I am supporting an organization of about a dozen users on SQL2K. When one user submits a transaction, everyone else has to wait for the transaction to complete before they can continue. A large transaction will insert a few hundred lines, and can cause a wait of up to two minutes. In other environments I have done bulk inserts of several thousand items in a much shorter time, so this seems a bit excessive to me. The servers have dual Xeons, two gigs of RAM and a five drive RAID, so I don't think it's a horsepower issue. The users are all logged into one server using Citrix sessions, and the SQL sits on a separate identical box. The database and app were written by a third-party place that got acquired a couple years back, so there is no support available.
I have made SQL Profiler traces of some of these transactions, but I really don't know what I'm looking for. The on-site admin seems to think it's a locking issue, and there are large numbers of locks in the profiler logs. Can someone give me a shove in the right direction?
September 27, 2004 at 11:42 am
Is the "large transaction" crossing multiple batches, or is it a single batch that can generate many inserts? If the latter, is it many individual inserts, or an INSERT INTO SELECT FROM construct? Although interaction with a large number of smaller transactions on the server could lend itself towards a consistent momentary blocking issue that delays your insert transaction I would personally doubt it to be the case, but many others here are better informed than I about locking and blocking.
My first guess would be that there is a performance issue in the transaction itself. If all you are doing is sending a single batch with a large list of INSERT statements then you can likely eliminate this. I suppose if your indexes are very severely fragmented then they could be causing the performance problem. Try DBCC SHOWCONTIG, DBCC INDEXDEFRAG, or DBCC DBREINDEX.
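For example (the table and index names below are just placeholders, substitute your own):

-- check fragmentation; look at Scan Density and Logical Scan Fragmentation
DBCC SHOWCONTIG ('dbo.Orders')

-- lighter-weight defrag that keeps the table available while it runs
DBCC INDEXDEFRAG (0, 'dbo.Orders', 'PK_Orders')

-- heavier option: rebuild all indexes on the table (takes locks while it runs)
DBCC DBREINDEX ('dbo.Orders')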
If you are sending individual batches for each INSERT then it could be all the overhead of the cross-server communication, which could possibly be exacerbated by a congested network, bad NIC, bad driver, or other such issue. It also means that every single INSERT needs to have an execution plan generated for it, which would be a little less efficient than having it generated in bulk. Worst case scenario, the application is actually releasing the connection after every INSERT and getting a new connection for each INSERT.
If you are performing other SQL to prepare the INSERT (such as running a cursor or an INSERT INTO SELECT FROM) then it could be a problem with that SQL. If you have an example batch, put it in Query Analyzer and get an execution plan for it. Try running the SQL without the actual INSERT line to see if simply performing the prep work is what is consuming the majority of the time.
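One way to do that in Query Analyzer is to toggle the plan and statistics switches around the captured batch (nothing below is specific to your app, these are just the standard options):

SET SHOWPLAN_TEXT ON  -- show the plan without executing the batch
GO
-- paste the captured batch here
GO
SET SHOWPLAN_TEXT OFF
GO

-- or actually run it and measure the work done
SET STATISTICS IO ON
SET STATISTICS TIME ON
-- paste the captured batch here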
I'm not sure if any of this will help you, but this shotgun approach may hopefully partially hit the target.
September 29, 2004 at 2:34 am
I agree with Aaron. See if the application is sending one insert at a time with an open and close connection for each insert. That will definitely slow things down. You can use the trace to see how the statements are being sent to the server.
Do you have triggers that execute after an insert or many constraints? That can also slow down bulk inserts.
Another issue could be statistics. If the table is constantly being inserted into and deleted from, it can throw off the statistics the server uses to optimize its queries. You may need to recalculate statistics after a large insert if it is happening very frequently.
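A quick way to check both points (the table name is just a placeholder):

-- list any triggers and constraints defined on the table
EXEC sp_helptrigger 'dbo.Orders'
EXEC sp_helpconstraint 'dbo.Orders'

-- refresh statistics after a big insert/delete cycle
UPDATE STATISTICS dbo.Orders WITH FULLSCAN

-- or refresh statistics for every table in the database
EXEC sp_updatestats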
September 29, 2004 at 3:34 pm
Try this in Query Analyzer when the delays occur:
select * from master..sysprocesses
This will list the current connections and show the status.
Look for tasks with "blocked" <> 0
Then check the "waittime" which is the number of milliseconds the task has been waiting.
If you have blocked jobs with "waittimes" of a minute or two (i.e. over 60,000) then the insert job is probably blocking for the full duration of the insert.
If this is the case then the best solution is to rewrite the part of the app doing the insert, which may not be possible if you bought in the system and don't have the source code.
If the "waittimes" are shorter, say under 5 seconds (5,000), then the inserts are being done in small transactions and it is the volume of transactions that is causing the delays.
If there are no blocks, or the waittimes are very short, say under 1 second (1,000), then the chances are that it is either the network or a badly designed client opening and closing connections all the time.
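To narrow the output down to just the blocked sessions and who is blocking them, something like this should work against the SQL 2000 sysprocesses view:

SELECT spid, blocked, waittime, lastwaittype, cmd, hostname, program_name
FROM master..sysprocesses
WHERE blocked <> 0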
October 1, 2004 at 8:59 am
It sounds like a performance issue. Once you have identified the user's SPID and the query that he or she is running (by using sp_who2 'active' and DBCC INPUTBUFFER), perform the following steps:
1. Go through the query or stored procedures to determine the list of tables that the script is referencing.
2. Make sure that there are supporting indexes on those tables.
3. Check whether the queries or stored procedures are using the indexes. Check if the query is doing a "Table Scan" or "Clustered Index Scan". If the query is using a table scan, you must rebuild the indexes. For the best performance, always run UPDATE STATISTICS on the table after rebuilding the indexes. There is an example after this list.
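A rough sketch of those steps (the SPID 53 and dbo.Orders are placeholders, substitute the SPID and tables you actually find):

-- find the active sessions and the suspect SPID
EXEC sp_who2 'active'

-- see the last batch that SPID sent
DBCC INPUTBUFFER (53)

-- list the existing indexes on a table the query references
EXEC sp_helpindex 'dbo.Orders'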
If the user transaction is still taking a long time:
1. Check the disk space where the data file and log file reside. For the database files, check the auto-grow settings.
2. Make sure that there is no other bulk-logged operation going on at the same time.
3. Make sure Auto Shrink and Auto Close are unchecked. I also suggest unchecking auto update statistics and scheduling a separate statistics job during off-peak hours. There is a sketch of these checks below.
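For example (MyDB is a placeholder database name):

-- file sizes and growth settings for the current database
EXEC sp_helpfile

-- log space used per database
DBCC SQLPERF (LOGSPACE)

-- turn off the options that can interfere during the day
EXEC sp_dboption 'MyDB', 'autoshrink', 'false'
EXEC sp_dboption 'MyDB', 'autoclose', 'false'
EXEC sp_dboption 'MyDB', 'auto update statistics', 'false'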
Let me know if this helps.
Kim
October 1, 2004 at 2:23 pm
For a locking problem, find the two SPIDs involved and try to understand what the two transactions are doing and why the blocking happened. Hints: use sp_who2 'active' and DBCC INPUTBUFFER, then check for proper indexes on the tables that the two queries are referencing. If necessary, rebuild the indexes and update statistics as mentioned in the last post.
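For example (the SPIDs 53 and 67 below are placeholders for whatever the first query returns):

-- find a blocked/blocking pair
SELECT spid, blocked, waittime, cmd
FROM master..sysprocesses
WHERE blocked <> 0

-- then look at what each side last sent
DBCC INPUTBUFFER (53)
DBCC INPUTBUFFER (67)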
You might also want to check for triggers that would slow performance every time they fire.
Hope it helps
Kim
October 6, 2004 at 11:03 am
OK, here's some results:
The 'select * from master..sysprocesses' shows some blocked processes with wait times in the few-seconds range, which indicates (according to Douglas) many small transactions. I am inclined to believe this, since the 'DBCC INPUTBUFFER' on the user (a la Kim) returned something different every time I executed it, seeming to cycle through various SELECT, UPDATE, and INSERT statements. I am looking at the transaction logs using Apex. That gives me info on the INSERT and DELETE transactions and which table they're being performed on, but not the user (always 'dbo'), or any idea of what actual statements are causing the transactions.
Can anyone suggest a way to view the statements? I was thinking of writing a procedure to continuously record the 'DBCC INPUTBUFFER' for a given user on demand, but I figure there might be a better way to approach it. I am working on getting the source code for the app, but I can't be sure if it will materialize.
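Here is roughly what I had in mind, untested (the SPID, delay, and table name are placeholders):

-- capture the input buffer of SPID 53 every 5 seconds; cancel the query to stop it
CREATE TABLE dbo.InputBufferLog (
    LoggedAt   datetime       NOT NULL DEFAULT (GETDATE()),
    EventType  nvarchar(30)   NULL,
    Parameters int            NULL,
    EventInfo  nvarchar(4000) NULL
)
GO
WHILE 1 = 1
BEGIN
    INSERT INTO dbo.InputBufferLog (EventType, Parameters, EventInfo)
        EXEC ('DBCC INPUTBUFFER (53)')
    WAITFOR DELAY '00:00:05'
END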
October 6, 2004 at 11:19 am
SQL Profiler comes to mind. I'm not terribly adept at using Profiler so I won't presume to tell you what to record or what filter to use, but it seems like that is the functionality you are asking for. If you record to a SQL table you can view the results a bit more easily.
October 6, 2004 at 3:34 pm
Yes, SQL Profiler is the right tool to use next.
The easiest way to use it is to start with the default options and add bits and pieces until you get the effect that you want. Then run the trace for a while. If you get a lot of data, or you want to keep it, you can save it to a table for later analysis.
Start a new trace,
Connect to the server
Go to Events Tab
The default events are a good starting point:
Audit Login
Audit Logout
ExistingConnection
RPC:Completed
SQL:BatchCompleted
Run the trace and look for transactions from your app so you can filter it.
Once you figure out how to identify the transactions from your app, probably by user ID or application name, you can stop the trace and use File -> Properties to edit the values. Use the Filters tab to set the user ID (or whatever you chose) to the appropriate value and restart the trace.
Next, look at the details of the transaction in the lower window. From your description, you should now be able to see the statements in full. If your app is using stored procedures (it doesn't sound like it is), you will only see the EXEC statements, which are useful but not very informative. You can get more details by adding a couple more events, so stop the trace again, go to the Events tab, and add:
SP:StmtCompleted
SQL:StmtCompleted
The SQLTransaction event (in the Transactions category) will probably be useful too.
-------------
I wonder if this is the 'classic' insert contention problem. If several users are all trying to insert into the same table, and the table has a clustered index, and the index values of the new records are similar (as is the case with an incrementing identity column), then you can get blocking which would fit your description.
It is caused because SQL must lock the page and indexes being updated with the new record for the duration of the transaction, and this blocks the next user.
The solution is to remove the clustered index from the identity column and replace it with a non-clustered index. This should not be undertaken lightly, though, as query performance could plummet.
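If the trace does point at that hot spot, the change might look something like the following sketch (the constraint, table, and column names are placeholders, and you would want to test the impact on reads first):

-- assuming the clustered primary key is on an identity column called OrderID
ALTER TABLE dbo.Orders DROP CONSTRAINT PK_Orders
ALTER TABLE dbo.Orders
    ADD CONSTRAINT PK_Orders PRIMARY KEY NONCLUSTERED (OrderID)

-- optionally recreate a clustered index on a column that spreads the inserts out
-- CREATE CLUSTERED INDEX IX_Orders_CustomerID ON dbo.Orders (CustomerID)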
It could be any number of other problems too. We will know more once you get the trace from profiler.