Database Timeouts

  • We have an issue that pops up every once in a while where inserts are causing database timeouts. The message is: [Microsoft][ODBC SQL Server Driver]Timeout expired. I have tried to check whether there is locking during this time, but it doesn't look like it, and the processor itself did not seem to be busy either. The timeouts appear to be happening on 2 tables: one has a record count of 97,000 records and one has a record count of 819,000 records. Is there anything else I should be looking at to help determine what is causing the problem?
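    One way to double-check for blocking at the moment a timeout actually occurs (assuming SQL Server 7/2000, where the system table master..sysprocesses is available) is something like:

    ```sql
    -- List sessions that are currently blocked and the spid blocking them;
    -- run this while the timeout is actually happening, not afterwards.
    SELECT spid, blocked, waittype, lastwaittype, cmd, hostname
    FROM master..sysprocesses
    WHERE blocked <> 0

    -- Or just eyeball the BlkBy column here:
    EXEC sp_who2
    ```

    If this returns no rows during the timeout, blocking probably isn't the cause and the query itself (plan, statistics, I/O) is the more likely suspect.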

  • Check the timeout period set on SQL Server; you might need to increase that, or the timeout period in Query Analyzer if you are using it to run the query. Also look at the indexing on the tables and at fragmentation. Good luck.
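    On SQL Server 7/2000 you can check fragmentation and rebuild indexes like this (the table name and fill factor below are just placeholders):

    ```sql
    -- Report fragmentation for one table's indexes;
    -- look at "Scan Density" and "Logical Scan Fragmentation"
    DBCC SHOWCONTIG ('TableA')

    -- Rebuild all indexes on the table with a 90% fill factor
    DBCC DBREINDEX ('TableA', '', 90)
    ```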

  • I had the exact same problem with an ASP application updating tables almost exactly the same size. I did a couple of things:

    1. Added indexes where appropriate to speed up queries

    2. Increased the SQL timeout value on the database Connection object in ASP.

    3. Set up a PK->FK relationship between the tables so that the delete logic on Table A no longer has to also delete the matching records in Table B.

    4. Set up an Insert Trigger on Table A to perform inserts in Table B based on data inserted in Table A.

    Ranked from easiest to most difficult (none of them being terribly hard).

    I don't know the table structure you're working with or if you're using ASP, but some of these may help.
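    For what it's worth, steps 3 and 4 might look something like this on SQL Server 2000 (TableA/TableB and the column names are made up, and cascading deletes require SQL Server 2000, not 7.0):

    ```sql
    -- Step 3: FK with cascading delete, so deleting a row from TableA
    -- automatically deletes the matching rows in TableB
    ALTER TABLE TableB
    ADD CONSTRAINT FK_TableB_TableA
        FOREIGN KEY (TableA_ID) REFERENCES TableA (TableA_ID)
        ON DELETE CASCADE
    GO

    -- Step 4: insert trigger on TableA that copies data into TableB
    CREATE TRIGGER trg_TableA_Insert ON TableA
    FOR INSERT
    AS
    INSERT INTO TableB (TableA_ID, SomeColumn)
    SELECT TableA_ID, SomeColumn
    FROM inserted
    GO
    ```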


  • We also had a similar problem while calling BULK INSERT from a C++ program; increasing the timeout on the ADO Connection object in C++ solved it.

    Linto

  • Here is the solution to the hang problem between any program and SQL Server 2000 or 7 🙂

    When you want a report based on complicated queries joining several tables, and those tables are updated frequently, your report locks those tables (or waits for a previous transaction's locks on them to be released before placing its own). So everyone else using those tables is blocked too, queued behind those locks, and that is where the hang comes from!

    The best approach is to run the query on the server (by making a pass-through query in Access, for example, or by calling a view on SQL Server), and after each table name in the query add the hint "with (nolock)". You then read the data as it currently is (which may include uncommitted changes), you don't wait for locks to be freed, you don't take locks on those tables yourself, and every other transaction completes successfully, for you and for everyone else 🙂.

    you will write something like this in your Pass-Through Query in Access:

    select Invoices.*, Situation.*
    from Situation with (nolock) INNER JOIN Invoices with (nolock)
    ON Situation.SituationID = Invoices.Situation
    where Situation.SituationID = 1

    Disadvantage:

    - With "with (nolock)" you are reading uncommitted data, so what you see at that moment may not match the committed state; you will see the committed version the next time you read.

    Advantages:

    - No hangs at all

    - Very fast response

    - Far fewer locks recorded in the SQL log

    Also: you may need to add more memory (RAM) if the server still hangs a little after converting your queries to pass-through queries, because every transaction or query now runs on the server first, and only the processed data is sent to your PC.

    ALSO:

    Try to put the DATA and LOG files of the SQL database on a partition formatted with a 32 K or 64 K cluster size (not the default 4 K),

    because the default page size in SQL Server is 8 K.

    If you use the default cluster size when formatting the partition, then to read 1 page (8 K) you need 2 hits on the hard disk (4 K each).

    But if you use a 32 K cluster size when formatting the partition, then one hard disk hit (32 K) brings 4 pages into memory, which is much faster, because the slowest thing in the whole system is reading from and writing to the hard disk.

    I use 32 K rather than 64 K, because with 64 K you may end up with a lot of partly empty pages in memory (wasted memory, and your DB files may grow larger).

    This, of course, will help further with the hang problem .. bye

    Alamir Mohamed

    Alamir_mohamed@yahoo.com



  • Thanks for the responses... Unfortunately we are still seeing the problem. However, I have a little more detailed info. This has happened to us at times when there seems to be little to no activity on the database. We fix it by finding the table that seems to be locked and either creating a new index or modifying an existing one. Once that index is created, everything runs fine for a period of time. With that in mind, I rebuilt all the indexes on our major tables on Sunday night. We were fine all day Monday; then this morning, very early, with almost no activity, the issue popped up again. One of my colleagues rebuilt one index on a table and the problem went away.

    We get the timeout message when we are trying to insert 1 record into a table. The timeout is set to 30 seconds; I don't think we need to modify that, since 30 seconds is more than enough time for an insert. Any ideas?


  • Hi, without more information everything is just a guess. But since rebuilding your indexes helped for a little while, I'd say you may have a couple of problems that need attention.

    My first suggestion would be to try and determine exactly what the queries are doing - i.e. the execution plans.

    SQL may end up using the "correct" index when the index has been defragmented (or rebuilt), but might be doing something nasty the rest of the time.

    There may be missing statistics that SQL needs to determine the best execution plan. You can use SQL Profiler to determine whether statistics would be useful, as well as execution plans for long running queries.
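    For example, on SQL 7/2000 you can inspect a plan and refresh statistics like this (the table, column, and value here are placeholders):

    ```sql
    -- Show the estimated execution plan for a query without running it
    SET SHOWPLAN_TEXT ON
    GO
    SELECT * FROM TableA WHERE SomeColumn = 42
    GO
    SET SHOWPLAN_TEXT OFF
    GO

    -- Refresh statistics on one table, or create missing single-column stats
    UPDATE STATISTICS TableA WITH FULLSCAN
    EXEC sp_createstats
    ```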

    If statistics don't seem to solve your problem, you can also force an index with a table hint, e.g. SELECT ... FROM TableA WITH (NOLOCK, INDEX(ix_index)).

    Also check the basics:

    1. Log file & DB on different disks (RAID if possible)

    2. Set up perfmon counters to detect differences in load, both on physical disks and CPU etc. Perfmon counters can be monitored from a remote machine to minimize the impact on the production server. Configure it to log to disk, and roll over every hour or so. This then gives you the opportunity to replay any counter logs for specific periods (i.e. when the problem occurs).
