December 27, 2005 at 11:58 am
My ADO-based C++ application runs fine, sometimes for 15+ hours. I'm using SQL 2000. Then all of a sudden, for no reason I can see, all update and insert operations start failing with 80040e31 (timeout expired) errors.
It seems as though the whole database is locked up.
And after one hour, sometimes two, everything is back to normal without any intervention.
If I intervene and stop all my app services, sp_who shows no connections to the database. When I restart my application services, the 80040e31 errors still come up.
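When the freeze hits, it may be worth checking for blocking directly rather than just counting connections; in SQL 2000 the `blocked` column of `master..sysprocesses` shows which spid, if any, other sessions are waiting on. A minimal sketch (assumes you can query the master database while the timeouts are occurring):

```sql
-- Show any sessions that are currently blocked, and who is blocking them.
-- Run in Query Analyzer while the updates are timing out.
SELECT spid,          -- session that is waiting
       blocked,       -- spid it is waiting on (0 = not blocked)
       waittype,
       lastwaittype,
       waittime,      -- milliseconds spent waiting so far
       cmd
FROM   master..sysprocesses
WHERE  blocked <> 0
```

If this returns no rows during the freeze, the slowdown is something other than lock blocking (for example, file growth or I/O stalls).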
From Query Analyzer I am able to perform operations. For example, there is a table with just 8 rows. If I do a
delete from table1 where col=1
it takes about 1 minute 30 seconds to delete 8 rows...
Actually, this is the problem: an update, I think, also takes more than 30 seconds, and that is why it times out.
So if I increased the timeout to 3 minutes, everything would probably be fine. I don't want to do that; rather, I want to find out why the updates are timing out.
Any help is appreciated.
Thanks
December 27, 2005 at 11:59 am
One interesting thing I noticed: update/insert is blocked for all the tables in that database at that time. But once I run an update query in Query Analyzer and let it run to completion (it takes about 1 minute 30 seconds), the whole database seems to reset and every other update works fine. After that, all my app services are back to normal without even being restarted, because all the updates work fine!
I don't know if it makes a difference, but all my queries connect as sa.
Database size: 7030 MB
Available space: 693 MB
No maintenance plan
Files are set to automatically grow by 10%
Auto update statistics: on
Torn page detection: on
Auto create statistics: on
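The per-file sizes and growth increments behind those settings can be confirmed from inside the database itself (in SQL 2000, `sysfiles` is per-database; the database name below is taken from the `dbcc opentran` output later in this thread):

```sql
-- Run in the problem database to see each file's current size,
-- maximum size, and growth increment.
USE TestDB
EXEC sp_helpfile

-- Or the raw numbers (size/maxsize are in 8 KB pages; growth is
-- pages, or a percentage if the percent-growth status bit is set):
SELECT name, filename, size, maxsize, growth
FROM   sysfiles
```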
December 27, 2005 at 1:41 pm
Is the "available space" in your post free space on the hard disk, or inside the database itself?
December 27, 2005 at 2:33 pm
on disk
December 28, 2005 at 11:13 am
It could be that your database or log file is failing to autogrow:
10% of 7030 MB = 703 MB
Available space: 693 MB -> not enough
December 28, 2005 at 2:08 pm
As I mentioned before, when this was happening, as soon as I ran an update command and let it finish, everything was back to normal.
One other interesting thing I noticed: when I run dbcc opentran I get this:
dbcc opentran
Server: Msg 7969, Level 16, State 2, Line 1
No active open transactions.
Transaction information for database 'TestDB'.
DBCC execution completed. If DBCC printed error messages, contact your system administrator.
December 28, 2005 at 2:29 pm
I'm still somewhat suspicious of the low amount of free space.
What happens if you set the autogrow size to 100 MB instead of 10%?
Is there anything in the SQL Server logs or the Event Viewer when the lockout happens?
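On SQL 2000 the error log can be read from Query Analyzer as well as from Enterprise Manager; a quick way to look for messages around the lockout time (`xp_readerrorlog` is undocumented but long-standing):

```sql
-- Read the current SQL Server error log. Look for autogrow messages,
-- log-full errors (error 9002), or I/O warnings near the lockout time.
EXEC master..xp_readerrorlog
```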
December 28, 2005 at 3:00 pm
Is there any kind of SQL Server logging I can set up to see if something unusual is happening at that time? It looks like no locks are held, yet something in SQL Server is blocking any and every update statement from running, and it blocks them for 2+ minutes.
It is table-independent and database-wide; other databases on the same server are fine.
One other thing: if I run the update statement itself inside a begin tran/rollback tran, even then everything resets and the app services start working fine. But if I stop the statement before it completes, everything stays locked up.
December 28, 2005 at 3:13 pm
Given that you don't have a maintenance plan, I'd suggest doing a re-index or update usage. The statistics used by the query optimizer are probably way out of date.
Then I'd set up some routine maintenance to keep things up-to-date.
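For a one-off catch-up on a database with no maintenance plan, the SQL 2000 commands would look something like this (the table name `table1` is taken from the earlier post; treat it as a placeholder for your real tables):

```sql
-- Rebuild every index on the slow table and refresh its statistics.
DBCC DBREINDEX ('table1')
UPDATE STATISTICS table1 WITH FULLSCAN

-- Or refresh statistics across the whole database in one pass:
EXEC sp_updatestats

-- Correct any inaccurate row and page counts in the system tables:
DBCC UPDATEUSAGE ('TestDB')
```

DBCC DBREINDEX takes locks on the table while it runs, so this is best done in a quiet window.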
--------------------
Colt 45 - the original point and click interface
December 28, 2005 at 4:55 pm
I'm not sure if this is related, but the .ldf file for the database is huge: 8207 MB!
Anyway, I have taken care of that; I have set it to a maximum of 200 MB, growing by 5 MB.
But any ideas on whether that could cause the problems I'm seeing?
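For reference, capping the log file's size and growth is done with ALTER DATABASE; the logical file name below is a guess, so check `sp_helpfile` for the real one first:

```sql
-- Cap the log at 200 MB and grow in fixed 5 MB steps instead of 10%.
-- 'TestDB_log' is a hypothetical logical name; sp_helpfile shows yours.
ALTER DATABASE TestDB
MODIFY FILE (NAME = TestDB_log, MAXSIZE = 200MB, FILEGROWTH = 5MB)
```

One caution: with a hard maximum and the full recovery model but no log backups, the log will eventually fill, and every write will then fail with error 9002 until it is backed up or truncated.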
December 29, 2005 at 12:56 am
one other thing is let us say i run the update statement itself in a begin tran/rollback tran even then everything is reset and app services start working fine. but if i stop the statement before it is completed everything is locked up.
How do you "stop" it?
Errors During Transaction Processing
If a severe error prevents the successful completion of a transaction, SQL Server automatically rolls back the transaction and frees all resources held by the transaction. If the client's network connection to SQL Server is broken, any outstanding transactions for the connection are rolled back when the network notifies SQL Server of the break. If the client application fails, or if the client computer goes down or is restarted, this also breaks the connection, and SQL Server rolls back any outstanding transactions when the network notifies it of the break. If the client logs off the application, any outstanding transactions are rolled back.
If a run-time statement error (such as a constraint violation) occurs in a batch, the default behavior in SQL Server is to roll back only the statement that generated the error. You can change this behavior using the SET XACT_ABORT statement. After SET XACT_ABORT ON is executed, any run-time statement error causes an automatic rollback of the current transaction. Compile errors, such as syntax errors, are not affected by SET XACT_ABORT.
It is the responsibility of the programmer to code the application to specify the correct action (COMMIT or ROLLBACK) if a run-time or compile error occurs.
December 29, 2005 at 9:20 am
Interesting thread. Space in the log or data file could be an issue. While SQL Server autogrows a file, all activity in the database stops, which can appear as "freezing up". This was common in the early days of SQL 2000, when people would build a 1 MB database, set it to grow by 10%, and keep dumping data into it.
You want to size your database and log appropriately. The database should handle some "normal" amount of data; I usually size for a year so I don't have to monitor it too often, but choose whatever you think works for you and go with that.
For the log, it should handle your transactions between backups. That means you need to set up log backups: every hour, every day, whatever you think is needed for recovery, but be sure they are set up.
Add a pad to both.
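A minimal log-backup statement for a scheduled job, assuming the full recovery model (the database name and disk path here are placeholders):

```sql
-- Back up the transaction log. Scheduling this (e.g. hourly via a
-- SQL Server Agent job) keeps the log from growing without bound
-- while preserving point-in-time recovery.
BACKUP LOG TestDB
TO DISK = 'D:\Backups\TestDB_log.trn'   -- hypothetical path
```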
December 29, 2005 at 11:05 am
A huge log file was the problem with a client of mine. When it got to about 14 GB, the server would appear to be "hung" (it was really just incredibly slow, and there were timeouts), causing the users to restart SQL Server, which they started doing daily.
I set up a job with the following T-SQL statements, scheduled to run once a week at midnight on Saturday:
BACKUP LOG dbname WITH TRUNCATE_ONLY
DBCC SHRINKDATABASE (dbname, TRUNCATEONLY)
They no longer have any problems.
December 29, 2005 at 12:21 pm
DO NOT use WITH TRUNCATE_ONLY!
That breaks point-in-time recovery. If you are going to run those statements anyway, you may be better off setting the recovery model to SIMPLE and being done with it, without any job.
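If point-in-time recovery genuinely isn't needed for this database, switching the recovery model is a one-time change (SQL 2000 syntax; the database name is from the thread):

```sql
-- Simple recovery: the log self-truncates at each checkpoint, so no
-- log backups are needed -- but point-in-time restore is lost.
ALTER DATABASE TestDB SET RECOVERY SIMPLE
```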
Cheers,
* Noel
December 30, 2005 at 6:44 am
Interestingly enough, now that I have set the log's maximum size to 300 MB, the app services have been running fine for 2 days.
So the log file growing by 10% was the problem. I never even noticed it until I saw my disk space shrinking!