September 7, 2005 at 6:56 am
I have a long-running query that runs anywhere from 1 to 5 minutes before terminating with the error message below. The connection break has only started in the last few days; before that, this query ran successfully for a long time.
[Microsoft][ODBC SQL Server Driver][Named Pipes]ConnectionRead (WrapperRead()).
Server: Msg 11, Level 16, State 1, Line 0
General network error. Check your network documentation.
Connection Broken
Even though it says 'general network error', I can't find a problem on our network; besides, other shorter-running queries complete successfully, and our production application that runs against this database functions without errors.
From the message it appears it may be an ODBC issue, but I am not sure where to go from here. Does anyone have any suggestions for troubleshooting this problem?
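In case it helps, here is a quick way to confirm the session really is coming in over Named Pipes, as the error prefix suggests. This is just a sketch against SQL Server 2000's master..sysprocesses view; the column names assume that version:
-- Show the network library and host for the current connection
SELECT spid, hostname, program_name, net_library
FROM master..sysprocesses
WHERE spid = @@SPID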
Thanks for your help in advance
Pete
September 7, 2005 at 8:38 am
Have you tried increasing the "query time" in your ODBC configuration to see if that fixes it?
**ASCII stupid question, get a stupid ANSI !!!**
September 7, 2005 at 10:08 am
I changed the 'long query time' to 99999, the maximum value, and it did not make any real difference. The query I am running has run successfully before on many occasions; it basically does a DBCC SHOWCONTIG on all of the tables in my production SAP database.
This is what I have been running:
-- Declare variables
SET NOCOUNT ON
DECLARE @tablename VARCHAR (128)
DECLARE @execstr VARCHAR (255)
DECLARE @objectid INT
DECLARE @indexid INT
DECLARE @frag DECIMAL
DECLARE @maxfrag DECIMAL
DECLARE @maxdensity DECIMAL
DECLARE @maxpages DECIMAL
-- Decide on the maximum fragmentation to allow
SELECT @maxfrag = 30.0
SELECT @maxdensity = 80.0
SELECT @maxpages = 50000
-- Delete previous Z_FRAGLIST entries
-- (a row-by-row cursor is unnecessary here; a single set-based DELETE clears the table)
DELETE FROM Z_FRAGLIST
--- end z_fraglist record delete
-- Declare cursor
DECLARE tables CURSOR FOR
SELECT TABLE_NAME
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_TYPE = 'BASE TABLE'
-- Open the cursor
OPEN tables
-- Loop through all the tables in the database
FETCH NEXT
FROM tables
INTO @tablename
WHILE @@FETCH_STATUS = 0
BEGIN
-- Do the showcontig of all indexes of the table
INSERT INTO Z_FRAGLIST
-- 3/15/04 EXEC ('DBCC SHOWCONTIG (''' + @tablename + ''')
-- WITH FAST, TABLERESULTS, ALL_INDEXES, NO_INFOMSGS')
EXEC ('DBCC SHOWCONTIG (''' + @tablename + ''')
WITH TABLERESULTS, ALL_INDEXES, NO_INFOMSGS')
FETCH NEXT
FROM tables
INTO @tablename
END
-- Close and deallocate the cursor
CLOSE tables
DEALLOCATE tables
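If the sheer run time turns out to be the trigger, one option (purely a sketch) would be to go back to the FAST scan that is commented out above; it does a minimal scan and finishes much sooner on large tables, though FAST returns a shorter TABLERESULTS column list, so Z_FRAGLIST (or an explicit column list on the INSERT) would have to match it:
-- Hypothetical single-table example of the faster scan
-- ('MyBigTable' is a placeholder; FAST outputs fewer TABLERESULTS columns)
DBCC SHOWCONTIG ('MyBigTable') WITH FAST, TABLERESULTS, ALL_INDEXES, NO_INFOMSGS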
September 7, 2005 at 1:45 pm
A couple of suggestions:
Is a log file being generated that you could take a look at, to see if there's any more information?
Here's something from Microsoft for similar errors during backup/restore; there, everything seems to point to the MDAC version you may have:
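One way to scan the SQL Server error log straight from a query window is below; xp_readerrorlog is undocumented, so treat this as a sketch, and look for entries logged around the time the connection breaks:
-- Dump the current SQL Server error log
EXEC master.dbo.xp_readerrorlog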
**ASCII stupid question, get a stupid ANSI !!!**
September 8, 2005 at 10:28 am
I had a similar situation with a query that was processing a little more data each day. Eventually it ran longer than the default timeouts would allow, so I had to set them to larger values. Using VB6, ADO, and MDAC 2.8, I have these settings on the connection object:
DatabaseConnection.ConnectionTimeout = TimeOut
DatabaseConnection.CommandTimeout = TimeOut
DatabaseConnection.Properties("General Timeout").Value = TimeOut
Now if it happens, I just tell the program to use a larger value for TimeOut. The value here is in seconds, so it sounds like you want to start with a number larger than 300 to cover those five-minute run times.
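Another thing that can help separate a client-side timeout from a genuine network drop (just a sketch; the database name is a placeholder) is to watch the spid from a second connection while the long query runs. That shows whether the server side is still alive and working at the moment the client reports the error:
-- Run from a second connection while the DBCC script is executing
SELECT spid, status, cmd, waittype, last_batch, program_name
FROM master..sysprocesses
WHERE dbid = DB_ID('YourSAPDatabase')   -- placeholder database name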