March 13, 2008 at 10:02 am
Not sure if this applies to your specific PowerEdge/PERC model, but on the 2850s there's a firmware upgrade:
RAID_FRMW_LX_R169531
Fixes and Enhancements:
1. Fixed an issue with system lockups and controller hangs caused by single-bit/multi-bit errors in some configurations.
2. Modified the PERC Option ROM to improve CHS (Cylinder/Head/Sector) to LBA (Logical Block Address) translation and to add support for newer disk partition utilities like Diskpart 2
Just a long shot actually.
March 13, 2008 at 10:05 am
Jason Lennan (3/13/2008)
Not sure if this applies to your specific PowerEdge/PERC model, but on the 2850s there's a firmware upgrade: RAID_FRMW_LX_R169531
Fixes and Enhancements:
1. Fixed an issue with system lockups and controller hangs caused by single-bit/multi-bit errors in some configurations.
2. Modified the PERC Option ROM to improve CHS (Cylinder/Head/Sector) to LBA (Logical Block Address) translation and to add support for newer disk partition utilities like Diskpart 2
Just a long shot actually.
I'm sure we've got the equivalent firmware installed... there's a team of 8 guys who do nothing but keep our servers patched.
March 13, 2008 at 10:07 am
Wish I had that 🙂
The only thing I can think of, then, is to have PerfMon running, watching standard things like CPU and hard drive usage as well as SQL counters, until it happens again.
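If it helps, here's one way to leave a counter log running unattended with the built-in logman tool (Windows Server 2003 and later; on older boxes the same counters can be set up through the Performance Logs and Alerts snap-in). The counter list is just a suggestion, and the collection name and output folder are made up; adjust them for your instance and disks:
logman create counter SQLFreezeBaseline -si 00:00:15 -o "C:\PerfLogs\SQLFreezeBaseline" -c "\Processor(_Total)\% Processor Time" "\PhysicalDisk(_Total)\Avg. Disk Queue Length" "\Memory\Available MBytes" "\SQLServer:Buffer Manager\Page life expectancy" "\SQLServer:General Statistics\User Connections"
logman start SQLFreezeBaseline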
March 14, 2008 at 10:35 am
mtassin,
One thing that *may* help shed some light on SQL Server's state if the freeze happens again is to generate a "mini dump" (I know, I know, I didn't name it) of the SQL Server process while the freeze is happening.
For SQL Server 2000 systems:
How to use Sqldumper.exe to generate dump files for Windows applications
http://support.microsoft.com/kb/827690
For a mini-dump file, run Sqldumper.exe by typing the following command:
\..\80\com\sqldumper.exe ProcessID 0 0x24 0 ..\LOG
Note that you need to use version 6.0.17.0 or later of the Dbghelp.dll file, located in the same directory as the Sqldumper.exe file. Do yourself a favor and set this up ahead of time. I had to perform all of this during a conference call with our Server team and several vendor staff, from a developer's workstation. Not what I had planned for that afternoon. If you need a copy of Dbghelp.dll, send me a private message and I can get a copy to you.
For SQL Server 2005 systems:
How to use the Sqldumper.exe utility to generate a dump file in SQL Server 2005
http://support.microsoft.com/kb/917825
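One thing worth noting for both versions: the ProcessID argument is just the Windows process ID of sqlservr.exe. It's printed near the top of the SQL Server errorlog ("Server process ID is ..."), or, on Windows Server 2003 and later, you can grab it from a command prompt with something like:
tasklist /FI "IMAGENAME eq sqlservr.exe"
On older boxes, Task Manager (with the PID column turned on) shows the same thing.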
Analyzing the output of the dump file is for another thread ...
Hope This Helps,
"Key"
MCITP: DBA, MCSE, MCTS: SQL 2005, OCP
March 14, 2008 at 12:30 pm
SQL Server 2000 Operations Guide: Monitoring and Control
http://www.microsoft.com/technet/prodtechnol/sql/2000/maintain/sqlops5.mspx
SQLDiag.exe
To get a good idea of the current state of your SQL Server, including current user information, DLL versions, configuration information, and database size information, you can run a utility called SQLDiag.exe. The default location for this utility is C:\Program Files\Microsoft SQL Server\Mssql\Binn\SQLDiag.exe. When you run this utility, the following messages appear in a command prompt window:
Connecting to server (YOUR SERVER NAME)
Getting file C:\Program Files\Microsoft SQL Server\MSSQL\log\ERRORLOG
Getting file C:\Program Files\Microsoft SQL Server\MSSQL\log\ERRORLOG.1
Getting file C:\Program Files\Microsoft SQL Server\MSSQL\log\ERRORLOG.2
Getting file C:\Program Files\Microsoft SQL Server\MSSQL\log\ERRORLOG.3
Getting file C:\Program Files\Microsoft SQL Server\MSSQL\log\ERRORLOG.4
Getting file C:\Program Files\Microsoft SQL Server\MSSQL\log\ERRORLOG.5
Getting file C:\Program Files\Microsoft SQL Server\MSSQL\log\ERRORLOG.6
Getting registry information
Getting library version information
Getting configuration information
Getting current user information
Getting lock information
Getting database information
Getting product information
Getting extended procedures information
Getting process information
Getting input buffers
Getting head blockers
Getting machine information. Please wait; this may take a few minutes
Data Stored in C:\Program Files\Microsoft SQL Server\MSSQL\log\SQLdiag.txt
After the utility runs, it leaves a file on your server that lists all the detailed information about your server that you might ever need. If you contact Microsoft Product Support Services at some point, the information in this file will be useful to them.
If you intend to run SQLDiag.exe more than once and would like to save the output from each run, make sure to rename the SQLDiag.txt file (to something unique) after each run. Otherwise, the contents of SQLDiag.txt will be overwritten each time you run the executable file.
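If you want to automate that, here's a rough batch sketch (assuming the default install path; the %DATE%/%TIME% formats depend on regional settings, so adjust as needed):
rem Run SQLDiag and keep a uniquely named copy of the output
cd /d "C:\Program Files\Microsoft SQL Server\MSSQL\Binn"
SQLDiag.exe
ren "..\LOG\SQLdiag.txt" "SQLdiag_%DATE:/=-%_%TIME::=-%.txt"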
For those in a SQL Server 2000 Clustered or virtual SQL Server Environment:
How to run the SQLdiag utility on a clustered instance of SQL Server or on virtual SQL Server
http://support.microsoft.com/kb/233332
"Key"
MCITP: DBA, MCSE, MCTS: SQL 2005, OCP
March 20, 2008 at 9:22 am
I think I've tracked this one down...
When I looked at the server after a lockup and reboot... there wasn't anything to see.
But it appears we're shrinking the Log file down too far when we Backup the Transaction log.
Is there a configuration I can set to specify a minimum Transaction Log File size? The big cause of the problem seems to be:
1. On Sunday, the log file is truncated via a backup and then shrunk down to about 40 MB.
2. On Monday morning, a huge batch process runs, which causes multiple log file grow operations (at least this seems to be the case... a 40 MB log file for a 30 GB db just seems out of kilter to me), one after the next, which eventually times out the SQL Server connection from GP but leaves the SQL Server accessible via EM.
I'd like to give them a 1 GB minimum size log file after the maintenance tasks are done shrinking the heck out of it. I've looked through the jobs and maintenance plans, and it doesn't look like we're running a DBCC SHRINKFILE or anything like that. But my log file has definitely shrunk.
March 20, 2008 at 9:47 am
Thanks for the follow-up. Glad to hear you've figured it out. Check to make sure that the autoshrink property is not enabled.
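Something along these lines will show whether it's on and turn it off ("YourDatabase" is a placeholder, of course):
-- 1 = auto shrink is on, 0 = off
SELECT DATABASEPROPERTYEX('YourDatabase', 'IsAutoShrink')
-- Turn it off
ALTER DATABASE YourDatabase SET AUTO_SHRINK OFF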
March 25, 2008 at 1:02 pm
Unless you don't have the space, just keep the log file at 30 GB. It is more expensive to keep growing it for the batch job than to just keep it as is. I would also grow the file in 1 GB increments. That should keep the logical and physical fragmentation down.
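Something like this, as a rough example (the database and logical log file names are placeholders; sp_helpfile will show yours):
-- Find the logical file names for the database
EXEC sp_helpfile
-- Grow the log in fixed 1 GB steps instead of a percentage
ALTER DATABASE YourDatabase
MODIFY FILE (NAME = YourDatabase_Log, FILEGROWTH = 1024MB)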
Tom
March 25, 2008 at 1:11 pm
That's a good tip. I'm keeping the log at 10%... so 3 GB.
I had a maintenance plan and auto-shrink enabled.
I left the maintenance plan in place, so on Sunday it will still shrink the database. I just threw in an additional step to the plan to resize the log back to 3 GB when it's done (something like the step below). I'd rather have better control over the resize of the log file via the maintenance plan itself, but such is life.
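The extra step is just a plain T-SQL task along these lines (the database and logical file names are placeholders; ours came from sp_helpfile):
-- Re-grow the log to a sane working size after the Sunday shrink
ALTER DATABASE YourDatabase
MODIFY FILE (NAME = YourDatabase_Log, SIZE = 3072MB)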
I turned off auto-shrink... up until I came here, I never created a database with it on, so I just overlooked it as a potential problem until I saw my 3 GB log shrink back to 40 MB in the time it took me to walk up one floor and back down. We've got enough space that even if the log balloons up to 30 GB it won't use it all... but when you've got test databases on the same box that are basically backup/restores of the live system, that can be a problem. The Sunday shrink can catch it and size it back down.
The large batch processes don't fill up more than about 600-800 MB worth of log, so that leaves tons of room for the log to handle the batch and at the same time support any other tasks going on. We dump/truncate the log to disk every 2 hours, so it's not very likely we'll ever see a madly expanding log from any tasks the accountant folks try to do.
Thanks everybody for all the suggestions and ideas.