October 25, 2007 at 8:23 am
Has anyone here ever worked with the -g startup parameter? I am troubleshooting issues with Red Gate's SQL Backup program. The backups for the larger databases fail on a regular basis with a bogus "out of disk space" error. The backups are written to a network share, and it is possible that communication and/or memory issues are causing the problems.
One of their forum posts discusses setting the -g startup parameter in order to reserve memory for extended procedure code. I am willing to do this, but I want to do some research into the benefits and drawbacks of using it first.
The server in question is a Windows 2003 SP2 machine with 7.5 GB of RAM. Both Physical Address Extension and the /3GB switch are enabled.
The SQL Server instance is SQL Server 2005 SP2 and currently hosts 75 databases. AWE is enabled.
We typically have high page file usage on this server.
Any thoughts?
“If you're not outraged at the media, you haven't been paying attention.”
October 30, 2007 at 1:58 pm
Especially with large backup files, it is a BAD idea to use a network share 😉
* Noel
October 31, 2007 at 4:13 am
There are a number of issues you should look at.
a) High page file usage. Try to find the cause of this. You should limit the maximum memory SQL can use. On a 7.5 GB box, SQL should normally be capped at about 6.5 GB, otherwise you do not leave enough memory for Windows and everything outside the buffer pool to function efficiently (see the sketch after this list). This may be all you need.
b) I completely agree with my friend noeld about the potential problems of backing up large databases over the network. It is FAR more reliable to back up to a local disk and then copy the file over the network.
c) Do all of your backups have the same start time? Try staggering the start times so that only (say) 4 backups run simultaneously. This reduces pressure on free memory - we did this at my old place and it improved backup reliability considerably.
d) If all else fails, add the -g parameter. Check BOL for the current default value, and set -g to a higher value. At my old place we had to put a -g flag on the DW server to give the DTS processes enough memory to run reliably. I suggest you increase it by 50 MB or 100 MB, and if this does not help, do the same again a few times. If you get as far as reserving 1 GB with -g, then -g is not relevant to your problem and you should remove the switch (see the note after this list for where -g is set).
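For anyone who wants to try (a) and (b), here is a minimal T-SQL sketch; the 6656 MB cap, the MyBigDB name and the paths are illustrative values, not taken from the original poster's setup:

-- (a) Cap SQL Server memory at about 6.5 GB on the 7.5 GB box.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 6656;
RECONFIGURE;

-- (b) Back up to a local drive first; copy the file to the share
-- in a separate step afterwards (database name and path are hypothetical).
BACKUP DATABASE MyBigDB TO DISK = N'D:\Backups\MyBigDB.bak' WITH INIT;

For (d), note that -g is a startup parameter rather than an sp_configure option: in SQL Server Configuration Manager you append something like ;-g384 to the instance's startup parameters string and restart the service.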
Original author: https://github.com/SQL-FineBuild/Common/wiki/ 1-click install and best practice configuration of SQL Server 2019, 2017, 2016, 2014, 2012, 2008 R2, 2008 and 2005.
When I give food to the poor they call me a saint. When I ask why they are poor they call me a communist - Archbishop Hélder Câmara
October 31, 2007 at 6:01 am
I don't really have a choice about that, at this time. But I'll certainly keep that in mind as we move forward.
“If you're not outraged at the media, you haven't been paying attention.”
October 31, 2007 at 6:07 am
As far as memory capping goes, how much should I reserve on a machine with 15.4 GB of memory?
“If you're not outraged at the media, you haven't been paying attention.”
October 31, 2007 at 7:14 am
Start by capping SQL at 2 GB below what Windows sees on machines with over 10 GB of memory. You may be able to fine-tune to a cap that gives SQL more memory, but the percentage gain diminishes as server memory increases.
If you run things other than the SQL Server engine on a 15 GB box, you need to allow room for those as well. For example, if you also run Analysis Services, allow AS enough memory to load all of its metadata, and reduce the amount of memory SQL gets by the same amount. Likewise, if you run SQL Full-Text Search, reduce SQL memory by the size of the full-text catalogs.
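As a worked example for the 15.4 GB box, assuming it is dedicated to the database engine: 15.4 GB minus 2 GB leaves about 13.4 GB, i.e. roughly 13700 MB:

-- Illustrative cap for a dedicated box: 15.4 GB - 2 GB ≈ 13.4 GB ≈ 13700 MB.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 13700;
RECONFIGURE;

If Analysis Services or full-text catalogs also run on the box, subtract their footprint from that figure before setting the cap.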
Original author: https://github.com/SQL-FineBuild/Common/wiki/ 1-click install and best practice configuration of SQL Server 2019, 2017, 2016, 2014, 2012, 2008 R2, 2008 and 2005.
When I give food to the poor they call me a saint. When I ask why they are poor they call me a communist - Archbishop Hélder Câmara
October 31, 2007 at 3:31 pm
Hi Ed,
It's been a long time. Hope you are doing well 😀
* Noel
November 8, 2007 at 7:13 am
I've used the -g switch, but with SQL 2000; my problems with MemToLeave were caused by query plans, but I've also heard that LiteSpeed can suffer in the same way.
You might find that taking the /3GB switch off helps - there's only so much lower memory available in 32-bit, and without wishing to sound like a stuck record, the real answer is to move to 64-bit; I just don't understand why the uptake is so slow. Try 384 first with -g, then 512. Be careful - you don't have much space to work in, so try to limit out-of-process programs and disable all services that are not needed.
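If you want evidence of MemToLeave pressure before and after a -g change, one common approach (my suggestion, not something from this thread) is the undocumented DBCC MEMORYSTATUS command - treat it as a before/after snapshot, not a tuning recipe:

-- Undocumented but widely used: dumps the server's memory clerks and regions
-- so you can compare virtual address space usage across a -g change.
DBCC MEMORYSTATUS;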
[font="Comic Sans MS"]The GrumpyOldDBA[/font]
www.grumpyolddba.co.uk
http://sqlblogcasts.com/blogs/grumpyolddba/