May 5, 2009 at 8:26 am
Microsoft strongly recommends that we NEVER set this switch due to the SQL 2000 2 GB memory limit. How can I see how much memory my server is using? What can I clean in my database to get more space?
May 5, 2009 at 8:34 am
Krasavita (5/4/2009)
Thank you so much for your help. How can I get to the boot.ini file and get access to 3 GB? How can I find out if it works only in the Advanced Server OS? Thank you again
The boot.ini file is in the root of your C:\ drive. You'll have to set your folder options to show hidden files to see it. Adding the switch will require a re-boot for the change to take effect.
May 5, 2009 at 8:39 am
Krasavita (5/5/2009)
Microsoft strongly recommends that we NEVER set this switch due to the SQL 2000 2 GB memory limit. How can I see how much memory my server is using? What can I clean in my database to get more space?
Where did you see this recommendation? I'm a bit confused; the /3GB switch comes from Microsoft.
As far as the memory goes, the SQL Server Target and Total memory counters that we've discussed tell you how much memory SQL Server has available (Target) and how much SQL Server is currently using (Total).
I don't think there is really anything you can do to 'clean up' your database that will free up memory. SQL Server will use as much memory as it has available to it. Most of that memory is used to swap data pages in and out for select/insert/update/delete operations. The more memory you have available, the more data SQL Server can keep cached. Not enough memory means that SQL Server will have to go to disk more often, and your performance may begin to decline.
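If you want to check those two counters without Performance Monitor, something like this should work from Query Analyzer (a quick sketch against the sysperfinfo view in SQL 2000; the counter names should match what Performance Monitor shows):
select counter_name, cntr_value  -- values are reported in KB
from master..sysperfinfo
where object_name = 'SQLServer:Memory Manager'
  and counter_name in ('Target Server Memory(KB)', 'Total Server Memory (KB)')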
May 5, 2009 at 9:29 am
Where did you see this recommendation? My network admin spoke to Microsoft and
we were looking at the SQL server's boot.ini file to ensure that the /3GB switch WAS NOT present. He strongly recommends that we NEVER set this switch due to the SQL 2000 2 GB memory limit.
Thank you:-)
May 5, 2009 at 9:50 am
I think your Microsoft guy is full of it. Here's the Microsoft documentation that shows how to use the /3GB switch to set up your SQL Server 2000 instance:
http://support.microsoft.com/kb/274750
Microsoft quote "Microsoft Exchange Server 2003 and Microsoft SQL Server 2000 and the /3GB functionality are supported in a production environment."
http://support.microsoft.com/kb/291988/
It's not SQL Server with the 2 GB limitation; it is a Windows limitation. The /3GB switch instructs the OS to use only 1 GB for itself and frees up 3 GB for user applications (then, of course, /PAE and AWE allow access to > 4 GB, but that won't work for you). Once you've got the switch in, you adjust your max server memory setting to 3 GB and you'll see SQL Server use that extra 1 GB. Again, SQL Server is not limited to 2 GB - the OS is limited in what it makes available to user applications. This is an OS setting, not a SQL Server setting.
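For reference, once the switch is in place and the server has been rebooted, bumping max server memory is a one-time sp_configure change (a sketch; 3072 is 3 GB expressed in MB, and it's an 'advanced' option so it has to be exposed first):
exec sp_configure 'show advanced options', 1
reconfigure
go
exec sp_configure 'max server memory (MB)', 3072  -- 3 GB in MB
reconfigure
go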
May 5, 2009 at 1:35 pm
Thank you for your help. This Microsoft guy came and looked at our environment and did not recommend it. My company wants to stick with his recommendations. He said that our performance would be really bad if we set the switch.
Are you saying I am having memory problems on the server?
You said "By default, SQL Server only has access to 2 of the 4 GB of memory," but then you are saying "SQL Server is not limited to 2GB - the OS is limited to what it makes available to the user applications." Can you please explain more so I can understand? Right now I have:
SQLServer:Memory Manager Target Server Memory(KB) 1675000
SQLServer:Memory Manager Total Server Memory (KB) 1675000
So it is 1.6 GB. Why do I need more? Thank you again
May 5, 2009 at 1:54 pm
OK. Here's a recap. You have 8 GB of memory in your server. You have your SQL instance configured (max server memory) to use 6 GB. You have AWE set to ON.
Here's the problem.
You are running SQL Server 2000 Standard Edition and AWE memory is not supported in that version. AWE (along with /PAE switch) allows for SQL Server to access the memory buffers above 4 GB. The AWE setting tells SQL Server that it can get to the memory above 4 GB and the /PAE switch tells the OS that it can access memory above 4 GB.
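(For reference only, since it buys you nothing on Standard Edition: on an Enterprise Edition instance AWE would be switched on with sp_configure, followed by a service restart. A sketch:)
exec sp_configure 'show advanced options', 1
reconfigure
go
exec sp_configure 'awe enabled', 1  -- Enterprise Edition only; takes effect after a restart
reconfigure
go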
But again, you can't do that because of your SQL version. So here's the result. By default, a Windows server (this may have changed with Server 2008, but that is out of scope here) has access to 4 GB of memory. 2 GB is reserved for the OS and 2 GB is available for user applications. So your SQL Server 2000 instance (a user application) can only access 2 GB of memory. There is some additional overhead that SQL Server needs, so it subtracts that from the 2 GB available, and you'll usually see around 1.75 GB of memory available to SQL Server. In your case, you see 1.67 GB.
So no matter what you set your AWE setting or your max server memory setting (above 2 GB) to, you'll only see the SQL Server Target Memory counter reach 1.67 GB.
So back to your question: do you need more memory? That is why we started down this path and I gave you the performance counters to watch. From your page life expectancy and buffer cache hit ratio values, I would say that more memory will help your application perform better.
Here are your options:
1. Add the /3GB switch so that your SQL Server Target Memory will increase by 1 GB to roughly 2.67 GB. Remember, increasing memory will improve your database performance. Saying performance will be "really bad" cannot be backed up with a technical reason; the OS does not need 2 GB of memory until you get up to around 16 GB in your server. Test the change and its effects on your performance. If it really is bad, remove the switch from the boot.ini file and you'll be back to square one. Yes, your business folks want to listen to the Microsoft guy, but why not test it out and know for yourself what kind of impact it will have on your system? I've seen plenty of Microsoft guys who have been wrong before. If they don't want to do this, get a second opinion from another Microsoft guy.
2. Upgrade your SQL Server version to 2000 Enterprise Edition or higher. I'm not sure if you can even get that now that 2005 and 2008 are out; I think support has been removed for the 2000 products. So you'll want to do the research on which SQL Server 2005/2008 editions allow for memory > 4 GB and use one of those. The downside here is the expense of upgrading.
3. Do nothing. You have signs of memory pressure, but if you are not getting complaints from the users and all of the work is getting done on time, wait to upgrade until you have a solid business reason to do so.
May 6, 2009 at 10:04 am
Thank you so much, you explained a lot to me.
Question:
You said that I have signs of memory pressure. What, exactly, is the sign? My buffer is 92% (that's a good sign), so what is telling you it's a bad sign?
My users are using websites and they complain that sometimes pages get SQL timeouts, and I don't know if this is server related. I can't upgrade to 2005 until our websites get upgraded to the 2007 ecommerce version; we are running ecommerce 2000.
Thank you again
May 6, 2009 at 10:52 am
Question:
The cntr_value is 217 on my server for:
object_name = 'SQLServer:Buffer Manager' and
counter_name = 'Page life expectancy'
What is a good number for the server and what is a bad number? Thank you
May 6, 2009 at 11:43 am
Your buffer cache hit ratio is a bit on the low side. I would want this to be higher than 95%. This is a cumulative measure since the last service restart. When was the last time your SQL instance was restarted?
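One thing to watch if you pull this one from sysperfinfo instead of Performance Monitor: the ratio is stored as two raw counters, so you have to divide by the base value yourself. A sketch:
select 100.0 * r.cntr_value / nullif(b.cntr_value, 0) as [Buffer cache hit ratio %]
from master..sysperfinfo r, master..sysperfinfo b
where r.object_name = 'SQLServer:Buffer Manager'
  and r.counter_name = 'Buffer cache hit ratio'
  and b.object_name = 'SQLServer:Buffer Manager'
  and b.counter_name = 'Buffer cache hit ratio base'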
Your page life expectancy is definitely low. This should not be consistently below 300. The keyword here is consistently. If you look at a snapshot of this value point-in-time (like from sysperfinfo), you'll only be seeing real-time values. You'll want to run a Performance Monitor session for 1-2 hours, capturing some of your peak activity times, to determine the trend for this counter. Consistent levels below 300 indicate memory pressure.
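If you can't get a Performance Monitor session scheduled right away, you can get a rough trend from T-SQL by sampling the counter in a loop; here's a sketch that takes one reading a minute for an hour (adjust the delay and loop count as needed):
set nocount on
create table #ple (sample_time datetime, ple int)
declare @i int
set @i = 0
while @i < 60
begin
    insert into #ple (sample_time, ple)
    select getdate(), cntr_value
    from master..sysperfinfo
    where object_name = 'SQLServer:Buffer Manager'
      and counter_name = 'Page life expectancy'
    waitfor delay '00:01:00'  -- one sample per minute
    set @i = @i + 1
end
select * from #ple order by sample_time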
My gut feeling, based on both of these counters being below optimal levels, is that your system could use more memory, but again, much of that is based on user perception on how the application is performing.
May 6, 2009 at 11:46 am
Also, you may consider running the trace as I suggested, but also adding in the counters that I referenced earlier in the thread. In addition to those counters, consider watching SQLServer:Buffer Manager - Checkpoint pages/sec.
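A small caution on that one if you read it from sysperfinfo rather than Performance Monitor: as far as I know, per-second counters are stored there as running totals, so you need two samples and a delta to get a rate. Roughly:
declare @v1 int, @v2 int
select @v1 = cntr_value
from master..sysperfinfo
where object_name = 'SQLServer:Buffer Manager'
  and counter_name = 'Checkpoint pages/sec'
waitfor delay '00:00:10'  -- 10-second sampling window
select @v2 = cntr_value
from master..sysperfinfo
where object_name = 'SQLServer:Buffer Manager'
  and counter_name = 'Checkpoint pages/sec'
select (@v2 - @v1) / 10.0 as [Checkpoint pages/sec, averaged over 10s]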
What I like to do is configure a scheduled trace to run and save the data off to binary files. You'll then be able to use Performance Monitor to look at point-in-time slices of the file for any time window that you choose. You can also import the file into Excel if you wish to graph out the values. I can help with looking at the file if you can get one created. It may be a good learning experience for you.
If you want to do this, let me know and I'll get you some info on some other counters that I like to see for a performance overview.
May 6, 2009 at 1:17 pm
Absolutely, I would like to run the monitor as long as it is not going to impact the performance of the websites. I would like to get some help from you. Thank you so much
You are saying that my buffer cache hit ratio is a bit on the low side. I just ran this from the server:
Object: SQLServer:Buffer Manager
Buffer cache hit ratio    99.875
Database pages            132515.000
Lazy writes/sec           5.000
Page life expectancy      454.000
Target pages              206856.000
Total pages               206856.000
So it is higher than 95%. The last time my SQL instance was restarted was 2 weeks ago.
When I ran this:
select cntr_value
from master..sysperfinfo
where object_name = 'SQLServer:Buffer Manager'
  and counter_name = 'Page life expectancy'
I always get different numbers; it could be over 500 or around 200. Are you saying this should never be below 300?
May 6, 2009 at 4:06 pm
Hmmm.....your counters have changed quite a bit since the first post where they were lower. I think running a couple hour perfmon will help tell the tale. To answer your question, perfmon will add a bit of overhead on the server, but it should not be enough to notice.
As for the page life expectancy, it can drop below 300, but it should not stay there long. Seeing how this metric trends out in a perfmon log will help a bunch. The value should go up and down, particularly when data is flushed out to make room for more, but it should stay consistently above 300.
I've attached a word doc that I've used many times in the past to send to clients for system performance troubleshooting. It captures a few high level counters that I then use to narrow the problem down to CPU, Disk, or Memory problems. Use the instructions in the file to create and run a perfmon session. I would recommend running it for a 2 hour interval that captures part of your peak time for your application's usage. Once done, post the results file as an attachment and I'll go through it and present my findings to you so that you can see where the counters play into performance.
It would also help if you asked someone to track user complaints of slowness while you are running the trace. Having a set point in time to look for a problem sometimes helps with looking at the data.
May 11, 2009 at 6:47 am
I have the file; I ran the monitor for 5 days. How can I discuss it with you? Thank you
May 11, 2009 at 9:43 am
So is the output broken down into multiple files? I would say pick one and add it to this thread as an attachment. I can then look at it and we'll catch up in the next day or so to discuss the findings. I won't be able to look at the file until this evening at the earliest, but it won't take me long to go through it.