June 3, 2010 at 9:30 am
Hi all,
Problem: sqlservr.exe is consuming high memory (SQL Server 2005 SP3 Enterprise Edition x64)
Memory: 16 GB of RAM
OS: Windows Server 2003 R2 64-bit
Databases hosted: MOSS databases only
I have a serious problem: SQL Server consumes 15.3 GB of RAM all the time, and I didn't set a "max server memory" value. Does anyone have any clue/advice regarding this issue?
Thanks in advance, appreciate your support
June 3, 2010 at 12:40 pm
Set "max server memory" to a specific value that leaves some memory to the OS and other processes running on the server. I usually leave 2 GB on a 16 GB system. That should be more than enough on a system that only runs SQL Server.
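As a sketch of how that could be set (assuming the 16 GB box described above and leaving 2 GB to the OS; the 14336 MB figure is just that rule of thumb, adjust for your own workload):

```sql
-- Sketch: cap SQL Server's memory at 14 GB on a 16 GB server,
-- leaving ~2 GB for the OS and other processes.
-- "max server memory" is specified in MB and is an advanced option.
exec sp_configure 'show advanced options', 1;
reconfigure;
exec sp_configure 'max server memory (MB)', 14336;
reconfigure;
```

The change takes effect immediately; no restart is required.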
June 4, 2010 at 2:11 am
Nils Gustav Stråbø (6/3/2010)
Set "max server memory" to a specific value that leaves some memory to the OS and other processes running on the server. I usually leave 2 GB on a 16 GB system. That should be more than enough on a system that only runs SQL Server.
I don't think SQL Server 2005 requires this kind of static memory setting.
-------Bhuvnesh----------
I work only to learn Sql Server...though my company pays me for getting their stuff done;-)
June 4, 2010 at 4:45 am
Rather than just capping the amount of RAM SQL Server can utilise, I think you should be looking into the actual cause of the problem.
Have you gathered any SQL Server and OS level statistics using the DMVs or Perfmon to see what is actually going on "under the hood"?
Chris
www.sqlAssociates.co.uk
June 4, 2010 at 4:55 am
Bhuvnesh (6/4/2010)
Nils Gustav Stråbø (6/3/2010)
Set "max server memory" to a specific value that leaves some memory to the OS and other processes running on the server. I usually leave 2 GB on a 16 GB system. That should be more than enough on a system that only runs SQL Server.
I don't think SQL Server 2005 requires this kind of static memory setting.
It does.
Setting max memory is a recommended practice on a 64-bit edition of SQL Server.
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
June 6, 2010 at 2:08 am
Thanks Guys,
so far I have 3 different approaches based on the above replies:
1 -- SQL Server doesn't need such a "Max Memory" setting
2 -- It is recommended to set a "Max Memory" value, especially on an x64 edition
3 -- Rather than just capping the amount of RAM SQL Server can utilise, look into the actual cause of the problem
I am so lost 🙂 please advise
June 6, 2010 at 2:56 am
You haven't told us if you are experiencing any problems, like slow-running queries etc. High memory usage is not an indication that there is a problem, but if you are seeing high I/O utilization then you might have a problem with memory pressure. High I/O can be caused by many things: index/table scans causing large reads into memory (forcing existing pages in the buffer cache to be removed/written back to disk), tempdb activity (sort/hash spills, the row version store, temporary tables and table variables, LOB variables, etc.).
The fact that SQL Server uses all the RAM available to it is not unusual.
The buffer cache (data pages) and cached execution plans are the two things that usually take up most of the memory, but there are other cache types that use memory too.
Run the following queries to get an overview of how memory used by SQL Server is distributed between buffer cache (data pages) and procedure cache. You'll also get a list of how much memory each database is using for storing data pages.
/*
Target memory is the total amount of memory that SQL Server can use.
Total memory is the memory that is actually committed from the buffer pool.
Page life expectancy is number of seconds that a page is expected to stay in cache.
Should be more than 5 minutes (300 seconds).
*/
select object_name,counter_name,instance_name,cntr_value from sys.dm_os_performance_counters
where (object_name='SQLServer:Buffer Manager' and counter_name='Page life expectancy')
or (object_name='SQLServer:Memory Manager' and counter_name='Target Server Memory (KB)')
or (object_name='SQLServer:Memory Manager' and counter_name='Total Server Memory (KB)')
/*
Buffer cache hit ratio in percent
*/
select convert(decimal(5,2),100.*c.cntr_value/b.cntr_value) as [buffer_cache_hit_ratio]
from
(select cntr_value from sys.dm_os_performance_counters where object_name='SQLServer:Buffer Manager' and counter_name='Buffer cache hit ratio') c
cross join (select cntr_value from sys.dm_os_performance_counters where object_name='SQLServer:Buffer Manager' and counter_name='Buffer cache hit ratio base') b
/*
Displays how much memory is used by the procedure cache and how much is used for storing
data pages.
*/
;with proc_cache as
(select sum(single_pages_kb+multi_pages_kb)/1024 as procedure_cache_mb from sys.dm_os_memory_cache_counters
where type in ('CACHESTORE_OBJCP','CACHESTORE_SQLCP','CACHESTORE_PHDR','CACHESTORE_XPROC')
), buffer_cache as
(select count(*)*convert(bigint,8192)/1024/1024 used_buffer_cache_mb
from sys.dm_os_buffer_descriptors)
select p.procedure_cache_mb,b.used_buffer_cache_mb,p.procedure_cache_mb+b.used_buffer_cache_mb as total_cache_used_mb from proc_cache p,buffer_cache b
/*
Lists the amount of buffer cache that is used by data pages by each database.
*/
select case when database_id=32767 then 'mssqlsystemresource' else db_name(database_id) end as database_name,count(*)*convert(bigint,8192)/1024/1024 used_buffer_cache_mb
from sys.dm_os_buffer_descriptors
group by database_id with rollup
June 6, 2010 at 3:15 am
sqlguy-549681 (6/6/2010)
2 -- It is recommended to set a "Max Memory" value, especially on an x64 edition
Once you've prevented SQL Server from starving the OS (which it is very capable of doing), you can then investigate why it's using so much memory (if you want to; large memory usage is perfectly normal for SQL Server) and take any necessary steps.
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
June 6, 2010 at 5:12 am
Nils Gustav Stråbø (6/6/2010)
You haven't told us if you are experiencing any problems, like slow-running queries etc. High memory usage is not an indication that there is a problem, but if you are seeing high I/O utilization then you might have a problem with memory pressure. High I/O can be caused by many things: index/table scans causing large reads into memory (forcing existing pages in the buffer cache to be removed/written back to disk), tempdb activity (sort/hash spills, the row version store, temporary tables and table variables, LOB variables, etc.).
The fact that SQL Server uses all the RAM available to it is not unusual.
The buffer cache (data pages) and cached execution plans are the two things that usually take up most of the memory, but there are other cache types that use memory too.
First of all, thanks for your valuable reply. Here are the findings:
I was having a performance problem (slow performance), and based on the findings we upgraded RAM from 6 GB to 16 GB. The problem hasn't occurred since, but we MUST be 100% sure that it will not happen again, since we are in the test phase and the application (a MOSS portal; SQL Server hosts only the MOSS databases) will be released to around 200 users. We are afraid the problem will occur again.
Can you please explain how I should read the values I get from the script?
Thanks in advance
June 6, 2010 at 6:11 am
The first thing you should do is set "max server memory" to prevent the OS from running out of memory.
Also, try to avoid setting min and max to the same value. If they are equal and the OS needs memory, SQL Server will not trim its working set, and the result could be that parts of the memory used by SQL Server are paged out to the swap file, causing a sudden performance drop. These events are logged to the error log.
http://support.microsoft.com/kb/918483
To prevent decreased performance over time, the only good advice I can give you is to monitor and store historical data for performance counters like I/O requests/waits, miscellaneous memory counters, CPU, request queues, wait stats, etc. Some of these values can be retrieved using Perfmon, and some are found in DMVs/DMFs.
We are using a combination of Quest Spotlight for SQL Server and Confio Ignite to get a good picture of what is happening, and what has happened, on our most critical systems.
June 6, 2010 at 6:17 am
sqlguy-549681 (6/6/2010)
I was having a problem with Performanc (Slow Performance) and based on the findings we upgraded RAM from 6 GB to 16 GB
Another thing to keep in mind: just one single query doing a table/clustered index scan on a large table could be enough to slow down your system, if executed frequently enough. Check the index stats and plan cache DMVs for this kind of "non-optimal" behavior.
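As a rough sketch of that kind of check (assuming you just want a quick list of scan-heavy objects in the current database; counters reset at each service restart):

```sql
-- Sketch: objects with the most full scans since the last restart.
-- High user_scans on a large table or clustered index hints that some
-- query is repeatedly reading the whole object into the buffer cache.
select db_name(s.database_id) as database_name,
       object_name(s.object_id) as object_name,
       s.index_id,
       s.user_scans,
       s.user_seeks
from sys.dm_db_index_usage_stats s
where s.database_id = db_id()
  and s.user_scans > 0
order by s.user_scans desc;
```

Cross-reference the worst offenders with their plans in the plan cache to find the queries responsible.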
June 6, 2010 at 6:21 am
Hi,
I have found something that may be related: the paging file value has been set to 0. What do you think, and what is the right value I should use?
Also, do I have to assign the "Lock Pages in Memory" right to the service account running SQL Server?
I'm running SQL Server 2005 Enterprise Edition on Windows Server 2003 x64. What do you think?
here are the query Result
Thanks In advance
June 6, 2010 at 7:09 am
What "page value" is set to 0?
If SQL Server is the only process running on the server, or if "min server memory" is a few GB less than "max server memory", then I don't see how "Lock Pages in Memory" will help.
There won't be any paging out to the swap file if SQL Server is able to respond to "low memory" notifications from the OS (min memory < max memory).
The result of the query tells you that a page is expected to live for over 3000 seconds, which is good. A "best practice" is that this value should be at least 300 seconds. The other values tell you that SQL Server has taken possession of 15.5 GB of memory and is currently using 9 GB of it for the buffer pool.
What is the output of the other queries? One of them will tell you how much is used by the buffer cache (data pages) and how much is used for procedure cache (execution plans). There are other memory types too, but they are usually insignificant in size compared to buffer and procedure cache.
The last query will show you how much of the buffer cache is used by each database. You can also query how much of the buffer cache is used by individual tables/indexes/partitions, and compare that value to the total size of the corresponding object. If (almost) the entire content of the object is in cache, then there is a great chance that this has been caused by a scan operator in an execution plan. If this is a fast-growing table/index/partition, then you should be on alert, because this could lead to memory pressure at a later time.
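As a sketch of that per-object breakdown (run it in the database you are interested in; it joins the buffer descriptors to allocation units the standard way, and assumes in-row and row-overflow data are what you care about):

```sql
-- Sketch: buffer cache used per table/index in the current database.
-- For allocation unit types 1 (IN_ROW_DATA) and 3 (ROW_OVERFLOW_DATA),
-- container_id maps to the partition's hobt_id. 8 KB per page.
select object_name(p.object_id) as object_name,
       p.index_id,
       count(*) * 8 / 1024 as cached_mb
from sys.dm_os_buffer_descriptors b
join sys.allocation_units au on au.allocation_unit_id = b.allocation_unit_id
join sys.partitions p on p.hobt_id = au.container_id and au.type in (1, 3)
where b.database_id = db_id()
group by p.object_id, p.index_id
order by cached_mb desc;
```

Compare cached_mb against the object's total size (e.g. from sp_spaceused) to see which objects sit almost entirely in cache.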
June 6, 2010 at 7:12 am
I meant the paging file is set to 0.
I have appended the other results above.
Thanks in advance
June 6, 2010 at 7:34 am
The numbers look very good. SQL Server even has unused memory available for caching. That never happens on my systems 🙂
Just keep in mind that if total memory starts to get close to target memory, it doesn't mean that your system will perform poorly. But when that happens, you should keep an eye on the I/O counts and waits, page life expectancy and buffer cache hit ratio.
You can also monitor I/O activity for each database file by executing the query below. I guess you will see very little activity, except for the log files, which is perfectly normal since CUD operations have to be logged.
select db_name(database_id) as database_name,file_id,size_on_disk_bytes/1024/1024 as size_on_disk_mb
,num_of_bytes_read/1024/1024 as total_read_mb
,io_stall_read_ms/case when num_of_reads=0 then null else num_of_reads end as stall_ms_per_read
,num_of_bytes_written/1024/1024 as total_written_mb
,io_stall_write_ms/case when num_of_writes=0 then null else num_of_writes end as stall_ms_per_write
from sys.dm_io_virtual_file_stats(null,null)