June 27, 2008 at 7:25 am
Jeff Moden (6/27/2008)
jjssilva (6/27/2008)
Jeff Moden (6/26/2008)
jjssilva (6/26/2008)
Check Buffer Cache values. Also, how big are the Databases? The DB shouldn't take more than 80% of the available RAM.
So... what do you do when you run against a terabyte database??? 😉 Or do you mean the database shouldn't take more than 80% of the disk space on the system (instead of RAM)?
You tell me, I'm not a pro. 😀
Heh... gave you the chance to fix what you said... 😉
Obviously, a DB can be thousands of times larger than available RAM (memory). RAM and disk space are not the same thing. If it were said that a "DB shouldn't take more than 80% of the available disk space", then I might agree.
I keep waiting for my guys to install that 1 Exabyte RAMdisk I asked for. It would make all of those nasty delays just melt away... I wonder what the delay is..... :P
----------------------------------------------------------------------------------
Your lack of planning does not constitute an emergency on my part...unless you're my manager...or a director and above...or a really loud-spoken end-user... All right - what was my emergency again?
June 27, 2008 at 8:19 am
What I meant was, in order to get the maximum performance gain from the DB, it should be in memory. Usually around 80% is enough. You can have less memory than the size of the DB, but you might lose performance due to I/O reads.
Am I correct? 🙂
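For what it's worth, one quick way to check how much of each database is actually sitting in the buffer cache is a query against sys.dm_os_buffer_descriptors (a minimal sketch, SQL 2005 and later; data pages are 8 KB each):

-- Approximate buffer cache usage per database, in MB
SELECT DB_NAME(database_id) AS database_name,
       COUNT(*) * 8 / 1024  AS cached_mb
FROM sys.dm_os_buffer_descriptors
GROUP BY database_id
ORDER BY cached_mb DESC;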
June 27, 2008 at 8:32 am
The memory usage that Task Manager reports for SQL Server isn't an accurate representation of the memory SQL Server is currently using. When you don't have max server memory set appropriately (sp_configure 'max server memory'), SQL will continue to acquire additional memory on top of what is already allocated as it needs it, and it won't release that memory until other applications require it. That doesn't mean SQL is actively using all of that memory; it is simply allocated and available to SQL.
My suggestions ...
1) Give your SQL Service account the ability to lock pages in memory
- This will give SQL the ability to keep data in physical memory, preventing the system from paging to disk
2) Set your max server memory
- This will cap how much memory SQL can consume, so other applications aren't forced to page and SQL doesn't have to release memory to them on demand
- If this is a pure SQL Server box with only one instance, then leave 2 GB for the OS and allocate the rest to SQL. If there are other applications and/or instances, then size it accordingly
3) Set your min server memory
- This gives you a guaranteed amount of memory that SQL will not fall below once it has been reached
- Again, specific to your needs and environment, scope appropriately
Understanding memory usage in SQL is probably the hardest thing I've come across as a DBA. I'm still trying to understand it myself and still working out proper monitoring strategies for it. I'm always interested in hearing additional insights on the matter.
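For illustration, a minimal T-SQL sketch of suggestions 2 and 3 (the memory figures below are assumptions for a dedicated box with roughly 32 GB of RAM, not a recommendation; size them for your own environment. The "lock pages in memory" right from suggestion 1 is granted through the Windows Local Security Policy rather than sp_configure):

-- max/min server memory are advanced options
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Assumed values for a dedicated ~32 GB box: leave ~2 GB for the OS,
-- and set a floor that SQL will not drop below once it has been reached
EXEC sp_configure 'max server memory (MB)', 30720;
EXEC sp_configure 'min server memory (MB)', 8192;
RECONFIGURE;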
June 27, 2008 at 8:37 am
Matt Miller (6/27/2008)
I keep waiting for my guys to install that 1 Exabyte RAMdisk I asked for. It would make all of those nasty delays just melt away... I wonder what the delay is..... :P
I heard those are no good unless you get the optional coffee dispenser to go with it. :hehe:
--Jeff Moden
Change is inevitable... Change for the better is not.
June 27, 2008 at 8:40 am
jjssilva (6/27/2008)
What I meant was, in order to get the maximum performance gain from the DB, it should be in memory. Usually around 80% is enough. You can have less memory than the size of the DB, but you might lose performance due to I/O reads. Am I correct? 🙂
I wouldn't count on it caching the entire database...
--Jeff Moden
Change is inevitable... Change for the better is not.