August 1, 2006 at 2:35 am
Hi Gurus,
Our system is using SQL Server 2005 on Windows XP. SQL Server memory is configured to 128 MB (min) ~ 512 MB (max).
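For reference, we set those limits with sp_configure, something like this (I'm reproducing it from memory, so treat the exact statements as a sketch; the values are in MB):

    -- enable the advanced options so the memory settings are visible
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    -- min / max server memory for the instance, in MB
    EXEC sp_configure 'min server memory', 128;
    EXEC sp_configure 'max server memory', 512;
    RECONFIGURE;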
After our system starts, SQL Server takes about 100 MB of memory. After the system runs for a while, the memory used by SQL Server goes up to 512 MB. Then we exit our application and expect SQL Server's memory usage to drop back down. Unfortunately, it seems SQL Server never releases any memory until we restart it.
How can we get SQL Server to shrink its memory usage according to the workload? I thought this would be done automatically by the SQL Server memory manager.
I'd really appreciate your help!
Best regards,
Jennifer
August 1, 2006 at 3:15 am
Hi Jennifer,
When using dynamic memory configuration, as you have done, SQL Server is designed to take as much memory as it needs (up to the max limit). So in your case it has taken 512 MB of memory. SQL Server will keep hold of it unless another application on the server needs the memory, in which case SQL Server will relinquish some memory for the application(s) that need it.
In other words, SQL Server will "shrink" its memory automatically, but only when something else needs memory that SQL Server isn't actively using.
Incidentally, 512 MB doesn't sound like a lot of memory.
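If you want to watch this behaviour rather than take my word for it, SQL Server 2005 exposes its memory-manager counters through a DMV. A query along these lines (the LIKE filters are just to avoid worrying about the exact counter spelling) shows what the instance has actually committed ("Total") versus what it is currently aiming for ("Target"), both in KB:

    -- current vs. target memory for the instance, in KB
    SELECT counter_name, cntr_value
    FROM sys.dm_os_performance_counters
    WHERE object_name LIKE '%Memory Manager%'
      AND (counter_name LIKE 'Total Server Memory%'
           OR counter_name LIKE 'Target Server Memory%');

You'll typically see Total climb up towards Target and then sit there, which is exactly the behaviour you're describing.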
Regards,
August 1, 2006 at 4:43 am
Hi Karl,
Thanks for the quick answer! But I still don't understand. On my machine only one application uses SQL Server. After that application has been shut down for a long time, there should be no database access at all. So why can't SQL Server detect that its workload is really low? It just holds the maximum memory and never drops back toward the min memory size. BTW, when will SQL Server relinquish the memory to "another application"? Is that decided by the OS or by SQL Server itself?
In my mind, dynamic memory configuration means SQL Server can periodically release its unused buffers back to the OS. However, based on my observation, SQL Server just holds on to all the memory it has acquired.
Best regards,
Jennifer
August 1, 2006 at 6:33 am
Hi Jennifer,
That's just the way it is. SQL Server is greedy in that way.
Basically, an internal SQL Server process (the lazy writer) continuously polls the system to check whether the OS has sufficient memory to service requests from other applications. If the OS has enough memory, SQL Server keeps hold of its current allocation. That way the memory is immediately to hand when SQL Server needs it; if it relinquished the allocation, it would have to go through the process of grabbing it back from the OS later on.
If the OS doesn't have enough memory to service application requests then SQL Server will release some memory.
In other words, what's the point in releasing the memory if nothing else needs it?
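That said, if you really do need to pull the allocation back down without restarting the service, you can lower the max setting yourself and SQL Server will trim its buffer pool towards the new cap over the next little while. A rough sketch (the 256 is just an example figure, not a recommendation):

    -- lower the cap; SQL Server will shrink itself towards the new target
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'max server memory', 256;  -- MB
    RECONFIGURE;

You can put it back up to 512 the same way once you're happy for SQL Server to grow again.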
August 3, 2006 at 12:54 am
Karl is correct. The memory is released by an OS function, not a SQL function. If another Windows app needs the memory currently used by SQL then it'll get released to that process.
If there's enough RAM that non-SQL processes can still function OK, then whatever SQL has taken will stay taken. It's only when the operating system knows that another process needs that memory that the allocation gets reduced.
Like Karl says, what's the point in releasing memory if nothing else needs it!
cheers
Danster