October 5, 2011 at 3:20 pm
jared-709193 (10/5/2011)
Yes, but like you said, there is a reason to keep it enabled; I have not found a good reason to disable it. EDIT: Also, I respectfully disagree. I know the subject has been beaten to death, and people do what works for them. However, I have found some good explanations of how the page file is used and how having it disabled or too small can cause issues. I believe that the page file will always be used when enabled, no matter what your memory size.
http://stackoverflow.com/questions/2588/appropriate-pagefile-size-for-sql-server
Thanks,
Jared
It doesn't hurt for it to be there; it's just that you don't want any paging going on during normal operation. Paging is an indication that there is insufficient memory for resident processes.
Sounds like you do a lot of bulk loading; you might consider a separate staging database on a different drive, or perhaps even a table on a filegroup on a separate drive array.
Other than that, there is really nothing all that remarkable about the size of your database. Memory can always be utilized in many ways, so I would go for that.
The probability of survival is inversely proportional to the angle of arrival.
October 5, 2011 at 3:23 pm
jared-709193 (10/5/2011)
I believe that the page file will always be used when enabled no matter what your memory size.
Yes, it will always be used, because Windows will be proactively writing memory to the page file just in case it needs those memory pages free. However, 'always be used' != 'always necessary'.
That 1.5*RAM is not a great recommendation for larger servers. The only time you absolutely must have a page file larger than physical RAM is if you're doing a full memory dump. That's very occasionally necessary to debug strange crashes, but most of the time a crash writes out a kernel dump, not a complete memory dump. (http://blogs.technet.com/b/askperf/archive/2008/01/08/understanding-crash-dump-files.aspx)
If the server is dedicated to SQL and the memory configuration for SQL has been sensibly done, tuned, and monitored, there's little need for a swap file. If SQL is just left with default memory settings and all sorts of other things are done on the box, Windows will frequently need to swap stuff out to the page file.
http://blogs.technet.com/b/askperf/archive/2007/12/14/what-is-the-page-file-for-anyway.aspx
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
October 5, 2011 at 3:25 pm
mendesm (10/5/2011)
I still have not found convincing arguments as to why anyone would run a SQL server in a way that swap/page file use would be of any benefit, and by that I mean actually used, not just configured to be available. As far as my SQL instances are concerned, either they can work with the memory they're given or they require more RAM installed. The only time I want my SQL data on disk is when it is committed there at write time, or just before it is fetched to be used by one query or another.
Yet, I gave you a link to an article that describes exactly why it can be beneficial to your system. 🙂 Mind you, I am working constantly at becoming an expert at SQL Server, not Windows Memory Management and don't want to become too involved. Increasing performance by moving it to a separate partition is one thing, but I have no reason (especially with how cheap space is these days) to disable it.
Jared
Jared
CE - Microsoft
October 5, 2011 at 3:31 pm
jared-709193 (10/5/2011)
mendesm (10/5/2011)
I still have not found convincing arguments as to why anyone would run a sql server in a way that swap/page file use would be of any benefit, and by that I mean actually used, not just configured to be available. As far as my sql instances are concerned, either they can work with the memory they're given or they require more RAM installed. The only time I want my SQL data in disk is when it is committed there when it's written or just before it is fetched up to be used by a query or another.
Yet, I gave you a link to an article that describes exactly why it can be beneficial to your system. 🙂 Mind you, I am working constantly at becoming an expert at SQL Server, not Windows Memory Management and don't want to become too involved. Increasing performance by moving it to a separate partition is one thing, but I have no reason (especially with how cheap space is these days) to disable it.
Jared
Be careful saying a different partition is helpful, as you can have multiple partitions on the same disk (e.g. when you get a laptop from Dell and they put C:\ and D:\ on one disk). You actually want to keep the page file away from the same disks as your SQL data/log/tempdb files to avoid IO contention.
In regard to the 1.5 TB page file on SQL Server, Gail has already pointed out that that approach is pretty much unnecessary unless the CSS team is asking for a full memory dump. The recommendation of 1.5*physical memory was fine when the largest systems around only had 20-30 GB, but now we're seeing memory in the TB range. Tell your SAN admin you need 2 TB just for a page file (which, technically speaking, shouldn't be utilized unless you're starving the system of physical memory) and see if they don't choke you Vader-style.
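To put numbers on why the old 1.5*RAM rule stops scaling, here is a quick back-of-the-envelope calculation. The RAM sizes are hypothetical examples, not figures from this thread:

```python
# Quick arithmetic on the 1.5*RAM page file guideline at different
# server sizes. Figures are illustrative, not a sizing recommendation.

def page_file_1_5x(ram_gb):
    """Page file size (GB) under the old 1.5*RAM rule of thumb."""
    return 1.5 * ram_gb

for ram_gb in (4, 32, 256, 1536):  # 1536 GB = 1.5 TB
    print(f"{ram_gb:>5} GB RAM -> {page_file_1_5x(ram_gb):,.0f} GB page file")
```

At 1.5 TB of RAM the rule would demand roughly a 2.25 TB page file, which is exactly the "2 TB just for a page file" conversation with the SAN admin described above.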
=============================================================
/* Backups are worthless, Restores are priceless */
Get your learn on at SQL University!
Follow me on Twitter | Connect on LinkedIn
My blog: http://sqlchicken.com
My book: Pro SQL Server 2008 Policy-Based Management
October 5, 2011 at 3:42 pm
For anyone who wants a deep (and I do mean Deep) dive into Windows memory internals from one of the Windows Kernel team, see these videos.
http://channel9.msdn.com/Events/TechEd/NorthAmerica/2011/WCL405
http://channel9.msdn.com/Events/TechEd/NorthAmerica/2011/WCL406
They probably need watching 3 or more times before it all starts making sense.
In massive summary, the commit limit is the total amount of memory that can be granted to applications (by which I include the OS itself), and is equal to physical memory + swap file. Windows will proactively write changed memory pages to the swap file so that, if some other app asks for memory and there are no free pages, it can quickly move some pages from one application to another, because the application that lost memory pages has its data preserved in the swap file.
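The arithmetic in that summary can be sketched in a few lines. All figures below are hypothetical examples, not measurements from any server in this thread:

```python
# Sketch of the commit arithmetic described above.
# All figures are hypothetical examples, in gigabytes.

physical_ram_gb = 64
page_file_gb = 8

# Commit limit: the total memory Windows can promise to applications,
# i.e. physical RAM plus the page file.
commit_limit_gb = physical_ram_gb + page_file_gb

# If the current commit charge approaches the limit, new allocations
# start failing even when many pages are idle rather than free.
current_commit_gb = 58
headroom_gb = commit_limit_gb - current_commit_gb

print(f"Commit limit: {commit_limit_gb} GB, headroom: {headroom_gb} GB")
```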
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
October 5, 2011 at 3:49 pm
Sqlchicken (10/5/2011)
jared-709193 (10/5/2011)
mendesm (10/5/2011)
I still have not found convincing arguments as to why anyone would run a sql server in a way that swap/page file use would be of any benefit, and by that I mean actually used, not just configured to be available. As far as my sql instances are concerned, either they can work with the memory they're given or they require more RAM installed. The only time I want my SQL data in disk is when it is committed there when it's written or just before it is fetched up to be used by a query or another.
Yet, I gave you a link to an article that describes exactly why it can be beneficial to your system. 🙂 Mind you, I am working constantly at becoming an expert at SQL Server, not Windows Memory Management and don't want to become too involved. Increasing performance by moving it to a separate partition is one thing, but I have no reason (especially with how cheap space is these days) to disable it.
Jared
Be careful saying different partition is helpful as you can have multiple partitions on same disk (i.e. when you get a laptop from Dell and they put C:\ and D:\ on one disk). You actually want to keep page file away from same disks as your SQL data/log/tempdb files to avoid IO contention.
In regard to the 1.5 TB page file on SQL Server, Gail has already pointed out that that approach is pretty much unnecessary unless the CSS team is asking for a full memory dump. The recommendation of 1.5*physical memory was fine when the largest systems around only had 20-30 GB, but now we're seeing memory in the TB range. Tell your SAN admin you need 2 TB just for a page file (which, technically speaking, shouldn't be utilized unless you're starving the system of physical memory) and see if they don't choke you Vader-style.
Yes, sorry. I should have said a separate physical disk. I understand the concerns with TBs of memory. However, in "most" real-life situations (correct me if I am wrong) I see that much memory as overkill. Sure, there are some intense database servers out there that need it, but it is my impression that "most" users of MSSQL don't need more than 32 GB to successfully run their platforms without exceeding physical memory. As for 1.5*physical memory, it seems this is a baseline, and that it is important to monitor various counters to really know what it should be set at. I just don't see how completely disabling it can help more than it can hurt. I'm not trying to argue; it's just that nobody can point me to a definitive article on the benefits of disabling it vs. configuring it properly, but I can find several about the bad things that "could" occur when it is disabled.
Thanks,
Jared
Jared
CE - Microsoft
October 5, 2011 at 3:50 pm
GilaMonster (10/5/2011)
For anyone who wants a deep (and I do mean Deep) dive into Windows memory internals from one of the Windows Kernel team, see these videos.
http://channel9.msdn.com/Events/TechEd/NorthAmerica/2011/WCL405
http://channel9.msdn.com/Events/TechEd/NorthAmerica/2011/WCL406
They probably need watching 3 or more times before it all starts making sense.
In massive summary, the commit limit is the total amount of memory that can be granted to applications (by which I include the OS itself), and is equal to physical memory + swap file. Windows will proactively write changed memory pages to the swap file so that, if some other app asks for memory and there are no free pages, it can quickly move some pages from one application to another, because the application that lost memory pages has its data preserved in the swap file.
AWESOME!!! Thank you!
Jared
Jared
CE - Microsoft
October 5, 2011 at 4:35 pm
If your page file is large enough to support a full dump, it will perform a full dump every time. If you have a very large amount of RAM, this can cause a tiny blip to become a major outage.
You do NOT want your server to have to write out 1 TB of RAM to disk if there is a one-time transient issue. If there is a recurring issue, you can increase the page file to capture a full dump. I would wait to do this until PSS (or someone else qualified to analyze a full dump) has instructed you to capture one. An extremely small percentage of DBAs know how to analyze a full dump. A mini-dump is sufficient for troubleshooting most issues that pop up anyway.
Plus, if your server is configured to allow a 1 TB full dump and a recurring issue occurs, how much free disk space would you recommend having on hand? You could fill up an entire SAN in a single weekend.
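To see why a full dump on a big-memory server turns a blip into an outage, consider the raw write time. The disk throughput figure below is an assumption for illustration, not a measurement:

```python
# Back-of-the-envelope: how long writing a full memory dump takes.
# The sequential write throughput figure is an assumed example.

ram_tb = 1.0
throughput_mb_s = 200  # assumed write speed to the dump volume, MB/s

ram_mb = ram_tb * 1024 * 1024      # 1 TB expressed in MB
seconds = ram_mb / throughput_mb_s
print(f"Writing {ram_tb} TB of RAM at {throughput_mb_s} MB/s "
      f"takes ~{seconds / 60:.0f} minutes")
```

Under these assumptions the server spends well over an hour doing nothing but writing the dump, which is the "tiny blip becomes a major outage" scenario described above.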
A page file of 1.5*RAM was the norm back in the days when you were lucky to have a SQL Server with 3 or 4 GB of RAM. This is no longer the case. I leave the page file at the Windows default size and settings on all production servers (except for an SSAS server that is experiencing memory pressure).
And just for clarification, I've worked with servers ranging from 2 GB of RAM to 2 TB of RAM. After more than 11 years, I have only had to increase the paging file to capture a full dump one time.
October 5, 2011 at 5:00 pm
isuckatsql (10/5/2011)
I will be using:
Raid1 - Program Files (2 HD)
Raid10 - Data files (10 HD)
Raid1 - Log Files (2 HD)
Raid0 - TempDB files (2 HD)
Sure you want to use that? Using RAID 0 for TempDB means that if there's a drive failure in that array, SQL is down until you can replace the drive. That could be a fair amount of time.
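The trade-off behind that warning can be made concrete with some simple probability. The 3% annual drive failure rate below is an assumed figure for illustration only:

```python
# Illustrative failure math for a two-drive RAID 0 vs RAID 1 array.
# The 3% annual failure rate per drive is an assumption.

p_drive = 0.03  # assumed probability a single drive fails in a year

# RAID 0 (striping): the array is lost if EITHER drive fails.
p_raid0_loss = 1 - (1 - p_drive) ** 2

# RAID 1 (mirroring): the array is lost only if BOTH drives fail
# (ignoring rebuild windows, for simplicity).
p_raid1_loss = p_drive ** 2

print(f"RAID 0 yearly loss risk: {p_raid0_loss:.4f}")  # ~0.0591
print(f"RAID 1 yearly loss risk: {p_raid1_loss:.4f}")  # ~0.0009
```

Striping roughly doubles the loss risk while mirroring cuts it by orders of magnitude, which is why a RAID 0 tempdb means accepting that the instance goes down with any single drive.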
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
October 5, 2011 at 5:25 pm
jared-709193 (10/5/2011)
mendesm (10/5/2011)
I still have not found convincing arguments as to why anyone would run a sql server in a way that swap/page file use would be of any benefit, and by that I mean actually used, not just configured to be available. As far as my sql instances are concerned, either they can work with the memory they're given or they require more RAM installed. The only time I want my SQL data in disk is when it is committed there when it's written or just before it is fetched up to be used by a query or another.<snip>Yet, I gave you a link to an article that describes exactly why it can be beneficial to your system. 🙂 <unsnip>
Jared
To "A" system, sure, not to my SQL Servers. Nothing runs on my SQL servers other than the bare minimum OS and SQL Server components. If other things are running on a SQL Server that may be another story. However, if you're going to buy a machine of the size and capabilities of the ones mentioned in this thread thus far because your SQL instance needs it that way, why would anyone want anything other than SQL Server running on them? A small shop with a small budget with smaller servers which may host SQL along side with non-SQL stuff is a completely different story.
Again, I have yet to find a convincing argument for using a page file with regards to just SQL Server.
October 5, 2011 at 5:27 pm
... and see if they don't choke you Vader-style.
I'm going to have to use that in the near future 😀
October 5, 2011 at 5:33 pm
GilaMonster (10/5/2011)
isuckatsql (10/5/2011)
I will be using:
Raid1 - Program Files (2 HD)
Raid10 - Data files (10 HD)
Raid1 - Log Files (2 HD)
Raid0 - TempDB files (2 HD)
Sure you want to use that? Using RAID 0 for TempDB means that if there's a drive failure in that array, SQL is down until you can replace the drive. That could be a fair amount of time.
Exactly. When I tell people I run tempdb on RAID 10, I often feel I should help them pop their eyeballs back into their skulls. People often learn that tempdb is not important, but they don't realize what that truly means.
October 5, 2011 at 7:10 pm
Dell had some white papers written that recommended that specific RAID configuration.
RAID 0 for TempDB was suggested due to its speed.
I understand what you are saying, but I'm sure they considered this. It does seem a bit weird!
October 5, 2011 at 9:01 pm
Forget what Dell recommends. What's important is what your environment needs.
If you have 5 SQL Servers in your environment, and one of them goes down because its tempdb drive died while the others stay up and your business can operate with that one instance down, then RAID 0 or even a single local disk will be OK most of the time.
But consolidate 23 separate instances onto a 2-node active-active SQL Server cluster, and chances are none of those instances can go down for an entire day because a tempdb drive failed.
Even if you have just one SQL Server: if it is the system that runs all of your invoicing, you probably can't afford to risk the tempdb drive dying on you during nightly processing on closing day. Then again, maybe you can live with it if it were to happen on a Friday night and you don't have to have all those nightly jobs done until Monday morning. But I don't like working on weekends any more than hardware store vendors like to stay open on Sundays past 6 pm just because my tempdb drive may have to be replaced. How about always having a couple of drives ready to go on a shelf somewhere, just in case the store is already closed? 😀
October 6, 2011 at 2:38 am
isuckatsql (10/5/2011)
Dell had some white papers written that recommended that specific Raid configuration.
Raid0 for TempDB was suggested due to its speed.
That's nice. Is it Dell's problem or your problem if your SQL server is down for a week while you get a replacement drive?
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability