October 23, 2014 at 11:47 am
I am trying to understand the relationship between PLE and paging. One of our servers shows PLE at 2400 right now, and rising. However I am seeing paging going on during this time, at some points up to 120/sec. Can anyone point me to an explanation of this relationship?
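For what it's worth, PLE is also exposed through the DMVs, so you can sample it yourself rather than relying on the monitoring tool's label. A minimal sketch (the object_name filter assumes a default instance; named instances prefix counters with MSSQL$<instance>):

```sql
-- Read Page Life Expectancy straight from the DMVs.
-- Assumes a default instance; adjust the object_name filter for named instances.
SELECT [object_name], counter_name, cntr_value AS ple_seconds
FROM sys.dm_os_performance_counters
WHERE counter_name = 'Page life expectancy'
  AND [object_name] LIKE '%Buffer Manager%';
```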
Jared
CE - Microsoft
October 23, 2014 at 12:00 pm
What, exactly, do you mean by 'paging'?
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
October 23, 2014 at 12:08 pm
Good question. This is through Idera's SQLdm and was defined as "the number of pages being swapped in and out of memory per second."
I assumed this meant pages of data from sql, but I suppose now that it could be anything on the server. Thoughts?
My end game here is to show a dev team that they don't need more memory, but by seeing paging and not understanding it, I may not be able to convince them.
Jared
CE - Microsoft
October 23, 2014 at 12:18 pm
SQLKnowItAll (10/23/2014)
I assumed this meant pages of data from sql, but I suppose now that it could be anything on the server. Thoughts?
Only one. Figure out exactly what it's showing you. There are multiple pages/sec counters in perfmon and several others with similar names; without knowing what that value is, you can't draw any useful conclusions.
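To illustrate the point, SQL Server alone exposes a pile of similarly named page-related counters (page reads/sec, page writes/sec, lazy writes/sec, page life expectancy, and so on), before you even get to the Windows Memory\ counters. A quick sketch to see how easy it is to grab the wrong one:

```sql
-- List the many similarly named page-related counters SQL Server exposes.
SELECT [object_name], counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name LIKE '%page%'
ORDER BY [object_name], counter_name;
```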
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
October 23, 2014 at 12:51 pm
It looks like this is "the dynamic memory currently in use by SQL." I'll be honest, I'm not really sure I understand what that means other than it is SQL memory. I'll dig some more to see exactly which counter they are using. Unfortunately, this is one area I am really lacking in: determining the memory "needed" by a specific server/workload.
Jared
CE - Microsoft
October 24, 2014 at 7:11 am
I'm a developer so I'm a little biased, but I'm not sure that "right-sizing" memory is the best approach. Memory is a relatively cheap way to increase performance for a database engine. After all, anything that can live in memory doesn't have to hit disk. Disk access is VERY slow, whether it's reading pages of data to satisfy a query, or sorting a large dataset in TempDB (yes, even when TempDB is on SSDs.)
Now, I'm certainly not an advocate of blindly throwing hardware at a problem (and it sounds like you aren't either.) But maybe a more useful approach would be examining the actual queries being run against the database to see if there's some cheap optimization to be had. Are the developers selecting columns and rows they don't need and filtering in the application? Are they writing cross-joins? Are they writing memory-intensive code when they could be writing more CPU-intensive code (yes, that's usually the alternative)? Are the tables making proper use of indexes to help avoid sorts/scans?
If all is relatively well there, then maybe the answer IS more memory. And remember - memory is also used for temporary datasets (and PLE won't tell you anything there.)
October 24, 2014 at 7:54 am
sqlslacker (10/24/2014)
I'm a developer so I'm a little biased, but I'm not sure that "right-sizing" memory is the best approach. Memory is a relatively cheap way to increase performance for a database engine. After all, anything that can live in memory doesn't have to hit disk. Disk access is VERY slow, whether it's reading pages of data to satisfy a query, or sorting a large dataset in TempDB (yes, even when TempDB is on SSDs.) Now, I'm certainly not an advocate of blindly throwing hardware at a problem (and it sounds like you aren't either.) But maybe a more useful approach would be examining the actual queries being run against the database to see if there's some cheap optimization to be had. Are the developers selecting columns and rows they don't need and filtering in the application? Are they writing cross-joins? Are they writing memory-intensive code when they could be writing more CPU-intensive code (yes, that's usually the alternative)? Are the tables making proper use of indexes to help avoid sorts/scans?
If all is relatively well there, then maybe the answer IS more memory. And remember - memory is also used for temporary datasets (and PLE won't tell you anything there.)
Sure, but if it is not needed, it is not cheap; it is expensive. Memory only increases performance if it is needed. If performance sucks, but all of the data and work already have enough memory, then increasing it does nothing. I am currently working with an engineer at Idera to figure out exactly what is being reported here to help understand the relationship.
Jared
CE - Microsoft
October 24, 2014 at 9:39 am
So... Idera described what I was looking at as "It is the sum of Memory\ Pages Input/sec and Memory\ Pages Output/sec. " So, I now know that PLE going up and this counter going up means that there is enough memory. It is not related to the page file. Really it is just an indication of how much data is being moved in and out of memory per second. So, it could be high if stuff is getting paged out, or high if stuff is being read in.
Basically, I would say that they don't need more memory at this point because PLE is continually going up. The "paging" counter is pretty much unrelated to my suggestions for memory.
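As a sanity check before ruling out a memory shortage, it can help to look at a second signal alongside PLE. A sketch (Memory Grants Pending is a standard Memory Manager counter; PLE trending up plus zero pending grants supports the "enough memory" conclusion, while sustained pending grants would not):

```sql
-- Two cheap corroborating signals for "is there a memory shortage?"
SELECT counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name IN ('Page life expectancy', 'Memory Grants Pending');
```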
Jared
CE - Microsoft
October 24, 2014 at 4:54 pm
SQLKnowItAll (10/23/2014)
My end game here is to show a dev team that they don't need more memory, but by seeing paging and not understanding it, I may not be able to convince.
Heh... I'd buy the memory no matter what. It's cheaper than trying to prove that it isn't needed. Once you buy it, then you can tell them what you told them originally: it's not the box, it's the code.
--Jeff Moden
Change is inevitable... Change for the better is not.
October 24, 2014 at 5:06 pm
Jeff Moden (10/24/2014)
SQLKnowItAll (10/23/2014)
My end game here is to show a dev team that they don't need more memory, but by seeing paging and not understanding it, I may not be able to convince. Heh... I'd buy the memory no matter what. It's cheaper than trying to prove that it isn't needed. Once you buy it, then you can tell them what you told them originally. It's not the box, it's the code.
Great minds think alike 🙂 It's a VM, so I added memory and now they want more cores. Lol
Jared
CE - Microsoft
October 25, 2014 at 5:56 am
SQLKnowItAll (10/24/2014)
"It is the sum of Memory\ Pages Input/sec and Memory\ Pages Output/sec. " [snip] Really it is just an indication of how much data is being moved in and out of memory per second. So, it could be high if stuff is getting paged out, or high if stuff is being read in.
Did you look at the description in perfmon?
Pages Input/sec is the rate at which pages are read from disk to resolve hard page faults. Hard page faults occur when a process refers to a page in virtual memory that is not in its working set or elsewhere in physical memory, and must be retrieved from disk. When a page is faulted, the system tries to read multiple contiguous pages into memory to maximize the benefit of the read operation. Compare the value of Memory\Pages Input/sec to the value of Memory\Page Reads/sec to determine the average number of pages read into memory during each read operation.
Pages Output/sec is the rate at which pages are written to disk to free up space in physical memory. Pages are written back to disk only if they are changed in physical memory, so they are likely to hold data, not code. A high rate of pages output might indicate a memory shortage. Windows writes more pages back to disk to free up space when physical memory is in short supply. This counter shows the number of pages, and can be compared to other counts of pages, without conversion.
That said, SQL has its own memory manager, so it is unlikely to cause either of those counters to increase.
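If you want to check whether the SQL Server process itself is being paged by the OS, there's a DMV for that too. A sketch (note that page_fault_count includes soft faults, so treat it as a trend rather than an absolute; a non-zero locked_page_allocations_kb means Locked Pages in Memory is in use and the buffer pool can't be paged out at all):

```sql
-- Check OS-level memory pressure on the SQL Server process.
SELECT physical_memory_in_use_kb,
       locked_page_allocations_kb,
       page_fault_count,
       process_physical_memory_low,
       process_virtual_memory_low
FROM sys.dm_os_process_memory;
```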
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability