August 28, 2019 at 5:48 pm
We are planning to upgrade SQL Server from 2014 to SQL Server 2017. Based on current utilization, CPU usage never exceeds 5-10%, and memory usage is only noticeable during the maintenance jobs. So I was thinking about trimming the resources, i.e., provisioning less CPU and memory than we have now. Is there a way to identify how much CPU and memory I would need based on the database size? Looking at Page Life Expectancy, my understanding is that a value below about 5 minutes indicates a memory need; in our case this value stays high during business hours, i.e., 11-12 hours. I understand it would need more during DB maintenance windows, which is realistic. It's currently on a physical box, so going virtual with fewer resources wouldn't be a bad idea, right? Please advise.
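(For anyone wanting to check the Page Life Expectancy value mentioned above without a monitoring tool: a minimal sketch of one common way to read it, from the `sys.dm_os_performance_counters` DMV. The value is reported in seconds; on NUMA machines there is one row per node under the `Buffer Node` object as well.)

```sql
-- Current Page Life Expectancy (seconds) from the Buffer Manager object.
-- counter_name is padded with trailing spaces, hence the LIKE.
SELECT [object_name],
       counter_name,
       cntr_value AS ple_seconds
FROM sys.dm_os_performance_counters
WHERE counter_name LIKE 'Page life expectancy%'
  AND [object_name] LIKE '%Buffer Manager%';
```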
August 29, 2019 at 5:59 pm
Any input would be appreciated! Thanks in advance!
August 29, 2019 at 9:43 pm
Thanks, Sue! Can you please shed some light on what historical information I can look into so that I can come up with a recommendation for memory and CPU needs? I have collected data using a monitoring tool and, as I said before, the resource utilization is not high in terms of memory and CPU. Please advise.
August 29, 2019 at 10:52 pm
You would want to look at memory, disk, and CPU together, as changes to one can impact the others. And if you are going to change all of them, then there isn't much you can predict about any of them. For example, reducing RAM can increase disk and CPU usage. Changing from physical to virtual can also have an impact. If you can stand up the VM in development and start testing the hardware changes and their impact, running the same loads and processes you have today, that's really the only way to get some idea of what the system can tolerate.
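(A side note on gathering the historical CPU data asked about above: one commonly used sketch, not specific to this thread, pulls the last ~30 minutes of SQL and system CPU utilization from the scheduler-monitor ring buffer. It's undocumented but widely used for quick baselining; each record covers roughly one minute.)

```sql
-- Recent CPU utilization snapshots (approx. one record per minute).
-- sql_cpu_pct = CPU used by SQL Server; other = 100 - idle - sql.
SELECT TOP (30)
       rec.value('(./Record/@id)[1]', 'int')                                            AS record_id,
       rec.value('(./Record/SchedulerMonitorEvent/SystemHealth/SystemIdle)[1]', 'int') AS system_idle_pct,
       rec.value('(./Record/SchedulerMonitorEvent/SystemHealth/ProcessUtilization)[1]', 'int') AS sql_cpu_pct
FROM (
    SELECT CONVERT(xml, record) AS rec
    FROM sys.dm_os_ring_buffers
    WHERE ring_buffer_type = 'RING_BUFFER_SCHEDULER_MONITOR'
      AND record LIKE '%<SystemHealth>%'
) AS rb
ORDER BY record_id DESC;
```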
Sue
August 30, 2019 at 5:43 pm
You're awesome, that's really great input. Also, we're planning to implement Always On with DR (asynchronous) and reporting (synchronous) replicas on the same database system. Do you see any issues with Always On if we end up going to a VM with hardware changes? I will certainly plan on testing as you mentioned above, by spinning up a VM in development, but I wanted to check whether physical hardware vs. a VM makes any difference for an Always On AG.
August 30, 2019 at 6:48 pm
Another thought: maybe the memory is covering up for other sins? Since CPU usage is less than 5%, it could be that the servers have plenty of memory, so my take is that reducing the memory could increase CPU usage. Again, this is just a theory/assumption, but I totally agree with Sue that it needs to be tested using a VM in development.
August 31, 2019 at 3:28 am
We are planning to upgrade SQL Server from 2014 to SQL Server 2017. Based on current utilization, CPU usage never exceeds 5-10%, and memory usage is only noticeable during the maintenance jobs. So I was thinking about trimming the resources, i.e., provisioning less CPU and memory than we have now. Is there a way to identify how much CPU and memory I would need based on the database size? Looking at Page Life Expectancy, my understanding is that a value below about 5 minutes indicates a memory need; in our case this value stays high during business hours, i.e., 11-12 hours. I understand it would need more during DB maintenance windows, which is realistic. It's currently on a physical box, so going virtual with fewer resources wouldn't be a bad idea, right? Please advise.
It sounds like you have a good history of success. I wouldn't try to trim resources, especially since you cannot predict future changes, ad hoc query usage, future index maintenance requirements, or what other projects will end up living on the same server. Enjoy the current excess of hardware, because it likely won't be long before you need it.
--Jeff Moden
Change is inevitable... Change for the better is not.
September 3, 2019 at 4:44 pm
Another thought: maybe the memory is covering up for other sins? Since CPU usage is less than 5%, it could be that the servers have plenty of memory, so my take is that reducing the memory could increase CPU usage. Again, this is just a theory/assumption, but I totally agree with Sue that it needs to be tested using a VM in development.
Memory absolutely can cover up for other issues; it can end up being a band-aid for things like poor design/indexing, poorly written queries, etc. Get the same specs you have today for the VM you use to test, and play around with it from there. It can often be a struggle to get more hardware resources, so be careful about giving anything up when you go to production, and consider things like the potential for growth and new code rolling out (which may not be so good).
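(If it helps to see what that memory is actually holding before giving any of it up: a minimal sketch, not from this thread, that breaks the buffer pool down by database using `sys.dm_os_buffer_descriptors`. Pages are 8 KB, hence the arithmetic.)

```sql
-- Buffer pool usage per database, largest consumers first.
SELECT CASE database_id
            WHEN 32767 THEN 'ResourceDb'
            ELSE DB_NAME(database_id)
       END                        AS database_name,
       COUNT(*) * 8 / 1024       AS buffer_pool_mb   -- 8 KB pages -> MB
FROM sys.dm_os_buffer_descriptors
GROUP BY database_id
ORDER BY buffer_pool_mb DESC;
```

If one database dominates the buffer pool far beyond its working set, that can be a hint that memory is papering over missing indexes or scan-heavy queries, which is worth investigating before shrinking the box.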
Sue