March 2, 2023 at 8:44 am
Hi All,
This is a little off topic and goes against proper performance troubleshooting. My question is: assume I have a sub-prod environment, be it a DEV, QA, or UAT server, where we don't have the same hardware as production in terms of memory and CPUs.
Now let's say prod has 16 CPUs and 512 GB RAM dedicated to the prod database, but the sub-prod box has a much smaller configuration with 8 CPUs and 256 GB RAM. Also, the sub-prod environments are mostly shared servers hosting at least 40-50 databases on the same SQL instance. When multiple application users are performing UAT testing on their respective databases, we see some slowness, with queries competing for resources. Even when the sub-prod environment has a dedicated database server, sometimes a SQL Server restart helps resolve the performance issue. Why?
My other question is: if we have to restart the SQL instance (say a standalone sub-prod instance where users are doing UAT testing and experiencing slowness in the workload, there's no blocking, but it's a shared instance and the team is okay with a SQL restart), then which of the following restart methods is better?
Assuming that they are okay to run maintenance as well.
Option #1:
Rebuild the indexes
Update the statistics on all tables with FULLSCAN
Restart SQL Server
Option #2:
Restart SQL Server
Rebuild the indexes
Update the statistics on all tables with FULLSCAN
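For illustration, the maintenance piece of either option would be something like this rough sketch (the database name is just a placeholder for whatever sits on the shared instance, and sp_MSforeachtable is an undocumented helper; real maintenance would more likely use a proper maintenance solution):
-- Rough sketch only: rebuild every index in a database, then update all
-- statistics with a full scan. "YourDatabase" is a placeholder name.
USE YourDatabase;
-- Rebuild all indexes on every user table.
EXEC sp_MSforeachtable N'ALTER INDEX ALL ON ? REBUILD;';
-- Update every statistic on every user table with a full scan.
EXEC sp_MSforeachtable N'UPDATE STATISTICS ? WITH FULLSCAN;';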
Again, I know this is not the ideal approach; we should be checking what's currently running on the server, checking for blocking, checking waits, etc. But when time is limited and the users are okay with a SQL restart, which of the two options above is better?
Thanks,
Sam
March 2, 2023 at 2:57 pm
Restarting SQL Server does a whole ton of things. First, you're flushing memory. So queries coming along after the reboot have a clear run at memory to load up just their stuff, as opposed to flushing out what's already there to replace it; that's generally going to look like a short-term performance enhancement. Another thing is, again, memory is cleared, so all plans have to be recompiled. Some plans may benefit from the recompile because statistics have been updated and more accurate row counts are available. You could get the same effect by simply flushing a given plan from the plan cache, or, if you like nuking things, flushing the entire plan cache.
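A minimal sketch of both approaches, assuming you can identify the offending query by its text (the LIKE filter and the hex plan handle below are placeholders, not real values):
-- Find the plan handle for a specific cached plan.
SELECT cp.plan_handle, st.text
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
WHERE st.text LIKE N'%SomeIdentifiableQueryText%';
-- Remove just that one plan, pasting in the plan_handle returned above
-- (this hex value is only a placeholder).
DBCC FREEPROCCACHE (0x060006001ECA270EC0215D05000000000000000000000000);
-- Or the nuclear option: flush the entire plan cache on the instance.
DBCC FREEPROCCACHE;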
Those are just a couple of points. There are more. But I sure would not, WOULD NOT, rely on reboots as some sort of mechanism to improve performance overall. The improvements you do see are the result of something other than the reboot itself (the recompile of plans, for example) or are only very temporary (my query gets a free run at memory with no contention for one execution). It's not a good practice, at all.
As to restarting SQL Server... what does rebuilding indexes have to do with it? That's an entirely separate issue in every possible way. I simply can't suggest to you which is better.
HOWEVER!!!
If you rebuild indexes, you get all new statistics on those indexes. So rebuilding indexes and then running a statistics update can actually change your stats for the worse: the rebuild already did the equivalent of a full scan, so running an update again afterwards doesn't help (and if that later update uses sampling rather than FULLSCAN, it overwrites better statistics with worse ones). The exception to this is statistics that are not part of an index. You could look for statistics separate from indexes and update just those after an index rebuild, and you may see benefits, depending.
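If you want to go that route, here's a rough sketch that generates UPDATE STATISTICS commands for column-level statistics only; auto-created and user-created stats aren't backed by an index, so an index rebuild doesn't refresh them:
-- Build UPDATE STATISTICS commands for statistics not tied to an index.
SELECT N'UPDATE STATISTICS '
       + QUOTENAME(SCHEMA_NAME(o.schema_id)) + N'.' + QUOTENAME(o.name)
       + N' ' + QUOTENAME(s.name)
       + N' WITH FULLSCAN;' AS UpdateCommand
FROM sys.stats AS s
JOIN sys.objects AS o
  ON o.object_id = s.object_id
WHERE o.is_ms_shipped = 0
  AND (s.auto_created = 1 OR s.user_created = 1);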
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
March 2, 2023 at 6:19 pm
Have you simply tried clearing the proc cache for the database that has the code that's causing you fits, which would cause a recompile of all the code for that database the next time it's used?
DECLARE @DBID INT = DB_ID('PutTheDatabaseNameHere');
DBCC FLUSHPROCINDB (@DBID);
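If the instance is on SQL Server 2016 or later, there's also a documented equivalent you can run from inside the database in question:
USE PutTheDatabaseNameHere;  -- same placeholder database name as above
ALTER DATABASE SCOPED CONFIGURATION CLEAR PROCEDURE_CACHE;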
The other thing to consider is that the system (DEV, QA, UAT server, whatever...) is probably warning you about larger issues you'll hit when your stuff goes to prod. You might want to look into the actual code that causes such issues. Even good hardware has its limits and, considering the sizes of your databases, prod could run into the same scalability issues a lot more quickly than you might think.
--Jeff Moden
Change is inevitable... Change for the better is not.