June 29, 2023 at 1:40 pm
Since I consider myself a learning DBA with only a few years of tuning experience, I struggle to find the root cause when someone complains about application slowness. These are some of the tools I use to find out.
I have a VM, 8 cores, 64 GB RAM, 6 DBs. Largest DB is only 4 GB in size.
I am sure there are other built-in tools I can use, but I was wondering if someone could shed some light on this and suggest what other tools I can use.
"He who learns for the sake of haughtiness, dies ignorant. He who learns only to talk, rather than to act, dies a hypocrite. He who learns for the mere sake of debating, dies irreligious. He who learns only to accumulate wealth, dies an atheist. And he who learns for the sake of action, dies a mystic."
June 29, 2023 at 3:08 pm
The most important thing (assuming basics such as enough RAM and CPU are OK) is likely to analyze the missing index data (do not blindly build every index SQL Server suggests). Create new indexes as required (again, after analysis, only the ones you truly need).
Make sure you're not reindexing your tables too often or reindexing them improperly.
Make sure you keep statistics reasonably up to date for major tables.
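As a sketch of the kind of analysis described above, the missing-index DMVs can be queried directly to get candidates to evaluate (the table name `dbo.Orders` in the statistics example is a placeholder, not from the thread):

```sql
-- Surface missing-index suggestions recorded by the optimizer.
-- These are candidates to analyze, NOT indexes to create blindly.
SELECT TOP (20)
    mid.statement           AS table_name,
    mid.equality_columns,
    mid.inequality_columns,
    mid.included_columns,
    migs.user_seeks,
    migs.avg_user_impact,
    migs.user_seeks * migs.avg_total_user_cost * (migs.avg_user_impact / 100.0)
        AS estimated_benefit
FROM sys.dm_db_missing_index_details AS mid
JOIN sys.dm_db_missing_index_groups AS mig
    ON mig.index_handle = mid.index_handle
JOIN sys.dm_db_missing_index_group_stats AS migs
    ON migs.group_handle = mig.index_group_handle
ORDER BY estimated_benefit DESC;

-- Keep statistics current on a major table (FULLSCAN shown for illustration;
-- a sampled update is often sufficient and cheaper):
UPDATE STATISTICS dbo.Orders WITH FULLSCAN;
```

Note that the missing-index DMVs reset on restart, so the numbers are only as good as the server's uptime.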
SQL DBA,SQL Server MVP(07, 08, 09) "It's a dog-eat-dog world, and I'm wearing Milk-Bone underwear." "Norm", on "Cheers". Also from "Cheers", from "Carla": "You need to know 3 things about Tortelli men: Tortelli men draw women like flies; Tortelli men treat women like flies; Tortelli men's brains are in their flies".
June 29, 2023 at 4:31 pm
Consider enabling Query Store and/or setting up an extended event session to identify long-running queries. If the performance problem is actually related to something in SQL Server, it will almost certainly be the code that is being executed.
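For example, Query Store is a database-level setting, and a simple extended event session can capture long-running statements (the database name, session name, and 5-second threshold below are placeholders, not from the thread):

```sql
-- Enable Query Store on a user database:
ALTER DATABASE [YourDatabase] SET QUERY_STORE = ON;
ALTER DATABASE [YourDatabase] SET QUERY_STORE (OPERATION_MODE = READ_WRITE);

-- Extended Events session capturing statements that run longer than 5 seconds:
CREATE EVENT SESSION [LongRunningQueries] ON SERVER
ADD EVENT sqlserver.sql_statement_completed (
    ACTION (sqlserver.sql_text, sqlserver.database_name)
    WHERE duration > 5000000)          -- duration is in microseconds
ADD TARGET package0.event_file (SET filename = N'LongRunningQueries');

ALTER EVENT SESSION [LongRunningQueries] ON SERVER STATE = START;
```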
When someone reports application slowness, you also need to be able to identify the workflow. Is it slow in a specific portion of the application, or is it just 'overall' slowness? If a specific portion, then you may be able to trace the calls that workflow makes to identify the code that needs focused attention.
If it is just overall slowness, then all you can really determine is how SQL Server is performing at that moment in time. If there isn't a long blocking chain, or significant IO, CPU, or memory pressure, again, at that moment in time, then it most likely isn't the database.
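A quick way to check for a blocking chain at that moment in time is to query `sys.dm_exec_requests`, e.g.:

```sql
-- Snapshot of requests that are currently blocked, with the blocking
-- session, the wait they are stuck on, and the statement text:
SELECT r.session_id,
       r.blocking_session_id,
       r.wait_type,
       r.wait_time,
       t.text AS sql_text
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.blocking_session_id <> 0;
```

An empty result set at the moment of the complaint is one data point suggesting the problem is elsewhere.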
BTW, rebuilding indexes is not something that should be done to improve performance. That is not the goal of index maintenance, and any performance improvement seen after rebuilding an index is incidental to the process. In most cases the index rebuild updated statistics, and it was that update that actually improved performance. Index maintenance is more about managing space and using that space effectively, both in memory and in storage.
Jeffrey Williams
“We are all faced with a series of great opportunities brilliantly disguised as impossible situations.”
― Charles R. Swindoll
How to post questions to get better answers faster
Managing Transaction Logs
June 29, 2023 at 7:33 pm
There are lots of opportunities to get into the weeds with this, but I will try to avoid it.
STOP index maintenance for at least a few days. There is guidance around better ways to do index maintenance, but for the purposes of diagnosing, any index maintenance will reduce your visibility into problem queries.
Enable Query Store as others have suggested, then wait for the metrics to come in and look at the top SQL.
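Once Query Store has collected some history, the top SQL can be pulled from its catalog views. A minimal sketch (run in the target database; "top" by total duration is one reasonable choice among several):

```sql
-- Top 10 queries by cumulative duration recorded in Query Store:
SELECT TOP (10)
    qt.query_sql_text,
    SUM(rs.count_executions)                   AS executions,
    SUM(rs.avg_duration * rs.count_executions) AS total_duration_us
FROM sys.query_store_query_text AS qt
JOIN sys.query_store_query AS q
    ON q.query_text_id = qt.query_text_id
JOIN sys.query_store_plan AS p
    ON p.query_id = q.query_id
JOIN sys.query_store_runtime_stats AS rs
    ON rs.plan_id = p.plan_id
GROUP BY qt.query_sql_text
ORDER BY total_duration_us DESC;
```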
Go to brentozar.com and download the First Responder Kit.
Run sp_BlitzFirst first. If there are any particularly excessive waits, they will show up there.
Run sp_BlitzCache. It will identify the ten cumulatively most costly queries that are in the plan cache since the last time the plan cache was wiped out (after a server restart, index maintenance, certain ALTER DATABASE commands, etc.). The more plan cache history you have, the better the data you get back. It will give you a brief summary of the findings, the missing indexes if there are any, the query execution plan if there is one, and so on. If you see the same query repeated over and over in the results, you can enable forced parameterization so those show up as equivalent queries. Note that this can have negative performance impacts, but it can also be positive.
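Forced parameterization, if you decide to try it, is a database-level setting (the database name below is a placeholder, and as noted above it should be tested carefully because it can hurt as well as help):

```sql
-- Make the optimizer parameterize literal values in ad hoc statements,
-- so textually different queries share one cached plan:
ALTER DATABASE [YourDatabase] SET PARAMETERIZATION FORCED;

-- Revert to the default behavior:
ALTER DATABASE [YourDatabase] SET PARAMETERIZATION SIMPLE;
```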
Also in the First Responder Kit is the procedure sp_BlitzIndex. Run it in individual databases with the @DatabaseName parameter and look for "indexaphobia" findings, then evaluate implementing indexes based on the estimated benefit per day.
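Putting the First Responder Kit steps together, a typical diagnostic pass might look like this (parameter values are illustrative, and the database name is a placeholder; check the toolkit's documentation for the full option list):

```sql
-- Sample server-wide waits and activity over a 30-second window:
EXEC dbo.sp_BlitzFirst @Seconds = 30;

-- Ten most expensive cached queries, sorted by CPU in this example:
EXEC dbo.sp_BlitzCache @SortOrder = 'CPU', @Top = 10;

-- Index analysis for one database, including indexaphobia findings:
EXEC dbo.sp_BlitzIndex @DatabaseName = N'YourDatabase';
```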
In all of the above, when you create indexes, be wary of creating too many indexes on a table, and avoid creating indexes that are excessively wide. I like the rule of five indexes per table with five columns per index as a guideline. You can exceed it if you need to, but if you come across a missing index finding that suggests an index with 30 columns in it, it is probably better not to create it.
I would also add new indexes slowly; indexes can cause performance to decline as well as improve.
You sort of touched on investigating the dynamic management views. I am not a fan of mucking around in them directly anymore, but if you aren't super familiar with them, I would open each of these stored procedures and browse what they query, just to be aware of what is there, so you can go to the DMVs directly for something specific that a packaged tool may not cover.