November 4, 2015 at 4:12 pm
Hi all
I ran SQL Profiler from the server side ( Tools -> SQL Profiler ) in Production.
It ran for 4 hours ( during office hours ). I ran it because I got user complaints about a slow application and wondered what the cause was.
After 4 hours of running it, I got the bad news that the application was freezing.
Could SQL Profiler be part of the cause?
Many thanks for the feedback!!
November 4, 2015 at 4:21 pm
Hi guys
A bit of an update: I ran Profiler from the client side ( Tools -> Profiler ).
November 4, 2015 at 4:32 pm
Yes, your use of the Profiler GUI was the culprit.
NEVER use the Profiler GUI against a production server. It will degrade performance, potentially severely, and it can even crash the server. Use a server-side trace with the fewest possible events and columns, and write the trace to a fast, local drive.
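(For reference, a minimal sketch of such a server-side trace, using the documented sp_trace_* procedures. The file path, 100 MB rollover size, and the specific events/columns are illustrative assumptions, not a recommendation for any particular workload:)

-- Minimal server-side trace: only RPC:Completed (event 10) and
-- SQL:BatchCompleted (event 12), with a handful of columns, written to a
-- fast local drive with 100 MB file rollover.
DECLARE @TraceID int;
DECLARE @maxfilesize bigint;
DECLARE @on bit;
SET @maxfilesize = 100;  -- MB per trace file
SET @on = 1;

-- Option 2 = TRACE_FILE_ROLLOVER; SQL Server appends .trc to the file name.
-- T:\traces is a made-up path: use a fast, local, non-database drive.
EXEC sp_trace_create @TraceID OUTPUT, 2, N'T:\traces\prod_trace', @maxfilesize, NULL;

-- Columns: 1 = TextData, 12 = SPID, 13 = Duration, 14 = StartTime,
-- 16 = Reads, 18 = CPU.
EXEC sp_trace_setevent @TraceID, 10, 1, @on;
EXEC sp_trace_setevent @TraceID, 10, 12, @on;
EXEC sp_trace_setevent @TraceID, 10, 13, @on;
EXEC sp_trace_setevent @TraceID, 10, 14, @on;
EXEC sp_trace_setevent @TraceID, 10, 16, @on;
EXEC sp_trace_setevent @TraceID, 10, 18, @on;
EXEC sp_trace_setevent @TraceID, 12, 1, @on;
EXEC sp_trace_setevent @TraceID, 12, 12, @on;
EXEC sp_trace_setevent @TraceID, 12, 13, @on;
EXEC sp_trace_setevent @TraceID, 12, 14, @on;
EXEC sp_trace_setevent @TraceID, 12, 16, @on;
EXEC sp_trace_setevent @TraceID, 12, 18, @on;

-- Start the trace (status 0 = stop, 2 = close and delete the definition).
EXEC sp_trace_setstatus @TraceID, 1;
SELECT @TraceID AS TraceID;  -- keep this ID so you can stop the trace later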
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
November 4, 2015 at 5:08 pm
Thanks for the reply, Gail. Much appreciated.
Hmm, my team just checked the graph. It was spiking from 4:00 PM until 4:17 PM,
and I ran Profiler from 11:30 AM until 4:30 PM.
The interesting thing is that the graph looked alright from 11:30 AM to 4:00 PM,
but looked so bad from 4:00 PM to 4:17 PM. If it were due to Profiler, the graph would have looked bad the whole time, from 11:30 AM until 4:30 PM.
Any comment about that situation?
Many thanks
November 4, 2015 at 5:24 pm
Profiler takes latches to sync the display with the server activity. It'll have minimal impact on a server that's got a light workload. As the workload increases, so Profiler's impact gets worse and worse and worse. So the more activity your server saw, the worse the Profiler impact would have been. So, no, it would not have looked bad the entire time.
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
November 4, 2015 at 10:35 pm
OK. Thanks for the reply, Gail!
November 5, 2015 at 8:51 pm
By the way, I have tuned the highest-read query so far, and its logical reads dropped from 900,000 to 5, but users still complain about slowness.
Could it be due to a resource issue, such as memory?
Physical memory is 48 GB, and max server memory has been set to 41 GB for a long, long time.
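(For reference, a quick way to sanity-check that setting from inside SQL Server and see whether the OS is reporting memory pressure; this assumes SQL 2008 or later for sys.dm_os_sys_memory:)

-- What is configured vs. what is in effect for max server memory.
SELECT name, value, value_in_use
FROM sys.configurations
WHERE name = N'max server memory (MB)';

-- Physical memory on the box, how much is free, and whether the OS is
-- signalling memory pressure.
SELECT total_physical_memory_kb / 1024     AS total_physical_mb,
       available_physical_memory_kb / 1024 AS available_physical_mb,
       system_memory_state_desc
FROM sys.dm_os_sys_memory;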
Cheers
November 5, 2015 at 9:28 pm
So keep tuning resource-intensive queries until they stop complaining.
It's often not the query with the highest reads per execution, but the one that uses the most resources in a unit of time.
Which is worse, a query which runs once an hour and does 1 million reads or one which runs every second and does 10 000 reads?
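(A rough way to rank by cumulative cost rather than per-execution cost is to query the plan-cache stats. A sketch, with the usual caveat that sys.dm_exec_query_stats only covers plans still in cache, so counters reset when a plan is evicted or recompiled:)

-- Rank cached statements by TOTAL logical reads since their plans were cached,
-- not by reads per execution: a cheap query running thousands of times can
-- easily out-consume an expensive hourly one.
SELECT TOP (20)
       qs.execution_count,
       qs.total_logical_reads,
       qs.total_logical_reads / qs.execution_count AS avg_logical_reads,
       qs.total_worker_time / 1000 AS total_cpu_ms,
       SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
                 ((CASE qs.statement_end_offset
                        WHEN -1 THEN DATALENGTH(st.text)
                        ELSE qs.statement_end_offset
                   END - qs.statement_start_offset) / 2) + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_logical_reads DESC;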
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
November 5, 2015 at 9:42 pm
I think this one is even worse:
a query which runs once an hour and does 1 million reads.
Am I right?
November 5, 2015 at 10:17 pm
Is that once an hour query going to affect performance for the part of the hour it's not running?
Let's say the once an hour query runs for 5 minutes. Does it affect performance for the other 55 minutes of the hour?
Is the query that runs every second going to affect the server for just part of an hour or all of it?
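(To put numbers on it, taking the hypothetical at face value: 10 000 reads every second is 10 000 × 3 600 = 36 000 000 reads per hour, thirty-six times the hourly cost of the million-read query, and that pressure is applied across the whole hour rather than in one short window.)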
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
November 7, 2015 at 8:46 pm
There are plenty of ways to collect perf data, with Extended Events being the future, but I am still using the old ways, at least for now. If you are on a tight budget, there is a free utility on codeplex.com named PSSDiag Configuration Manager. If you have a budget, consider this website's sponsor :-D.

Use PSSDiag Configuration Manager to generate a self-extracting executable (for your SQL Server version). For now, stick with the configuration manager's General Performance template and do not choose any other checkbox options. Click Save and copy the self-extracting executable to a folder on a disk that your production sqlservr can see. Preferably choose a fast disk, preferably one with tens of GB of free space (see the quick check below), and definitely not a disk used for database file or log file IO. I have a 100 GB SSD reserved for pssdiag's use. Do not use an external USB 2.0 disk (for example). Do not use a network share. Extract the executable's contents into the chosen disk's folder.
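(If you want to check per-drive free space from inside SQL Server before picking that folder, the old undocumented xp_fixeddrives still works on this era of SQL Server:)

-- Undocumented but long-standing: lists each fixed drive and its free space in MB.
EXEC master.sys.xp_fixeddrives;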
On the sqlservr box, open cmd.exe, CD to that folder, and run pssdiag.cmd. The console will tell you when pssdiag is "ready to repro", but if perf is bad all the time, you must be ready to repro at any time :). While pssdiag is running, sqlservr puts data in a subfolder named OUTPUT. Use Explorer to monitor the files being placed in that folder, sorted by file size descending. The trace files (which are server-side trace files) will be 350 MB each; how many are generated per second or per minute depends on what you configured in the configuration manager and on how many executions per second or per minute your sqlservr must perform. When 20 minutes have elapsed, or if disk space looks like it is going to be exhausted, give the cmd.exe window focus and hit <Ctrl>+C once. Wait for pssdiag to shut down (on a system that is out of CPU cycles, a shutdown can take low tens of minutes).
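(While pssdiag is running, you can also confirm its server-side trace from inside SQL Server; sys.traces is a standard catalog view, and trace id 1 is normally the default trace:)

-- List active traces; pssdiag's trace shows up here with its OUTPUT-folder
-- path, rollover settings, and how many events it has captured so far.
SELECT id, path, max_size, max_files, start_time, last_event_time, event_count
FROM sys.traces
WHERE id <> 1;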
Once pssdiag's shutdown completes, zip the OUTPUT folder and copy that zip file to a non-Production sqlservr, such as your desktop or some other SQL Server that does not mind experiencing what follows (which is a resource-intensive operation). On codeplex.com there is another utility named SQL Nexus. Follow its setup instructions on the non-Production SQL Server, and import the unzipped OUTPUT. The import is resource intensive.

When the import is done, first drill down to the Bottleneck Analysis. It shows you an aggregation of important wait_types (from dm_os_wait_stats) over time. I typically pay attention to the table, not the bar chart. You can click on each bar, and just its wait_type will be charted over time. Once you have a basic understanding of your system's salient performance bottlenecks, go back to the main page and drill down into the ReadTrace reports. Because the General Performance template was used, only batch-level (not statement-level) events were collected. If the salient bottleneck was disk IO (PAGEIOLATCH waits), sort the list of statements by logical reads descending. If it was CPU, sort the list by CPU time descending. If it was log file writes (WRITELOG waits), sort the list by writes descending. The Bottleneck Analysis page will also reveal when trace locks become a problem :). Go back and click on SQL Nexus' other links, such as Interesting Events and Blocking. In the OUTPUT folder you will also find a perfmon log (*.blg) and some %perf_stats%.out (text) files that are worth inspecting.
The database created by SQL Nexus holds far more information than the SQL Nexus GUI exposes, and its tables and views can be queried ad hoc (an illustrative example follows below). Later, you might want to run sqldiag /? to see all possible pssdiag arguments. Some arguments are useful, such as /E +00:20:00 (which stops pssdiag collection after 20 minutes). I usually attempt to collect data for 20 minutes, give or take a couple of minutes; that way I get a feel for what accumulated or happened during fixed periods of time.
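(For example, an ad-hoc query against the imported data might look like the sketch below. The ReadTrace table and column names here are from memory and vary between SQL Nexus/RML versions, so treat them as illustrative, not authoritative:)

-- Illustrative only: aggregate batch cost by normalized statement text.
-- Table/column names may differ in your SQL Nexus version.
USE sqlnexus;  -- the database SQL Nexus imported into (name is configurable)
SELECT TOP (20)
       ub.NormText,
       COUNT(*)        AS executions,
       SUM(b.Reads)    AS total_reads,
       SUM(b.CPU)      AS total_cpu,
       SUM(b.Duration) AS total_duration
FROM ReadTrace.tblBatches AS b
JOIN ReadTrace.tblUniqueBatches AS ub ON ub.HashID = b.HashID
GROUP BY ub.NormText
ORDER BY total_reads DESC;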
If you feel comfortable with the General Performance template, you can increase the rate of data collection by instead choosing the Detailed Performance template. Or, what I do is add the Performance\Showplan Statistics Profile, Stored Procedures\SP:StmtStarting, and Stored Procedures\SP:StmtCompleted events to the General Performance template. With those events added, SQL Nexus will allow you to drill down to the statement level. Otherwise, be careful when adding events. For example, Locks\Lock:Acquired and Locks\Lock:Released are far too chatty and not very useful. As another example, clicking on every Showplan event will cause the server-side trace's rate of data collection to skyrocket (Showplans are "large" events), and yet they will not offer you much more information than just Showplan Statistics Profile (which is the only Showplan event that SQL Nexus consumes).
November 10, 2015 at 5:02 pm
You could consider running Extended Events. They're much lighter than Profiler.
Thanks.
November 11, 2015 at 12:47 am
SQL-DBA-01 (11/10/2015)
You could consider running Extended Events. They're much lighter than Profiler.
SQL 2008 had a limited set of Extended Events; it wasn't until 2012 that they could entirely replace Profiler. Plus, there was no GUI at all in 2008 (unless you downloaded an SSMS extension).
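(For what it's worth, on SQL 2012 or later a minimal Extended Events session covering the same ground as the classic batch-level trace might look like the following sketch; the session name and file path are made up:)

-- SQL 2012+ sketch: batch- and RPC-completed events written to a file target.
CREATE EVENT SESSION trace_replacement ON SERVER
ADD EVENT sqlserver.rpc_completed (
    ACTION (sqlserver.session_id, sqlserver.sql_text)),
ADD EVENT sqlserver.sql_batch_completed (
    ACTION (sqlserver.session_id, sqlserver.sql_text))
ADD TARGET package0.event_file (SET filename = N'T:\traces\trace_replacement.xel')
WITH (MAX_DISPATCH_LATENCY = 5 SECONDS);

ALTER EVENT SESSION trace_replacement ON SERVER STATE = START;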
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
November 11, 2015 at 12:54 am
Hey, I have tuned some of the highest-cost queries and the application is still slow. I am thinking there might be something outside the database. By the way, this is a CRM application.
November 11, 2015 at 1:09 am
One person's "slow" is another person's "fast" :).
If someone is able to define when a "slow" period starts, collect sys.dm_os_wait_stats close to the start.
If someone is able to define when the slow period ends, collect sys.dm_os_wait_stats a second time at the end.
sys.dm_os_wait_stats accumulates its metrics from the last restart of SQL Server, so you either have to clear those metrics at the start, or you have to calculate the difference between the start-time and end-time wait values (a sketch of that follows below). There are lots of scripts to do this; Paul Randal's http://www.sqlskills.com/blogs/paul/wait-statistics-or-please-tell-me-where-it-hurts/ is a good start. If you see significant ASYNC_NETWORK_IO waits, SQL Server is waiting for sessions (the application) to fetch results.
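(A minimal diff sketch along those lines: two snapshots into temp tables, then subtract. Which wait types matter, and any thresholds, are up to you; Paul Randal's script linked above adds sensible filtering:)

-- Snapshot at the start of the slow period.
IF OBJECT_ID('tempdb..#waits_start') IS NOT NULL DROP TABLE #waits_start;
SELECT wait_type, waiting_tasks_count, wait_time_ms, signal_wait_time_ms
INTO #waits_start
FROM sys.dm_os_wait_stats;

-- ... let the slow period run ...

-- Snapshot at the end, then diff; only waits that grew are interesting.
IF OBJECT_ID('tempdb..#waits_end') IS NOT NULL DROP TABLE #waits_end;
SELECT wait_type, waiting_tasks_count, wait_time_ms, signal_wait_time_ms
INTO #waits_end
FROM sys.dm_os_wait_stats;

SELECT e.wait_type,
       e.waiting_tasks_count - s.waiting_tasks_count AS tasks,
       e.wait_time_ms        - s.wait_time_ms        AS wait_ms,
       e.signal_wait_time_ms - s.signal_wait_time_ms AS signal_ms
FROM #waits_end AS e
JOIN #waits_start AS s ON s.wait_type = e.wait_type
WHERE e.wait_time_ms > s.wait_time_ms
ORDER BY wait_ms DESC;

-- Alternatively, clear the counters at the start instead of diffing:
-- DBCC SQLPERF (N'sys.dm_os_wait_stats', CLEAR);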