May 1, 2018 at 4:59 am
Dear Everyone
I have a custom-built .NET application that accesses the company's SQL Server, and users are complaining that access has become slower than before.
I found some fragmentation and rebuilt the main indexes, but I also ran Paul Randal's wait query and found the results below.
The main wait is:
WaitType Wait_S Resource_S Signal_S WaitCount Percentage AvgWait_S AvgRes_S AvgSig_S Help/Info URL
WRITELOG 4987.11 4076.76 910.35 6773580 39.34 0.0007 0.0006 0.0001 https://www.sqlskills.com/help/waits/WRITELOG
Any advice would be helpful
have a good day
Kal
May 1, 2018 at 5:03 am
SQL Server 2012 running on Windows Server 2008 R2.
I forgot to mention this; the wait results are attached.
May 1, 2018 at 10:52 am
hurricaneDBA - Tuesday, May 1, 2018 4:59 AM
You still need to do some more digging. It could be anything; "slowness when accessing the database" plus a look at the waits isn't going to tell you much on its own.
The average wait time really isn't much - it's more the percentage of the waits that matters, and that's hard to judge without a baseline to compare to. You can do a capture and then, 24 hours later, do another to get the waits for just that 24-hour period. That would tell you more about the waits. The sqlskills site has an example of capturing for a time frame which you can modify to meet your needs.
Capturing IO latencies for a period of time
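The two-snapshot idea from that article can be sketched as follows - capture sys.dm_os_wait_stats twice and diff the cumulative counters (the table names here are illustrative, not from the SQLskills script):

```sql
-- First snapshot (run at the start of the window)
SELECT wait_type, waiting_tasks_count, wait_time_ms, signal_wait_time_ms
INTO dbo.WaitStatsSnapshot1
FROM sys.dm_os_wait_stats;

-- ...24 hours later, take the second snapshot
SELECT wait_type, waiting_tasks_count, wait_time_ms, signal_wait_time_ms
INTO dbo.WaitStatsSnapshot2
FROM sys.dm_os_wait_stats;

-- The difference is what accumulated during the window
SELECT s2.wait_type,
       s2.wait_time_ms        - s1.wait_time_ms        AS wait_time_ms_delta,
       s2.waiting_tasks_count - s1.waiting_tasks_count AS wait_count_delta
FROM dbo.WaitStatsSnapshot2 AS s2
JOIN dbo.WaitStatsSnapshot1 AS s1
  ON s1.wait_type = s2.wait_type
ORDER BY wait_time_ms_delta DESC;
```

The counters are cumulative since the last restart (or a manual clear), which is why the delta, not either raw snapshot, is what describes the 24-hour period.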
If the slowness is just when they first access the database (users typically won't know whether they are accessing just the application or the database), you may want to look at what is being executed from the application at the beginning of that process and look into tuning those queries/stored procedures, check the query stats, etc.
You may still want to check sys.dm_io_virtual_file_stats to see if a particular file is being hit hard or indicates a lot of stalls.
You can also run an extended events session or server side trace to capture a login process from the application and see what is taking the most time.
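A minimal Extended Events sketch for that kind of capture might look like this - the session name, file target name, and application name are all assumptions you would replace with your own:

```sql
-- Sketch: capture completed batches/RPC calls from one application
-- (the app name filter is illustrative - use your .NET app's client_app_name)
CREATE EVENT SESSION [AppLoginTrace] ON SERVER
ADD EVENT sqlserver.rpc_completed (
    ACTION (sqlserver.client_app_name, sqlserver.sql_text, sqlserver.database_name)
    WHERE sqlserver.client_app_name = N'MyDotNetApp'),
ADD EVENT sqlserver.sql_batch_completed (
    ACTION (sqlserver.client_app_name, sqlserver.sql_text, sqlserver.database_name)
    WHERE sqlserver.client_app_name = N'MyDotNetApp')
ADD TARGET package0.event_file (SET filename = N'AppLoginTrace');

ALTER EVENT SESSION [AppLoginTrace] ON SERVER STATE = START;
```

The event durations in the captured file show which statements take the most time during the login/startup path.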
I usually check statistics first instead of fragmentation - they often seem to be the issue when users report things becoming slower over time.
Sue
May 3, 2018 at 3:05 am
Hi Sue
Thanks for the help as usual
I ran the query below and found that there are a lot of io_stalls:
SELECT a.io_stall, a.io_stall_read_ms, a.io_stall_write_ms, a.num_of_reads,
 a.num_of_writes,
 --a.sample_ms, a.num_of_bytes_read, a.num_of_bytes_written, a.io_stall_write_ms,
 ((a.size_on_disk_bytes / 1024) / 1024.0) AS size_on_disk_mb,
 DB_NAME(a.database_id) AS dbname,
 b.name, a.file_id,
 -- type_desc is more reliable than assuming file_id = 2 is always the log
 db_file_type = CASE
  WHEN b.type_desc = 'LOG' THEN 'Log'
  ELSE 'Data'
 END,
 UPPER(SUBSTRING(b.physical_name, 1, 2)) AS disk_location
FROM sys.dm_io_virtual_file_stats(NULL, NULL) a
JOIN sys.master_files b
 ON a.file_id = b.file_id
 AND a.database_id = b.database_id
ORDER BY a.io_stall DESC;
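Since io_stall is cumulative since startup, a variation that divides by the I/O counts gives average latency per operation, which is easier to interpret (a sketch, guarding against division by zero):

```sql
-- Average latency per I/O: io_stall_* is cumulative ms, so divide
-- by the number of operations; NULLIF avoids divide-by-zero
SELECT DB_NAME(vfs.database_id) AS dbname,
       mf.name,
       vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS avg_read_latency_ms,
       vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_latency_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
  ON mf.database_id = vfs.database_id
 AND mf.file_id = vfs.file_id
ORDER BY avg_write_latency_ms DESC;
```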
Yes, before you mention it: all the data files, log files, etc. are on the F drive, and I am working on moving them to other partitions, as this server was set up without my knowledge.
Kal
May 3, 2018 at 7:37 am
hurricaneDBA - Thursday, May 3, 2018 3:05 AM
Now go back to that link for finding the latency over a period of time: you want to capture those values again and then compare the two "snapshots," as he calls them in the article, to find the latency between the two time periods. For a lot of the DMVs, the values are cumulative since server restart, so you really have to capture one set, capture again, and take the difference to find the latency during that period. Things can happen that skew the cumulative numbers - server startup, some odd process that runs once and doesn't run well, etc.
You can run the first part before the WAITFOR DELAY and use a permanent table. That gives you the baseline to start with. Then run the second part after the WAITFOR DELAY, something like a day later - you just modify the script to use permanent tables. When you run it the second time, you get the difference between the two.
You could always just create the tables ##SQLskillsStats1 and ##SQLskillsStats2 as permanent tables by removing the ## in the script. If you reuse the script to compare again later, remember to truncate the tables before capturing the disk stats.
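The truncate-before-reuse step above can be sketched like this, assuming the permanent tables were created in dbo with the ## removed as described:

```sql
-- Clear the previous capture before reusing the modified SQLskills script
IF OBJECT_ID(N'dbo.SQLskillsStats1') IS NOT NULL
    TRUNCATE TABLE dbo.SQLskillsStats1;
IF OBJECT_ID(N'dbo.SQLskillsStats2') IS NOT NULL
    TRUNCATE TABLE dbo.SQLskillsStats2;
-- ...then run the capture portions of the script against these tables
```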
In terms of everything on one drive... that may or may not be an issue. It depends on the storage for that server. With a SAN, additional drives often come from the same pool of disks, so separating the files really won't make a difference. With direct-attached storage or local disks, it would.
Sue
May 5, 2018 at 10:18 pm
Sue_H - Tuesday, May 1, 2018 10:52 AM
Hi Sue
I tested Randal's script, and it keeps executing for the time specified in the WAITFOR DELAY '00:30:00';
Won't that make the server slower, or am I mistaken - especially if we keep it running for 24 hours?
As for the statistics comment you mentioned at the end, do I refresh the statistics for that specific database? Is that all we need to do?
Kal
May 7, 2018 at 11:39 am
hurricaneDBA - Saturday, May 5, 2018 10:18 PM
If you use that script, it captures for 30 minutes. While it's in the WAITFOR DELAY period, it won't hurt performance - it sits in a suspended state, not using any resources. The primary reason I use static tables is in case something happens during that time frame and you don't want to lose the data. I also then have a baseline to measure from if I want to look 24 hours or a week later and see what the averages are. Or sometimes I capture to a few static tables to compare different time periods - things like that.
What stats to update depends more on how often the data is inserted, updated, or deleted, and not necessarily on the last time they were updated. After a while, you get used to knowing which tables/indexes to hit. There are a lot of scripts available for checking stats - most show the last time they were updated as well as the number of modifications. If you update individual stats, you get an idea of which updates had what kind of impact on performance. I also usually check the query plans of the things executed most often - out-of-date statistics often show estimated row counts very different from actual row counts, and the things executed most often are generally the pieces to pay the most attention to. A couple of scripts to check statistics:
How to Find Outdated Statistics?
How Stale are my Statistics?
Another one related to some of the properties and related effects:
How Old Are Your Database Statistics?
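The scripts linked above do this in more depth, but a minimal sketch of the idea - last update time and modifications per statistic - could look like this (sys.dm_db_stats_properties requires SQL Server 2008 R2 SP2 or 2012 SP1 and later):

```sql
-- Last update time and row modifications for each user-table statistic
SELECT OBJECT_NAME(s.object_id) AS table_name,
       s.name                   AS stats_name,
       sp.last_updated,
       sp.rows,
       sp.modification_counter
FROM sys.stats AS s
CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp
WHERE OBJECTPROPERTY(s.object_id, 'IsUserTable') = 1
ORDER BY sp.modification_counter DESC;
```

A high modification_counter against a small row count is the usual sign a statistic is worth updating individually.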
Sue