Different Durations of the Same Query

  • Hi all. I have already read about SQL Server Profiler, but I can't understand what "Writes" means there.

    For example, I have a query that shows 13,500 Reads and 0 Writes, and I don't understand why I see 0 Writes. Does it mean there were no changes?

    I also have a query that is executed more than a thousand times a day, but I see different durations (from 2 to 35 seconds). What can affect this result? Maybe some blocking? If so, how can I investigate it?

    Thanks.

  • For the first question: "Writes" refers to changing data.  A SELECT will have 0 writes, while an INSERT, UPDATE, or DELETE will have 0 or more (an UPDATE that matches no rows, for example, writes nothing).  A "write" is writing data to the database; a "read" is reading data from the database.

    As for duration: is it the EXACT same query, or are there differences?  A LOT of things can change duration: blocking, data changes, stale statistics, parameter sniffing, changing parameters, etc.

    For example, if it is a stored procedure and the parameters change with each run, it may get a good plan and run fast when the parameter is 1, but get a bad plan and run slow when the parameter is 1000.  Or suppose the query updates the value foo into a table named bar and then sleeps for 2 seconds, all in a single transaction (see the sketch below).  If 10 people run that at EXACTLY the same time, the 10th person waits roughly 20 seconds: the first 9 runs have to complete before the 10th can start, and each run takes just over 2 seconds (the update itself takes some time too).
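
    To picture that second scenario, here is a minimal T-SQL sketch (the table bar, the column name, and the 2-second sleep are just the made-up values from the example above):

        -- Hypothetical illustration only: each session holds its locks for the
        -- full 2-second sleep, so concurrent callers queue up behind each other.
        BEGIN TRANSACTION;

        UPDATE dbo.bar               -- exclusive lock taken here...
        SET SomeColumn = 'foo';

        WAITFOR DELAY '00:00:02';    -- ...and held for 2 more seconds

        COMMIT TRANSACTION;          -- locks released only now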

    My first step would be to look at the tools your company has in place for monitoring.  If they have some, check those for blocking.  If they don't, and you know the query, look for things that MAY cause blocking and determine if they can be fixed.  Alternatively, look for parameter-sniffing problems, check execution plans, and check for stale statistics or highly fragmented indexes (fragmented indexes may turn out to be a red herring, though)...
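
    If you want to check for stale statistics yourself, STATS_DATE works on any supported version (a quick sketch; dbo.YourTable is a placeholder):

        -- Shows when each statistics object on the table was last updated.
        SELECT s.name,
               STATS_DATE(s.object_id, s.stats_id) AS LastUpdated
        FROM sys.stats AS s
        WHERE s.object_id = OBJECT_ID('dbo.YourTable');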

    Blocking is easiest to monitor while it is happening; it is difficult to diagnose after the blocking has been released.

    From there, I would look at the query and determine whether it is a stored procedure and whether parameter sniffing MAY be a problem (does it run fast with some parameters and slow with others?), then replicate it on a test/dev system and check the execution plans to see if there is room for improvement.

    A fun thing that can happen, too, is inappropriate parallelism.  A query that runs in parallel MAY be slower than the non-parallel version.  I recently found a query that gets a 10x performance boost on average by forcing it to run serially (see the sketch below).
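
    If you suspect parallelism, the cheap test is a MAXDOP hint (a sketch with placeholder names; use it to compare timings, not as a permanent fix):

        SELECT SomeColumn
        FROM dbo.YourTable
        WHERE OtherColumn = 42
        OPTION (MAXDOP 1);   -- forces a serial plan for this query only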

    Another thing that can cause slowness is the database, log, or tempdb needing to grow while the query is running, which degrades performance.


    If you can grab an execution plan when it is slow and another when it is fast, you can learn a lot as well.  For example, if the "fast" plan does an index seek and not much else, but the slow one does an index scan, then you know the problem: either a poor index, or the WHERE clause is written in a way that prevents an index seek in SOME cases.  Compare the queries and make sure they are identical in all other ways, as the same query should produce the same plan given the same statistics.

    If the plans appear identical, check the estimated rows vs. the actual rows.  If they are vastly different, you probably have operations spilling to tempdb that don't need to; update the statistics and you should be good.

    If the plans are 100% identical, then chances are it is server resources causing you grief, OR database files are auto-growing, OR you have blocking.  Blocking, while it is happening, can be viewed by running EXEC sp_who2 (or by querying the DMVs; see the sketch below).  If there isn't blocking, compare free CPU, free memory, network I/O, and disk I/O when things are slow and when they are fast, to make sure you have enough resources.  If you don't, you will need to adjust things so that you do.
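
    Alongside sp_who2, you can ask the DMVs directly while the blocking is live (a sketch; works on SQL Server 2005 and later):

        -- Lists only the sessions that are currently being blocked.
        SELECT r.session_id,
               r.blocking_session_id,   -- the session doing the blocking
               r.wait_type,
               r.wait_time,             -- ms spent waiting so far
               r.command
        FROM sys.dm_exec_requests AS r
        WHERE r.blocking_session_id <> 0;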


    My first step, though, for almost every database problem is to replicate it on test (if possible).  That gives you a playground to try out methods of improving the performance or fixing the issue.  You may get a boost by forcing the query to run serially, or you may get one by forcing it to run in parallel.  Running it on test also gives you a chance to narrow down the issue: you can run the query inside a transaction and roll it back, without blocking the live system and while changing almost no data (see the sketch below).  One caveat: if the table has an identity column, that still gets incremented even when you roll back.
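
    The rollback pattern looks like this (a sketch; dbo.YourTable is a placeholder, and note the identity caveat in the comment):

        BEGIN TRANSACTION;

        UPDATE dbo.YourTable
        SET SomeColumn = 'test value'
        WHERE Id = 42;

        -- ...inspect the results, duration, and execution plan here...

        ROLLBACK TRANSACTION;   -- undoes the data change, but any IDENTITY
                                -- values consumed are NOT handed back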

    If you have auto-shrink enabled on any database, it will cause performance hits and other headaches; I recommend disabling it.
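
    Checking for it and turning it off are one-liners (YourDatabase is a placeholder):

        SELECT name, is_auto_shrink_on FROM sys.databases;   -- 1 = enabled
        ALTER DATABASE YourDatabase SET AUTO_SHRINK OFF;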

    The above is all just my opinion on what you should do.
    As with all advice you find on a random internet forum, you shouldn't blindly follow it.  Always test on a test server to see if there are negative side effects before making changes to live!
    I recommend you NEVER run "random code" you found online on any system you care about UNLESS you understand and can verify the code, OR you don't care if the code trashes your system.

  • christi1711 wrote:

    Hi all. I have already read about SQL Server Profiler, but I can't understand what "Writes" means there. For example, I have a query that shows 13,500 Reads and 0 Writes, and I don't understand why I see 0 Writes. Does it mean there were no changes?

    I also have a query that is executed more than a thousand times a day, but I see different durations (from 2 to 35 seconds). What can affect this result? Maybe some blocking? If so, how can I investigate it?

    Thanks.

    Reads and Writes are actually Page Reads and Page Writes; in other words, the number of 8 KB pages touched, not rows.  And the Reads are LOGICAL page reads, meaning "from memory".

    For example... you might have code that reads 100 rows from a table, but if they all live on the same page, the system only needs to read one page, and "Reads" in SQL Profiler will show just "1" (assuming it's a small table where everything fits on the "Root Page" of the B-Tree).  If it's a big table, you might see 4 or 5 "Reads", because the engine has to start at the "Root" page and work down through the multi-level intermediate pages to reach the data page (the Leaf Level of the index).  Heaps are a bit different, but most people avoid them for multiple reasons.

    The same holds true for "Writes", except you may also get some "Reads" along with them, because the engine first has to traverse the B-Tree to figure out which page(s) to write to.
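
    You can watch those logical reads yourself without Profiler (standard T-SQL; the table name is a placeholder):

        SET STATISTICS IO ON;

        SELECT TOP (100) *
        FROM dbo.YourTable;

        -- The Messages tab then reports something like:
        -- Table 'YourTable'. Scan count 1, logical reads 4, physical reads 0, ...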

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • You'll see tons of variation in the duration even if every other thing about the query is identical: the rows returned, the parameter values used, everything exactly the same. Why? Because of other processes on the system. Maybe statistics are being updated and your query has to wait a moment for that to clear. Maybe the statistics have been updated and they affect this query, so it has to recompile prior to execution. Maybe another query is running that uses a bunch of resources, so this query has to wait. On and on; there are lots of reasons for variation. This is why it's a good idea to average the run times (and reads and writes) and get the standard deviation, and then only worry when a given run of the query is far enough outside the norm (more than 3 standard deviations out is my default).
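
    If you are on SQL Server 2016 or later with Query Store enabled, it already tracks the average and standard deviation per plan; a sketch of pulling the most erratic queries:

        SELECT q.query_id,
               rs.avg_duration,       -- microseconds
               rs.stdev_duration,
               rs.count_executions
        FROM sys.query_store_runtime_stats AS rs
        JOIN sys.query_store_plan AS p ON p.plan_id = rs.plan_id
        JOIN sys.query_store_query AS q ON q.query_id = p.query_id
        ORDER BY rs.stdev_duration DESC;   -- most erratic first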

    And, since you're just getting started in SQL Server, may I suggest you skip Profiler? It's deprecated, which means no new functionality has been added to it for 12 years; all new functionality goes into Extended Events. Plus, Extended Events does all the same stuff as Profiler (with a single exception) but does it with less load on the server. Even better, Extended Events offers functionality above and beyond what's possible in Profiler, from histograms to event correlation; it's simply a better tool.

    I don't try to budge people who have 20 years of experience with Profiler (much), but since you're just getting going, your time would be far better spent learning the up-to-date, superior tool set: Extended Events.
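
    As a starting point, a minimal session that captures statements running longer than 2 seconds could look like this (the session name, file name, and threshold are just examples):

        CREATE EVENT SESSION [LongQueries] ON SERVER
        ADD EVENT sqlserver.sql_statement_completed (
            ACTION (sqlserver.sql_text, sqlserver.session_id)
            WHERE duration > 2000000          -- duration is in microseconds
        )
        ADD TARGET package0.event_file (SET filename = N'LongQueries.xel');

        ALTER EVENT SESSION [LongQueries] ON SERVER STATE = START;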

    "The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
    - Theodore Roosevelt

    Author of:
    SQL Server Execution Plans
    SQL Server Query Performance Tuning

  • One thing that few consider is that it might be "recompiles" (a sketch for spotting them follows below).  We had a query that was taking "only" (way too long in my eyes) 100ms to execute.  The problem was twofold, though...

    1. Because of the way the query was built, it would recompile on almost every run.
    2. Because of data skew in the tables relative to the embedded values, which changed each and every time, the recompiles took 2 to 22 SECONDS every single time, and the query was being executed 2 to 8 times a second.  Even on a (then) 48-CPU box with SSDs, that kept things way too busy.
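
    One way to spot that pattern nowadays is plan_generation_num (a sketch; a high number relative to execution_count suggests heavy recompilation):

        SELECT TOP (20)
               qs.plan_generation_num,   -- times the plan has been recompiled
               qs.execution_count,
               st.text
        FROM sys.dm_exec_query_stats AS qs
        CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
        ORDER BY qs.plan_generation_num DESC;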

    --Jeff Moden



  • Jeff Moden wrote:

    One thing that few consider is that it might be "recompiles".  We had a query that was taking "only" (way too long in my eyes) 100ms to execute.  The problem was twofold, though...

    1. Because of the way the query was built, it would recompile on almost every run.
    2. Because of data skew in the tables relative to the embedded values, which changed each and every time, the recompiles took 2 to 22 SECONDS every single time, and the query was being executed 2 to 8 times a second.  Even on a (then) 48-CPU box with SSDs, that kept things way too busy.

    Ha! We had one back in the day that ran in about 500ms (and we were actually happy with it at that point), but the compile time was 5-7 minutes. Man, that thing was a nightmare.

    "The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
    - Theodore Roosevelt

    Author of:
    SQL Server Execution Plans
    SQL Server Query Performance Tuning
