Disk Reads excessive and causing blocking

  • I have a production system that has been running unchanged (as far as objects go) for about three years. We are constantly adding data to the tables, and the system is taxed pretty hard by a not-so-efficient application. Unfortunately I have no control over the indexes (except for rebuilding and reindexing) and no control over the application code (we purchased the system from a vendor).

    Anyway, I made a change about two months ago: I switched from using the reorganization options in a weekly maintenance plan to rolling my own reorg routines that make use of DBCC SHOWCONTIG and DBCC INDEXDEFRAG. These routines run successfully, but I am not sure they are doing the job. Roughly, they follow the pattern sketched below.
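    In case it helps, here is a stripped-down sketch of that style of routine: capture DBCC SHOWCONTIG output into a table, then defragment anything over a threshold. The 10% threshold and the unqualified table names are illustrative only, not necessarily what runs in production:

      SET NOCOUNT ON

      -- Capture fragmentation stats for every index in the current database.
      -- Column list matches the DBCC SHOWCONTIG ... WITH TABLERESULTS output.
      CREATE TABLE #frag (
          ObjectName SYSNAME, ObjectId INT, IndexName SYSNAME NULL, IndexId INT,
          Lvl INT, CountPages INT, CountRows INT, MinRecSize INT,
          MaxRecSize INT, AvgRecSize INT, ForRecCount INT, Extents INT,
          ExtentSwitches INT, AvgFreeBytes INT, AvgPageDensity INT,
          ScanDensity DECIMAL, BestCount INT, ActualCount INT,
          LogicalFrag DECIMAL, ExtentFrag DECIMAL)

      INSERT INTO #frag
      EXEC ('DBCC SHOWCONTIG WITH TABLERESULTS, ALL_INDEXES, NO_INFOMSGS')

      -- Defragment every index above the (illustrative) 10% logical fragmentation.
      DECLARE @tbl SYSNAME, @idx INT, @cmd VARCHAR(300)
      DECLARE frag_cur CURSOR FOR
          SELECT ObjectName, IndexId FROM #frag
          WHERE LogicalFrag >= 10.0 AND IndexId BETWEEN 1 AND 254

      OPEN frag_cur
      FETCH NEXT FROM frag_cur INTO @tbl, @idx
      WHILE @@FETCH_STATUS = 0
      BEGIN
          SET @cmd = 'DBCC INDEXDEFRAG (0, ''' + @tbl + ''', '
                   + CONVERT(VARCHAR, @idx) + ')'
          EXEC (@cmd)
          FETCH NEXT FROM frag_cur INTO @tbl, @idx
      END
      CLOSE frag_cur
      DEALLOCATE frag_cur
      DROP TABLE #frag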

    What I have found is that during heavy use, some queries (which occasionally perform well) will hang and cause blocking for many other processes. During this time, % Disk Read Time stops its normal fluctuation between 0 and 30 and holds steady at about 25%. Occasionally it jumps to 90% and stays there for a while. Once the process completes (sometimes 45-50 seconds later), % Disk Read Time drops back down and looks normal again.

    Can anyone point me to some monitoring strategies to see if my indexes are causing this problem?


    Adam Goss
    DBA

  • Adam,

    If you used the standard showcontig/indexdefrag routine, substantially similar to the one from Microsoft's site (I forget where exactly), then you probably don't have an index organization problem. You might be suffering from a lack of proper indexes, but probably not from a problem with the performance of the existing indexes.

    Did I say "probably" enough?  heh

    From what you describe, I am guessing that you need to read in lots of data to satisfy the problem queries. But sometimes they perform well, you say. So I am guessing (again) that when they do perform well, the "lots of data" is still in the buffer cache, and that it is not when the same queries bog down at other times.

    What I would do is monitor a SQL Server performance counter called "Page life expectancy". This is a measure of how long, on average, a cached page of data will remain in the cache before it is flushed out by a need for data that is not cached (i.e., data that must be retrieved from disk).

    Watch what happens to this counter when the problem queries are running. If it drops below 300 or so, it is very likely that you will get a significant performance boost by adding memory to your server. According to Microsoft, this counter should always stay above 300 if you have enough memory in your system.
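    If you want to sample it from T-SQL rather than PerfMon, the same counter is exposed in sysperfinfo. A quick sketch (the object_name below assumes a default instance; a named instance shows up as 'MSSQL$InstanceName:Buffer Manager'):

      -- Current Page Life Expectancy, in seconds (SQL Server 2000).
      SELECT RTRIM(counter_name) AS counter_name,
             cntr_value          AS seconds
      FROM master.dbo.sysperfinfo
      WHERE object_name  = 'SQLServer:Buffer Manager'
        AND counter_name = 'Page life expectancy'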

    Another option would be to get rid of some of the data in your system, if that is allowed.  Perhaps you have 4 years worth of data and you really only need 2.

    But first, check that PLE counter and let us know what you see.

    Good luck!

    jg

  • Jeff,

    You were right on... I added the PLE counter to PerfMon, and when I observed the slowness I saw the PLE counter drop from its normal value of about 800-900 to around 50.

    Will adding memory help this issue?

    I don't have the option of archiving data. This system is a repository: once the data is in, it doesn't get modified very often, but it must remain indefinitely.

    Let me know if you have ideas on improving the PLE and whether you think additional RAM will help.

    Thanks again,

    Adam


    Adam Goss
    DBA

  • Adding RAM will definitely improve the PLE, as long as SQL Server can address it. Depending on the OS and SQL version, you may face certain hard limits. Now the question is whether it will do enough to make this problem go away...

    Another solution, which is probably (there I go again) not a good idea, is to pin the table in memory. This can solve problems where a table that needs to be accessed quickly, but perhaps not very often, gets retired from the buffer cache sooner than you would like. You can tell SQL Server not to let a table's pages expire from the cache, on a table-by-table basis, using the DBCC PINTABLE command. This would be a last resort, but I mention it first because there is another way to achieve a similar effect without resorting to such brutish tactics. Be warned: DBCC PINTABLE can take your SQL Server out of service if the pinned table grows to exceed the available memory.
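    For reference, the syntax looks like this (the table name is hypothetical, and note there is a matching DBCC UNPINTABLE to undo it):

      -- Hypothetical example: pin dbo.CodeLookup in the current database.
      DECLARE @dbid INT, @objid INT
      SET @dbid  = DB_ID()
      SET @objid = OBJECT_ID('dbo.CodeLookup')

      DBCC PINTABLE (@dbid, @objid)      -- pages stay in cache once read

      -- ...and to release it later:
      DBCC UNPINTABLE (@dbid, @objid)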

    I have found situations in which I could solve this kind of problem by scheduling a query that does nothing useful except access all of the data in a particular table often enough to keep it in the cache.
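    Something along these lines, scheduled as a job; the table name is made up, and the INDEX(0) hint forces a scan of the base table rather than letting the optimizer satisfy the count from a smaller index:

      -- Hypothetical "cache warmer": touch every page of the table so it
      -- stays resident in the buffer cache.
      DECLARE @dummy INT
      SELECT @dummy = COUNT(*)
      FROM dbo.BigRepositoryTable WITH (INDEX(0))   -- full base-table scan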

    So, I'd try adding memory first, then look at what else might need to be done if that doesn't fix the problem completely.

    jg

  • Thanks again...

    I am on a mission to find memory for this server. I bet you could guess that it is a few years old. It may be tough to find matching memory.

    Adam


    Adam Goss
    DBA

  • Hey Adam,

    If you're adding more memory to the SQL server and you want it to address more than 2 GB, check out this article from Microsoft: http://support.microsoft.com/default.aspx?scid=kb;en-us;274750

    We've recently run into this issue with servers that have 8+ GB of RAM on the machine, yet SQL Server only uses ~1.5 GB of it.

    Cheers,

    Darryl

  • Darryl,

    Thanks for the heads-up. This could be an issue: I am running SQL 2000 Enterprise Edition on Windows 2000 Server. It sounds like without the AWE option enabled I can address 2 GB, and with that option I could get up to 4 GB.

    Thanks again for the update. I have 2 GB now, and if I can get my hands on another 2 GB I may be in pretty good shape.

    Adam


    Adam Goss
    DBA

  • Just a point to note - even though this is a vendor product, you can still create additional indexes if you identify a need for them.

    HOWEVER, the vendor may then be difficult to deal with if you run into other problems.

    What we have done in similar situations is to identify poorly performing bits of SQL code (vendor-supplied, I might add) and raise a problem with the vendor, recommending that an index be placed on the appropriate columns. We then request that they confirm this is a valid approach, that they are happy for us to create the index (or they supply a script), and that it will not cause any support issues. If the vendor agrees to this approach, they usually roll the new index into the next version or release of their software.
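    If the vendor does sign off, the script itself is usually a one-liner along these lines (the table and column names here are invented, purely to illustrate):

      -- Invented names, for illustration only: cover a vendor query that
      -- was scanning dbo.Documents by RepositoryDate.
      CREATE NONCLUSTERED INDEX IX_Documents_RepositoryDate
          ON dbo.Documents (RepositoryDate, DocumentType)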

    It's worth a try.

  • Adam,

    With SQL 2000 Enterprise on Windows 2000 Server you can give up to 3 GB to SQL Server without having to resort to AWE. However, you WILL have to use the /3GB switch in the boot.ini file (otherwise user processes are capped at 2 GB maximum). You cannot go higher than 4 GB on Windows 2000 Server; you'll need either Windows 2000 Advanced Server (8 GB), Datacenter (64 GB), or Windows 2003 Enterprise (64 GB). For all of these, you would need to use the AWE extension.
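    If you do end up using AWE, it is enabled with sp_configure and takes effect after a service restart. A sketch; the 'max server memory' figure below is only an example, but you should set one, because with AWE enabled SQL 2000 does not manage its memory dynamically:

      -- Sketch: enable AWE on SQL 2000 Enterprise (restart required).
      EXEC sp_configure 'show advanced options', 1
      RECONFIGURE
      EXEC sp_configure 'awe enabled', 1
      RECONFIGURE
      -- Cap the instance; 6144 MB is only an example value.
      EXEC sp_configure 'max server memory', 6144
      RECONFIGURE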

  • Thanks guys...

    Turns out that working with the vendor yielded some new indexes, and performance is much better. I still think there is plenty of room for improvement, but I'm much happier now.

    I may still add the RAM. Thanks, Philip, for clarifying. It seems the documentation on this is a bit lacking: there are plenty of articles, but most spend their words describing the benefits of Advanced Server or Datacenter and leave the capabilities of 2000 Standard to be inferred.

    Adam


    Adam Goss
    DBA
