Options for dealing with table contention

  • Only if you are experiencing allocation contention - which can still happen on tempdb, but I have never personally seen it on a user database.

    Paul mentions that SS2K8 has some parallel enhancements here.

    Can't see any reference to that in the link...?

    My standard response to anyone thinking of using multiple files per filegroup is don't.

    It can be useful to split things into filegroups on the same LUN, but only from a manageability point of view.

    Reference: http://blogs.msdn.com/psssql/archive/2007/02/21/sql-server-urban-legends-discussed.aspx
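
    If you do split things into filegroups for manageability, the T-SQL is simple enough. A minimal sketch, with made-up database, filegroup, file, and path names:

    -- Add a filegroup and one data file to it (all names and paths are assumptions)
    ALTER DATABASE MyDB ADD FILEGROUP FG_Hot;

    ALTER DATABASE MyDB
    ADD FILE
    (
        NAME = N'MyDB_FG_Hot_1',
        FILENAME = N'D:\SQLData\MyDB_FG_Hot_1.ndf',
        SIZE = 1024MB,
        FILEGROWTH = 256MB
    )
    TO FILEGROUP FG_Hot;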

  • CirquedeSQLeil (4/5/2010)


    Garadin (4/5/2010)


    As of right now, our entire DB is in one file/filegroup. Would splitting this into multiple files provide any benefit in this regard? They'd all still be on the same raid array. Probably an extremely silly question, but I have very little knowledge of that area.

    I think you may see some advantages. I moved a DB from 1 Filegroup to several filegroups. The files remained on the same RAID array and we were able to see an increase in performance. The increase was minimal but was present (1%).

    Best results would be to do as Steve suggested - separate arrays or LUNs. One database that I did this to was on a SAN; we split the filegroups across two LUNs and saw a 5% improvement.

    The idea being to split the busiest tables onto different filegroups? I.e., if I have 3 super busy tables, create 3 filegroups and put one on each?

    Seth Phelabaum


    Consistency is only a virtue if you're not a screwup. 😉

    Links: How to Post Sample Data :: Running Totals :: Tally Table :: Cross Tabs/Pivots :: String Concatenation

  • Garadin (4/5/2010)


    CirquedeSQLeil (4/5/2010)


    Garadin (4/5/2010)


    As of right now, our entire DB is in one file/filegroup. Would splitting this into multiple files provide any benefit in this regard? They'd all still be on the same raid array. Probably an extremely silly question, but I have very little knowledge of that area.

    I think you may see some advantages. I moved a DB from 1 Filegroup to several filegroups. The files remained on the same RAID array and we were able to see an increase in performance. The increase was minimal but was present (1%).

    Best results would be to do as Steve suggested - separate arrays or LUNs. One database that I did this to was on a SAN; we split the filegroups across two LUNs and saw a 5% improvement.

    The idea being to split the busiest tables onto different filegroups? I.e., if I have 3 super busy tables, create 3 filegroups and put one on each?

    That was the design approach I took. Even with one array, this sets up the work for later, should you be able to add an additional array: split the busiest tables into different filegroups and then, when funding or hardware permits, move those filegroups onto different LUNs.

    The idea behind it is to get better I/O response time (don't confuse that with extra I/O threads).

    I also like the downstream benefits of piecemeal restore and filegroup backups.
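
    As a rough sketch of what that looks like in practice (all object names here are made up): moving an existing table to a new filegroup generally means rebuilding its clustered index there, and the filegroup backup is what makes a piecemeal restore possible later.

    -- Rebuild the clustered index of a busy table onto the new filegroup
    -- (table, column, index, and filegroup names are assumptions)
    CREATE UNIQUE CLUSTERED INDEX PK_BusyTable
    ON dbo.BusyTable (BusyTableID)
    WITH (DROP_EXISTING = ON)
    ON FG_Hot;

    -- Back up just that filegroup; piecemeal restore builds on filegroup backups
    BACKUP DATABASE MyDB
    FILEGROUP = 'FG_Hot'
    TO DISK = N'D:\Backups\MyDB_FG_Hot.bak';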

    Jason...AKA CirqueDeSQLeil
    _______________________________________________
    I have given a name to my pain...MCM SQL Server, MVP
    SQL RNNR
    Posting Performance Based Questions - Gail Shaw
    Learn Extended Events

  • Paul White NZ (4/5/2010)


    Only if you are experiencing allocation contention - which can still happen on tempdb, but I have never personally seen it on a user database.

    Paul mentions that SS2K8 has some parallel enhancements here.

    Can't see any reference to that in the link...?

    My standard response to anyone thinking of using multiple files per filegroup is don't.

    It can be useful to split things into filegroups on the same LUN, but only from a manageability point of view.

    Reference: http://blogs.msdn.com/psssql/archive/2007/02/21/sql-server-urban-legends-discussed.aspx

    That is an excellent read on the topic. I have had it bookmarked for some time.

    The thing I like about it is the clarification that, contrary to the myth, additional I/O threads don't really get assigned per file or filegroup. This is not to say that an increase in performance can't be seen, though.

    From that link:

    Adding more files and properly aligning them with storage can increase the I/O performance for a variety of reasons but achieving new I/O threads per file is NOT one of them.

    Jason...AKA CirqueDeSQLeil
    _______________________________________________
    I have given a name to my pain...MCM SQL Server, MVP
    SQL RNNR
    Posting Performance Based Questions - Gail Shaw
    Learn Extended Events

  • CirquedeSQLeil (4/5/2010)


    "Adding more files and properly aligning them with storage can increase the I/O performance for a variety of reasons but achieving new I/O threads per file is NOT one of them."

    Oh I agree - but the discussion gets very technical and hardware-specific from there on in.

    I do agree that having separate file groups is an excellent idea for recovery (EE only) and as a way to prepare for future hardware expansion. There are almost always better places to focus performance-improvement efforts, however.

  • Paul White NZ (4/5/2010)


    CirquedeSQLeil (4/5/2010)


    "Adding more files and properly aligning them with storage can increase the I/O performance for a variety of reasons but achieving new I/O threads per file is NOT one of them."

    Oh I agree - but the discussion gets very technical and hardware-specific from there on in.

    I do agree that having separate file groups is an excellent idea for recovery (EE only) and as a way to prepare for future hardware expansion. There are almost always better places to focus performance-improvement efforts, however.

    Yes, agreed. Again, I think it comes down to an "it depends" scenario. What are the underlying factors that would drive one to this? I think Seth has probably exhausted some approaches and is looking for an alternative to help him get through the performance problems (he's not allowed to update code).

    Jason...AKA CirqueDeSQLeil
    _______________________________________________
    I have given a name to my pain...MCM SQL Server, MVP
    SQL RNNR
    Posting Performance Based Questions - Gail Shaw
    Learn Extended Events

  • I would start by taking full table scans out of the picture by ensuring there are indexes in place to serve queries and updates.

    I would also create/recreate indexes with a low fill factor to minimize the chances that different concurrent updates hit the same page.
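
    For example, a minimal sketch with assumed index and table names; the right fill factor is something to test rather than guess:

    -- Rebuild an index with a lower fill factor to leave free space on each page,
    -- reducing the chance that concurrent updates collide on the same page
    ALTER INDEX IX_BusyTable_Status
    ON dbo.BusyTable
    REBUILD WITH (FILLFACTOR = 80);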

    _____________________________________
    Pablo (Paul) Berzukov

    Author of Understanding Database Administration available at Amazon and other bookstores.

    Disclaimer: Advice is provided to the best of my knowledge, but no implicit or explicit warranties are provided. Since the advisor explicitly encourages testing any and all suggestions in a test, non-production environment, the advisor should not be held liable or responsible for any actions taken based on the given advice.
  • Edited the original post to include the following information. Thanks to everyone for the great ideas suggested here.

    Separate FileGroups on the same Array

    Questionable benefit

    If there is a benefit, it will likely be minor... around 5%.

    Index Changes

    Look at the indexes involved, make sure indexes are being used for the updates.

    Try to eliminate any hotspots.

    Vertical Partitioning

    Split the table and create a view with the old table name (a rough sketch follows after this list of options).

    Ideally the two separate physical tables will alleviate contention for reads of non-updated fields.

    Indexed Views

    Very unlikely to have any beneficial effect due to the simplicity of the plans involved.

    May actually have a detrimental effect because it is one more thing to keep up to date.

    Isolation Level Changes

    NOLOCK hints or transaction isolation level changes.

    Not possible to do this everywhere, but it may help alleviate the strain from the more commonly run procedures.
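
    As promised above, a rough sketch of the vertical-partitioning option (every object name here is an assumption): keep the static columns in one table, move the frequently updated columns to another, and hide the split behind a view carrying the old table name. Note that updates through the view still have to target one base table at a time.

    -- Static columns stay in one table
    CREATE TABLE dbo.Orders_Static
    (
        OrderID    int      NOT NULL PRIMARY KEY,
        CustomerID int      NOT NULL,
        OrderDate  datetime NOT NULL
    );

    -- Hot, frequently updated columns move to a second table
    CREATE TABLE dbo.Orders_Hot
    (
        OrderID     int      NOT NULL PRIMARY KEY
                    REFERENCES dbo.Orders_Static (OrderID),
        StatusCode  tinyint  NOT NULL,
        LastUpdated datetime NOT NULL
    );
    GO

    -- View with the old table name so existing code keeps working
    CREATE VIEW dbo.Orders
    AS
    SELECT s.OrderID, s.CustomerID, s.OrderDate, h.StatusCode, h.LastUpdated
    FROM dbo.Orders_Static AS s
    JOIN dbo.Orders_Hot    AS h ON h.OrderID = s.OrderID;
    GO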

    After looking at this some more today, my current plan is to do the following:

    Ensure the updates to these fields are using the most efficient indexes possible. (To try to push the updates through as quickly as possible)

    Ensure no additional indexes that reference these fields exist. (To avoid extra overhead updating other indexes; a quick catalog-view check for this is sketched below.)
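
    A quick way to check that second point - a sketch against the catalog views, with the hot column names obviously being placeholders:

    -- List every index that keys or includes the frequently updated columns
    SELECT OBJECT_NAME(ic.object_id) AS table_name,
           i.name                    AS index_name,
           c.name                    AS column_name,
           ic.is_included_column
    FROM sys.index_columns AS ic
    JOIN sys.indexes AS i ON i.object_id = ic.object_id AND i.index_id  = ic.index_id
    JOIN sys.columns AS c ON c.object_id = ic.object_id AND c.column_id = ic.column_id
    WHERE c.name IN (N'StatusCode', N'LastUpdated');   -- assumed column names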

    Between that, some other infrastructure changes we are making, and a large amount of data compression (which may actually hurt; hard to tell until we get a load on the system), we are going to start with that at the time of upgrade and evaluate whether or not there is still a problem.

    If there is still a problem, we'll try to track down the things most commonly waiting and change the isolation level.
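
    When it gets to that point, a sketch of both steps (the database name is a placeholder): look at the top waits first, and if reader/writer blocking dominates, row versioning can be a cleaner alternative to sprinkling NOLOCK everywhere.

    -- Top waits since the stats were last cleared, to see what is actually being waited on
    SELECT TOP (10) wait_type, waiting_tasks_count, wait_time_ms
    FROM sys.dm_os_wait_stats
    WHERE wait_type NOT LIKE N'SLEEP%'          -- filter a few benign waits
      AND wait_type NOT LIKE N'LAZYWRITER%'
    ORDER BY wait_time_ms DESC;

    -- If reader/writer blocking dominates, row versioning changes the isolation behaviour
    -- database-wide; it needs exclusive access to switch on and should be load-tested first
    ALTER DATABASE MyDB SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;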

    If the problem still exists, we'll look at possibly splitting filegroups or setting up a vertical partition. These might be somewhat hard to do after the system is in production, but I don't want to introduce too many changes at once.

    I may play with an indexed view or two... but I don't really think it is going to be a solution for this particular problem.

    [Edit] By "we", I mean me and my pet squishy stress ball named Fred. 😉

    Seth Phelabaum


    Consistency is only a virtue if you're not a screwup. 😉

    Links: How to Post Sample Data :: Running Totals :: Tally Table :: Cross Tabs/Pivots :: String Concatenation

  • Sounds like a good plan.

    Good luck to you and Fred.

    Jason...AKA CirqueDeSQLeil
    _______________________________________________
    I have given a name to my pain...MCM SQL Server, MVP
    SQL RNNR
    Posting Performance Based Questions - Gail Shaw
    Learn Extended Events

  • Heh... everyone forgot how to eliminate all contention for the table immediately... rename the table. 😛

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • Jeff Moden (4/5/2010)


    Heh... everyone forgot how to eliminate all contention for the table immediately... rename the table. 😛

    That is too friendly - drop it altogether 😛

    Then again, the contention would just come up from the application.

    Jason...AKA CirqueDeSQLeil
    _______________________________________________
    I have given a name to my pain...MCM SQL Server, MVP
    SQL RNNR
    Posting Performance Based Questions - Gail Shaw
    Learn Extended Events

  • So data compression can help because it substantially lowers I/O. It sounds crazy since it will use more CPU to do the compression, but you end up saving resources if you go to disk often.

    If you're mostly in memory and CPU, then it might not help.

    Which reminds me: have you looked to see if you have CPU/memory pressure, or if the disk is really the biggest bottleneck? If you are more memory bound, you might need to start there.
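
    A rough way to answer that question from the DMVs - the thresholds are judgment calls, but the counters and file stats below are the usual starting points:

    -- Buffer manager counters give a quick read on memory pressure
    SELECT object_name, counter_name, cntr_value
    FROM sys.dm_os_performance_counters
    WHERE object_name LIKE N'%Buffer Manager%'
      AND counter_name IN (N'Page life expectancy', N'Lazy writes/sec');

    -- Per-file I/O stalls show whether the disks really are the bottleneck
    SELECT DB_NAME(vfs.database_id) AS database_name,
           mf.name                  AS file_name,
           vfs.io_stall_read_ms,
           vfs.io_stall_write_ms
    FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
    JOIN sys.master_files AS mf
      ON mf.database_id = vfs.database_id AND mf.file_id = vfs.file_id
    ORDER BY vfs.io_stall_read_ms + vfs.io_stall_write_ms DESC;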

  • Jeff Moden (4/5/2010)


    Heh... everyone forgot how to eliminate all contention for the table immediately... rename the table. 😛

    Not all contention... there are still the procs that scan all tables containing specific field names and then perform operations on them ;-).

    Seth Phelabaum


    Consistency is only a virtue if you're not a screwup. 😉

    Links: How to Post Sample Data :: Running Totals :: Tally Table :: Cross Tabs/Pivots :: String Concatenation

  • Steve Jones - Editor (4/5/2010)


    So data compression can help because it substantially lowers I/O. It sounds crazy since it will use more CPU to do the compression, but you end up saving resources if you go to disk often.

    If you're mostly in memory and CPU, then it might not help.

    Which reminds. Have you looked to see if you have CPU/Memory pressure or if the disk is really the biggest bottleneck? If you are more memory bound, you might need to start there.

    I'm 99% sure it's I/O based. I haven't looked in a little while now, but we were typically averaging only 10% CPU usage and (although this probably isn't the best indicator) routinely had a page life expectancy of 2K+. I'd throw out specs for our disk setup, but I'd have to track them down.

    Seth Phelabaum


    Consistency is only a virtue if you're not a screwup. 😉

    Links: How to Post Sample Data :: Running Totals :: Tally Table :: Cross Tabs/Pivots :: String Concatenation

  • If that's the case, you might be surprised what compression does for you.
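
    For anyone wanting to test that before committing, a sketch with assumed object names - estimate first, then rebuild (data compression is an Enterprise Edition feature on SQL Server 2008):

    -- Preview how much PAGE compression would save for one table
    EXEC sp_estimate_data_compression_savings
         @schema_name      = N'dbo',
         @object_name      = N'BusyTable',
         @index_id         = NULL,
         @partition_number = NULL,
         @data_compression = N'PAGE';

    -- Apply it; this rebuilds the table, so schedule it for a quiet window
    ALTER TABLE dbo.BusyTable REBUILD WITH (DATA_COMPRESSION = PAGE);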
