February 1, 2013 at 11:43 am
I'm aware that it's generally accepted to let SQL Server control locking in most cases, but recently I've had a few servers experiencing slowness. I've looked at all sorts of ways to improve the situation, but in some ways my hands are tied: the database(s) isn't something we designed, and we can't just do what we want with it unless we want to blow up the support agreement we have with the application vendor, and that can't happen.
In addition to the slowness, deadlocks have been occurring. I've been able to examine the existing indexes, identify missing ones, and find indexes that are either useless or redundant. This is an OLAP app, so over-indexing isn't wanted. During this work I've noticed that page lock escalations have caused a lot of wait time. One table in particular has over a billion rows, and it is constantly being bulk inserted into and queried at the same time.
So, I've done what I can (for now) as far as adding or tweaking indexes, but one of the biggest improvements has come from altering a given index to disallow page locks (I have left row locks on). Because these servers can't be taken down for maintenance regularly, and because we are running Standard Edition (no online rebuild), page locks have to be turned back on to allow a reorg of the index. I have a procedure that looks at index fragmentation and, if it meets the criteria, switches page locks on, reorganizes the index, and then turns them back off.
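For anyone curious, the routine looks roughly like this (a sketch only; the table name, index name, and the 10% threshold are placeholders for my actual criteria):

```sql
-- Sketch of the maintenance routine: check fragmentation, and if it
-- qualifies, re-enable page locks, reorganize, then disable them again.
-- REORGANIZE fails if ALLOW_PAGE_LOCKS is OFF, hence the toggling.
DECLARE @frag float;

SELECT @frag = avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.BigTable'),
                                    NULL, NULL, 'LIMITED')
WHERE index_id = 2;  -- placeholder: the index in question

IF @frag > 10.0
BEGIN
    ALTER INDEX IX_BigTable_Key ON dbo.BigTable SET (ALLOW_PAGE_LOCKS = ON);
    ALTER INDEX IX_BigTable_Key ON dbo.BigTable REORGANIZE;
    ALTER INDEX IX_BigTable_Key ON dbo.BigTable SET (ALLOW_PAGE_LOCKS = OFF);
END
```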
My deadlocks are gone, and application speed has increased dramatically (one particular process took 3 1/2 minutes and is now down to 30 seconds, for example). Overall, things are looking great. However, I'd like to be sure I'm not fooling myself with the page locks and setting myself up for something bad to come. As mentioned, I didn't design the schema, the app, etc., so I'm bound by what I can do and still have it supported by the vendor.
Having said all that, what do you all think of this strategy?
Cheers
February 1, 2013 at 11:53 am
If you're already seeing lock escalation, disabling page locks will likely make that escalation worse.
With page locks allowed, SQL can take locks at the row, page, or table level. If it finds that too many locks have been acquired or lock memory is low, those row or page locks will be escalated to table locks.
Disable page locks and you force SQL either to take more locks (thus more chance of escalation) or to just start with table locks (thus less concurrency).
Have you considered enabling read committed snapshot? It requires no code changes, just a database setting.
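If you want to see whether escalation is actually happening on those indexes, something along these lines against the operational stats DMV will show the attempt and success counts (a sketch; run it in the database in question):

```sql
-- How often lock escalation has been attempted, and succeeded, per index
SELECT OBJECT_NAME(s.object_id) AS table_name,
       i.name AS index_name,
       s.index_lock_promotion_attempt_count,
       s.index_lock_promotion_count
FROM sys.dm_db_index_operational_stats(DB_ID(), NULL, NULL, NULL) AS s
JOIN sys.indexes AS i
  ON i.object_id = s.object_id AND i.index_id = s.index_id
WHERE s.index_lock_promotion_attempt_count > 0
ORDER BY s.index_lock_promotion_count DESC;
```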
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
February 1, 2013 at 12:15 pm
Thank you, Gail, for the quick reply. I've selectively turned page locks off on certain indexes over the course of a few weeks, and I haven't had any issues (yet) with locks escalating from row to table. As I mentioned, it all seems to be going too well, and that prompted my post.
I hadn't considered read committed snapshot, as it never even occurred to me. What would you advise I look out for with that setting? Nearly all of the servers are in simple recovery mode and are not replicated or log shipped, but one server has a database in full recovery mode because it is log shipped to a DR site, and some parts of it are replicated to a reporting server. Other than that, our environment isn't too complex. We just have a lot of data coming in. Tons of it.
Cheers
February 1, 2013 at 12:46 pm
Just increased TempDB usage. The snapshot isolation levels use row versioning instead of locks, and those row versions go into TempDB. Long-running transactions will result in the versions not getting cleaned up as quickly as they should, so just make sure you have plenty of room in TempDB if you choose to use it.
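A quick way to keep an eye on version store growth is the file space usage DMV (a sketch; run it in the context of TempDB):

```sql
-- Current size of the row version store in TempDB, in KB
-- (version_store_reserved_page_count is in 8 KB pages)
SELECT SUM(version_store_reserved_page_count) * 8 AS version_store_kb
FROM sys.dm_db_file_space_usage;
```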
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
February 1, 2013 at 1:07 pm
TempDB has plenty of room, but unfortunately these servers, with the exception of one, don't have that database on its own array. It is on the same disk as the user database data files (RAID 10), with the logs on a RAID 1 array. All that was set up before I started here, and as budgets get better we will be able to get better servers, but for now it is what it is. If increased tempdb activity is the price to be paid for the benefits offered, then I'm in favor of switching this on.
One last question. I read that when executing the command to enable RCS, there can't be any other connections or transactions open in that database. Then the article goes on to say the command will run but only takes effect for new transactions. I'm not sure what to believe there, so I'll test it on my own machine.
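For reference, the command I plan to test is along these lines (the database name is a placeholder; WITH ROLLBACK IMMEDIATE kills other connections' open transactions so the ALTER doesn't wait forever, so a quiet window is still advisable):

```sql
-- Enable read committed snapshot; needs exclusive access to the database
ALTER DATABASE MyDatabase
SET READ_COMMITTED_SNAPSHOT ON
WITH ROLLBACK IMMEDIATE;

-- Verify the setting afterwards
SELECT name, is_read_committed_snapshot_on, snapshot_isolation_state_desc
FROM sys.databases
WHERE name = 'MyDatabase';
```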
Cheers
February 1, 2013 at 3:03 pm
jfogel (2/1/2013)
One last question. I read that when executing the enable RCS command that there can't be any other connections or transactions open in that database.
Correct
Then the article goes on to basically say it (the command) will run but only takes effect for new transactions.
That's the 'Allow snapshot isolation' setting
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
February 1, 2013 at 3:23 pm
GilaMonster (2/1/2013)
jfogel (2/1/2013)
One last question. I read that when executing the enable RCS command that there can't be any other connections or transactions open in that database.
Correct
Then the article goes on to basically say it (the command) will run but only takes effect for new transactions.
That's the 'Allow snapshot isolation' setting
Keep in mind that when you enable RCS, you're the only one active in the db; thus, all transactions after that are "new" automatically :-).
You should also ensure you have sufficient free space in any data page that will (or may) get updated after RCS is enabled, since RCS requires an extra 14 bytes per row. Otherwise you may get a nasty performance shock from page splits.
And if you have any large data types -- such as varchar(max) -- I believe that the first update to any large data column -- any byte of it -- after RCS is turned on will force the entire large value to be rewritten with extra overhead bytes per page. Naturally this could really harm performance on that first update. [As I understand it, you don't get hit with any LOB overhead until the large value is actually changed, but that's just from reading; I don't have LOBs on my RCS dbs.]
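One way to build in that free space ahead of time is to rebuild the affected indexes with a lower fill factor before flipping RCS on (offline on Standard Edition, so it needs a window; the table name and the 90% figure are placeholders, not recommendations):

```sql
-- Leave ~10% free space per leaf page for the 14-byte versioning tag
ALTER INDEX ALL ON dbo.BigTable
REBUILD WITH (FILLFACTOR = 90);
```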
SQL DBA,SQL Server MVP(07, 08, 09) "It's a dog-eat-dog world, and I'm wearing Milk-Bone underwear." "Norm", on "Cheers". Also from "Cheers", from "Carla": "You need to know 3 things about Tortelli men: Tortelli men draw women like flies; Tortelli men treat women like flies; Tortelli men's brains are in their flies".
February 1, 2013 at 4:12 pm
Thanks for the info. I know for a fact there are tables with varchar(max) and such, and I've seen a few of the procedures the app calls where they set the isolation level. They even have NOLOCK hints in certain places. They designed their app with little thought to good database design. This sort of change is certainly something that must go through a test system before I will deploy it in prod.
Cheers