January 5, 2011 at 8:08 am
How can we quantify the resources used by locks when running a simple SELECT statement filtered on a clustered index? I was not able to get this information from Profiler; it shows the Lock:Acquired events but no cost information.
Thanks
-Roy
January 5, 2011 at 8:29 am
According to BOL a lock takes approximately 100 bytes of memory, so I guess you could check how many locks were taken and get an estimate of the memory usage. Of course, this will only be a rough estimate and tells you nothing about CPU. By the way – why do you need this information?
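Something along these lines could give that count (a rough sketch only; run it inside an open transaction so the locks are still held, and note that the 100-bytes-per-lock figure is the BOL approximation, not an exact number):

```sql
-- Count the locks held by the current session and estimate memory,
-- assuming ~100 bytes per lock structure (BOL's approximate figure).
SELECT COUNT(*)       AS locks_held,
       COUNT(*) * 100 AS approx_lock_memory_bytes
FROM   sys.dm_tran_locks
WHERE  request_session_id = @@SPID;
```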
Adi
--------------------------------------------------------------
To know how to ask questions and increase the chances of getting answers:
http://www.sqlservercentral.com/articles/Best+Practices/61537/
For better answers on performance questions, click on the following...
http://www.sqlservercentral.com/articles/SQLServerCentral/66909/
January 5, 2011 at 8:41 am
Thanks... It's for a blog post I am planning to write.
-Roy
January 5, 2011 at 9:07 am
I don't know of any documentation on that. You could probably get an approximation through donkey-work. Take two copies of the same database, set one to Read-Only (which means SQL Server will not take locks in it), and perform sets of queries on both, measuring costs in each. This won't have much accuracy, so make sure your precision on it is low.
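The experiment might look roughly like this (a sketch; the database and table names are placeholders, and you would want to clear caches and repeat runs to get stable numbers):

```sql
-- Make one copy read-only; SQL Server takes no shared locks in it.
ALTER DATABASE MyDb_Copy SET READ_ONLY;

-- Time the same query against both copies and compare.
SET STATISTICS TIME ON;
SELECT COUNT(*) FROM MyDb.dbo.SomeTable      WHERE KeyCol BETWEEN 1 AND 100000; -- locking
SELECT COUNT(*) FROM MyDb_Copy.dbo.SomeTable WHERE KeyCol BETWEEN 1 AND 100000; -- lock-free
SET STATISTICS TIME OFF;
```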
- Gus "GSquared", RSVP, OODA, MAP, NMVP, FAQ, SAT, SQL, DNA, RNA, UOI, IOU, AM, PM, AD, BC, BCE, USA, UN, CF, ROFL, LOL, ETC
Property of The Thread
"Nobody knows the age of the human race, but everyone agrees it's old enough to know better." - Anon
January 5, 2011 at 9:47 am
Thanks Gus. It would give me a rough idea of the cost of locking.
-Roy
January 5, 2011 at 3:13 pm
Hey Roy,
This is a complex topic, but the most useful general point I can make is that locking tends to add CPU overhead rather than anything else. Alex Kuznetsov wrote about it: http://sqlblog.com/blogs/alexander_kuznetsov/archive/2008/07/12/cpu-overhead-for-higher-isolation-levels.aspx
Linchi Shea also wrote about performance differences with read-only databases here: http://sqlblog.com/blogs/linchi_shea/archive/2007/10/01/performance-impact-setting-a-database-to-read-only.aspx - be sure to read the comments and follow-up posts on that one.
The other thing to bear in mind is that locking only really gets expensive when there are lots and lots of locks. That is most often the case if the engine chooses row locks, but these can often be skipped, as I know you know from reading http://sqlblog.com/blogs/paul_white/archive/2010/11/01/read-committed-shared-locks-and-rollbacks.aspx
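To see the effect of granularity, one could force it with table hints and compare lock counts (a sketch; the table name is a placeholder, and HOLDLOCK is used only so the locks survive long enough to count):

```sql
-- Force row-level locks and count them; swap ROWLOCK for PAGLOCK
-- in a second run to see far fewer page-level locks instead.
BEGIN TRANSACTION;
SELECT COUNT(*) FROM dbo.SomeTable WITH (ROWLOCK, HOLDLOCK);
SELECT COUNT(*) AS locks_held
FROM   sys.dm_tran_locks
WHERE  request_session_id = @@SPID;
ROLLBACK;
```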
Anyway, definitely a subject worth writing about, but you have to take a wide range of subtleties into consideration. Good luck.
Paul
Paul White
SQLPerformance.com
SQLkiwi blog
@SQL_Kiwi
January 5, 2011 at 11:38 pm
GSquared (1/5/2011)
I don't know of any documentation on that. You could probably get an approximation through donkey-work. Take two copies of the same database, set one to Read-Only (which means SQL Server will not take locks in it), and perform sets of queries on both, measuring costs in each. This won't have much accuracy, so make sure your precision on it is low.
I guess another way to accomplish the same thing is simply with 2 queries using WITH(TABLOCKX) on one of the queries so it only takes a single lock.
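Jeff's idea sketched out (the table name is a placeholder; TABLOCKX takes one exclusive table lock instead of many row or page locks, so the CPU difference between the two runs approximates the locking overhead):

```sql
SET STATISTICS TIME ON;
SELECT COUNT(*) FROM dbo.SomeTable;                 -- normal row/page locking
SELECT COUNT(*) FROM dbo.SomeTable WITH (TABLOCKX); -- a single table lock
SET STATISTICS TIME OFF;
```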
--Jeff Moden
Change is inevitable... Change for the better is not.
January 6, 2011 at 6:06 am
Thanks Jeff, thanks Paul. Now I can do some more checking before I write some blunder in my blog post. 🙂 It is not going to be a complicated post, just a couple of simple blogs about locks.
-Roy