December 7, 2011 at 9:53 pm
Comments posted to this topic are about the item sp_prepexec and nHibernate
December 8, 2011 at 1:07 am
Good article, and better than its rating indicates. (So I gave it 5 stars to balance it.)
A few days ago I captured this in a trace, when a customer complained of a few time-outs. I suspect that one of their developers uses a self-made program to query the database for our application:
declare @p1 int
set @p1=1
declare @p2 bigint
set @p2=175483777308924
exec [sys].sp_getschemalock @p1 output,@p2 output,N'"dbo"."customer"'
select @p1, @p2
declare @p1 int
set @p1=41
exec sp_prepexec @p1 output,N'@P1 int',N'SELECT "Tbl1005"."customer_id" "Col1127","Tbl1005"."code" "Col1136" FROM "dbo"."customer" "Tbl1005" WHERE @P1="Tbl1005"."customer_id"',99534
select @p1
go
exec sp_execute 41,99551
go
>>> Many dozens of sp_execute....
exec sp_unprepare 41
go
exec [sys].sp_releaseschemalock 1
go
And within a few seconds the same code, and again, and again...
I would very much like to know what the effect is of this snippet:
exec [sys].sp_getschemalock @p1 output,@p2 output,N'"dbo"."customer"'
... but haven't found the time to investigate it.
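If anyone wants to poke at it, something like this from a second session, while the schema lock call is still open, should show what lock it actually takes (a rough sketch only; I expect a Sch-S schema stability lock on dbo.customer, but haven't verified that):
-- Run from a second session while sp_getschemalock is still held open.
-- Shows any object-level locks currently held on dbo.customer.
SELECT l.request_session_id,
       l.resource_type,
       OBJECT_NAME(l.resource_associated_entity_id) AS object_name,
       l.request_mode,
       l.request_status
FROM sys.dm_tran_locks AS l
WHERE l.resource_database_id = DB_ID()
  AND l.resource_type = 'OBJECT'
  AND l.resource_associated_entity_id = OBJECT_ID(N'dbo.customer');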
December 8, 2011 at 1:37 am
I've seen something like this before with linked servers.
In that particular instance, the code below alleviated the problem:
EXEC master.dbo.sp_serveroption @server=N'YourServer', @optname=N'lazy schema validation', @optvalue=N'true'
Thanks for the star rating. If people want to give an article a low rating that's fine, but it would help if they said why.
December 8, 2011 at 1:45 am
Excellent article, thank you for providing a rationale for the gut feeling acquired when working with ORM developers.
December 8, 2011 at 4:16 am
Great article as always, Dave. When you spend a lot more time with nHibernate there are plenty of interesting/challenging things to find. I forget the details now, but when we made the changes about 2 years ago to use the 'prepare_sql' option we managed to kill our site, so we had to undo that change quickly. I'll admit our developers' use of this ORM tool seems pretty bad to me, as I have had a long-standing battle to get them to either use it properly or bin it and go with something better. Don't get me wrong, we've got some extremely good developers and some are the best I've worked with. However, SQL remains a black box for most of them.
The queries you get when tracing are nigh-on unreadable and need a fair bit of formatting before you can actually work with them, and a lot of the queries I see are very long, which makes this even more time consuming. The way it approaches aliases is not easy on the eye, and it often lumps statements together with no visible separation/formatting in the code.
One of the main problems I see with nHibernate is the way it approaches parameters for IN statements. I fully admit it's most likely our developers' way of using it, but essentially when you pass in a list of IDs they come through in the underlying query as 'SELECT blah FROM myTable WHERE Id IN (@p0, @p1, @p2.....)'. This ends up with a different hash of the execution plan for each variation of the same query. If I pass in 100 parameters I get one variation. If I pass in 101 I get an otherwise identical plan, albeit with a different hash, so technically it's a different plan. As such we spend most of our time compiling execution plans that we only use once. Imagine the above scenario where we get up to 2,000 params and you can see the problem (2,000 potential variations on a query). Yes, we've actually maxed out on params passed to a statement. I reckon we use 80% of the plans once before they are aged out. I've been told that the more recent versions have addressed this but I haven't seen it yet.

The width of an nVarChar also seems to cause it confusion. If I have a param with 7 characters I get one variation with the query using a param of nVarChar(7). If I change the value being passed in to 8 characters it changes the param in the query to nVarChar(8). Combine the two comments above and it gets silly. I haven't yet found anyone who will agree to setting this to (and here is my crazy idea) the width of the column.
When I run a side-by-side comparison with properly sized params, passing in a table variable (or even a varbinary parsed into a table variable), the performance difference is huge. Then, with plan re-use, a proper version becomes so much more scalable and performant.
I challenged our developers to find a suitable solution and that I would buy lunch for whoever solved it in nHibernate. My money remains in my pocket.
I found an interesting read at http://www.codeproject.com/KB/database/NHibernate_Perf2.aspx and figured that I could use nHibernate's HBM file to set a really performant query in the config and map the output to the object that the developers wanted. For me this seemed to work. The developers I've spoken to have said no, because they don't want to alter the HBM config file. Argghhhh! It's a shame, because if I'm right this would solve it for us all and allow the query to be source-controlled along with the code.
The conclusion I've come to after just over 2 years is that ultimately it does save time when coding, but for a high-concurrency site it gives us a glass ceiling that will hamper us at some point. I can counter a decent chunk of this, but we give up a lot of tuning options, diagnosis, performance and so on. I've had to resort on occasion to figuring out patterns of index use over time intervals to try to reverse-engineer what query is likely being used, and then to figure out where the query came from so that we can track it back to the code. Do-able, but not exactly fun. Finding and diagnosing a problem nHibernate query turns into finding a needle in a stack of needles, or death by a thousand cuts.
Overall, it's fine for gaining development speed, but developers seem to forget the trade-off of them doing this work (they only want the initial development benefit, not the full cost), so we effectively give up any hope of making the queries perform well when you throw load at them. I DO think it can work, but I haven't seen anything impressive from the DBA side of things yet. It's school dinner coding: everyone gets fed, but it's not exactly Claridge's, is it?
Recently, we've been putting together something new, so I took the approach of tuning the hell out of everything before seeing if they could get nHibernate to match it. The answer was a quick and simple 'no'. We went with a call to a stored proc instead. It was surprisingly easy to call this in code.
December 8, 2011 at 4:23 am
I have a couple of questions about this article:
1.) Are the plans cached by sp_prepexec stored in the same place as the plans cached when using Stored Procedures?
2.) Do you think that it is a bug with SQL Server that plans created using sp_prepexec are not re-used across connections?
3.) Does the "per connection" plan re-use behaviour change between SQL Server versions (2000/2005/2008)?
4.) If you are calling SPs from ADO.NET, are the plans re-used across connections in a web application?
December 8, 2011 at 4:44 am
Excellent write up. 5 Stars. Useful information for anyone that has to deal with this stuff.
I'm not an nHibernate expert, nor do I play one, anywhere, but I'm pretty sure the prepare statements are necessary to avoid the issues you get with parameter sizing in nHibernate. Here's a link to the solution and here's one to the problem.
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
December 8, 2011 at 4:50 am
ryan.offord (12/8/2011)
One of the main problems I see with nHibernate is the way it approaches parameters for IN statements. I fully admit it's most likely our developers' way of using it, but essentially when you pass in a list of IDs they come through in the underlying query as 'SELECT blah FROM myTable WHERE Id IN (@p0, @p1, @p2.....)'. This ends up with a different hash of the execution plan for each variation of the same query. If I pass in 100 parameters I get one variation. If I pass in 101 I get an otherwise identical plan, albeit with a different hash, so technically it's a different plan. As such we spend most of our time compiling execution plans that we only use once. Imagine the above scenario where we get up to 2,000 params and you can see the problem (2,000 potential variations on a query). Yes, we've actually maxed out on params passed to a statement. I reckon we use 80% of the plans once before they are aged out. I've been told that the more recent versions have addressed this but I haven't seen it yet.
I use NHibernate on a daily basis and I sometimes need to use the IN clause, and indeed the behaviour is as you say. Could you give me an example, using SQL Server 2008, of what you would expect to see instead of the below?
select * from TableA where id in (@p1, @p2, @p3)
ryan.offord (12/8/2011)
The width of an nVarChar also seems to cause it confusion. If I have a param with 7 characters I get one variation with the query using a param of nVarChar(7). If I change the value being passed in to 8 characters it changes the param in the query to nVarChar(8). Combine the two comments above and it gets silly. I haven't yet found anyone who will agree to setting this to (and here is my crazy idea) the width of the column.
This has been fixed in the versions of NHibernate since 2.1, I believe, by setting the parameter sizes by default and not just when prepare_sql is set to true (you will also need to make sure you have defined your "length" attributes in your mapping files). Your developers may want to consider updating.
December 8, 2011 at 5:15 am
Rough example, as it's nearly lunchtime.
nHibernate version
==================
(this is roughly what it comes out as - made slightly easier for reading)
DECLARE @p0 INT -- etc... one DECLARE/SET pair per param
SET @p0 = 1234 -- and so on
SELECT * FROM Blah this_ WHERE this_.ID IN (@p0, @p1, @p2, @p3, @p4, @p5, @p6, @p7, @p8)
Better version
==============
CREATE PROCEDURE GetMyList (@BinaryIdList varbinary(8000))
AS
DECLARE @IdList TABLE (Id INT PRIMARY KEY CLUSTERED)
INSERT INTO @IdList (Id) SELECT PIA.Value FROM dbo.ParseIntegerArray(@BinaryIdList, 4) AS PIA
SELECT * FROM Blah this_ WHERE this_.ID IN (SELECT Id FROM @IdList)
GO
DECLARE @BinaryIdListInput varbinary(8000)
SET @BinaryIdListInput = 0x000004D20000162E -- 1234 and 5678 packed as 4-byte integers, etc...
I'd then call a stored proc using EXEC GetMyList @BinaryIdList = @BinaryIdListInput
There is a more performant way of doing this by joining on the table variable (see the sketch below), but the above gives you the closest comparison. The single version of the execution plan comes from passing in the varbinary in this case (I've set it above to simplify things a touch for reading).
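Roughly like this inside the proc, for the join variant (a sketch only; same hypothetical Blah table and @IdList table variable as above):
-- Join variant: join the parsed ids rather than using IN.
SELECT this_.*
FROM Blah this_
JOIN @IdList i ON i.Id = this_.ID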
I do use a function to parse the array, and this is as follows:
CREATE FUNCTION dbo.ParseIntegerArray
(
    @Binary      IMAGE
   ,@ElementSize TINYINT
)
RETURNS @Array TABLE
(
    [Index] SMALLINT IDENTITY(0,1)
   ,[Value] VARBINARY(8) NOT NULL
)
AS
BEGIN
    IF ( @ElementSize IS NOT NULL AND @ElementSize IN ( 1, 2, 4, 8 ) )
    BEGIN
        DECLARE @Value  VARBINARY(8)
               ,@Length SMALLINT
               ,@Index  SMALLINT;

        -- initialize variables
        SET @Index = 1;
        SET @Length = DATALENGTH( @Binary );

        -- extract values
        WHILE ( @Index <= @Length )
        BEGIN
            -- get value
            SET @Value = SUBSTRING( @Binary, @Index, @ElementSize );

            -- insert into table
            INSERT INTO @Array ( [Value] ) VALUES ( @Value );

            -- increment index
            SET @Index = @Index + @ElementSize;
        END;
    END;

    RETURN;
END
GO
December 8, 2011 at 5:33 am
I realise this is not strictly on topic but we use a very similar tactic but instead use varchar(8000) parameters using a dbo.Split function which utilises a number table. This removes the need for a loop in your function. Also why varbinary over a varchar(8000) or similar?
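For reference, the kind of split I mean looks something like this (a minimal sketch, assuming a dbo.Numbers table of integers 1..8000 already exists; the real function may differ):
-- Minimal numbers-table split sketch: no loop, one row per element.
CREATE FUNCTION dbo.Split (@List varchar(8000), @Delim char(1))
RETURNS TABLE
AS
RETURN
    -- An element starts wherever the preceding character is a delimiter
    -- (or at position 1); it runs up to the next delimiter.
    SELECT CAST(LTRIM(SUBSTRING(@List, N.n,
               CHARINDEX(@Delim, @List + @Delim, N.n) - N.n)) AS int) AS Value
    FROM dbo.Numbers AS N
    WHERE N.n <= LEN(@List)
      AND SUBSTRING(@Delim + @List, N.n, 1) = @Delim;
GO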
December 8, 2011 at 6:40 am
Nice article.
We could not keep up with performance issues caused by using hibernate ("experts" consulted, etc.). We finally convinced our management that hibernate is a development tool and not intended for a production environment. The development staff now works with the DBA staff who develops the SQL code. Performance at the app and db layers is much improved.
December 8, 2011 at 6:41 am
Thx David, excellent article. I gave it a five-star rating.
We are using nHibernate here. We have noticed bloating of the plan cache, therefore the first thing I did was to enable the 'optimize for ad hoc workloads' setting and then clear the cache. I have not seen much bloating after that.
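(For reference, that's the server-level option; it's an advanced one, hence the first sp_configure call:)
-- Server-level setting referred to above (SQL 2008+).
EXEC sys.sp_configure N'show advanced options', 1;
RECONFIGURE;
EXEC sys.sp_configure N'optimize for ad hoc workloads', 1;
RECONFIGURE;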
I also made sure that the developers used the right data types when passing parameters. I am also thinking of passing the IN-list parameters in as table-valued parameters.
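Something like this, roughly (a sketch only; the type and proc names are made up for illustration, and dbo.customer with customer_id/code is borrowed from the trace earlier in the thread; SQL 2008+):
-- Rough TVP sketch: one cached plan no matter how many ids are passed.
CREATE TYPE dbo.IdList AS TABLE (Id int PRIMARY KEY);
GO
CREATE PROCEDURE dbo.GetCustomersByIds (@Ids dbo.IdList READONLY)
AS
    SELECT c.customer_id, c.code
    FROM dbo.customer AS c
    JOIN @Ids AS i ON i.Id = c.customer_id;
GO
DECLARE @Ids dbo.IdList;
INSERT INTO @Ids (Id) VALUES (99534), (99551);
EXEC dbo.GetCustomersByIds @Ids = @Ids;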
-Roy
December 8, 2011 at 7:26 am
CyberWomble (12/8/2011)
I have a couple of questions about this article:
1.) Are the plans cached by sp_prepexec stored in the same place as the plans cached when using Stored Procedures?
2.) Do you think that it is a bug with SQL Server that plans created using sp_prepexec are not re-used across connections?
3.) Does the "per connection" plan re-use behaviour change between SQL Server versions (2000/2005/2008)?
4.) If you are calling SPs from ADO.NET, are the plans re-used across connections in a web application?
sp_PrepExec stores its plans in the same place. You can actually store your query in a @variable, run it through both sp_PrepExec and sp_ExecuteSQL, and see one reuse the other's plan. It matters not whether the query is called from separate sessions or applications, providing the statement text is absolutely identical. Bear in mind the cache is case-sensitive.
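A rough repro sketch of that, using the dbo.customer table from the trace earlier in the thread (adjust names to taste):
-- Run the identical statement text through sp_prepexec and sp_executesql,
-- then check the cache. If the plan is shared, usecounts should show both
-- executions against a single 'Prepared' entry.
DECLARE @sql nvarchar(500);
SET @sql = N'SELECT customer_id FROM dbo.customer WHERE customer_id = @P1';

DECLARE @handle int;
EXEC sp_prepexec @handle OUTPUT, N'@P1 int', @sql, 99534;
EXEC sp_unprepare @handle;

EXEC sp_executesql @sql, N'@P1 int', @P1 = 99534;

SELECT cp.usecounts, cp.objtype, st.text
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
WHERE st.text = @sql;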
What little documentation there is for sp_PrepExec describes its plan as being local, but I found that it is the handle that is local, not the plan. If that's a bug then please, Microsoft, DON'T FIX IT!
I haven't tested the behaviour against SQL 2000 or 2005. I probably should, but as an architect I'm desperately trying to avoid encouraging the use of 7+ year old technology.
December 8, 2011 at 8:29 am
I have seen issues at clients where the prep/unprep stuff was a huge perf issue because it was getting called a bajillion times for stuff that was only executed ONCE! It is ridiculous to go through the round trips and extra overhead to prep something and then tear that prep down just to make ONE call to the database!!
Best,
Kevin G. Boles
SQL Server Consultant
SQL MVP 2007-2012
TheSQLGuru on googles mail service
December 8, 2011 at 1:46 pm
Great article. I also rated it 5.
I tried looking through DMVs, Google, etc., trying to find any mention of that handle, but NOTHING!! 🙁
David.Poole (12/8/2011)
If people want to give an article a low rating that's fine, but it would help if they said why.
As a fellow author I completely agree that it would be nice to get feedback. Given how many really good articles are rated lower than expected with nothing but positive comments in the forum, I sometimes think some people just don't like the topic and aren't necessarily rating the article.
Take care,
Solomon...
SQL# — https://SQLsharp.com/ ( SQLCLR library of over 340 Functions and Procedures )
Sql Quantum Lift — https://SqlQuantumLift.com/ ( company )
Sql Quantum Leap — https://SqlQuantumLeap.com/ ( blog )
Info sites — Collations • Module Signing • SQLCLR