February 23, 2009 at 10:07 am
I was asked for an SP that would make life easier for testers. More of a fun project for myself than anything else. The part where I've hit a bottleneck is passing a varchar to a timestamp: I first convert the timestamp to bigint, then to varchar, and then try to pass it to an SP that expects a timestamp parameter. Boom, I get this message:
The name 'EXECUTE DEL_table1 @table1Id = 'BF0B0A3B-B032-4BEB-A448-01E1A1001CD2', @PrimaryKeyTimeStamp = 2198' is not a valid identifier.
Not quite sure how to make the dynamic SQL work with this one. Any ideas?
CREATE PROCEDURE [dbo].[SuperFastDelete]
    @GuidID uniqueidentifier
    ,@TableName NVarchar(150)
AS
BEGIN
    SET NOCOUNT ON

    DECLARE @sql NVarchar(4000), @PrimaryTimeStamp Varchar(100), @ParmDefinition NVARCHAR(500)

    -- Look up the row's timestamp, casting it through bigint to a string we can concatenate
    SET @sql = N'SELECT @TimeStampOut = Cast(Cast(TimeStamp as BigInt) as NVarchar(100)) FROM ' + @TableName + ' WHERE ' + @TableName + 'ID = ' + Char(39) + CONVERT(NVarchar(50), @GuidID) + Char(39)
    SET @ParmDefinition = N'@TimeStampOut NVARCHAR(100) OUTPUT'
    EXECUTE sp_executesql @sql, @ParmDefinition, @TimeStampOut = @PrimaryTimeStamp OUTPUT

    -- Build and run the call to the table-specific delete procedure
    SET @sql = 'EXECUTE DEL_' + @TableName + ' @' + @TableName + 'Id = ' + Char(39) + CONVERT(Varchar(50), @GuidID) + Char(39) + ', @PrimaryKeyTimeStamp = ' + @PrimaryTimeStamp
    EXEC @sql
END
GO
February 23, 2009 at 12:00 pm
EXEC (@SQL)
--Jeff Moden
Change is inevitable... Change for the better is not.
February 23, 2009 at 12:03 pm
[font="Verdana"]Could you give us some hint as to what's not working? That way I will have a better idea of what to look for.[/font]
February 23, 2009 at 12:07 pm
Boy that was a tough one! I am such a bonehead.
February 23, 2009 at 12:27 pm
Heh... don't feel bad... it took me half a day to figure out what I was doing wrong the first time I forgot the parentheses.
--Jeff Moden
Change is inevitable... Change for the better is not.
February 23, 2009 at 12:31 pm
Sometimes the messages lead you astray. I thought it was having trouble with the bigint in an SP, so I did not think I had missing parens. I really appreciate the help. Thanks.
February 23, 2009 at 12:36 pm
[font="Verdana"]Oh yeah. I think we've all run into that one. 😀
Try forcing yourself to use sp_executesql rather than exec (). Gives you better options for parameters.
[/font]
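For illustration, here's a minimal sketch of the difference (the table1 table and table1ID column follow the names in the original post and are otherwise hypothetical):

DECLARE @GuidID uniqueidentifier, @sql NVarchar(4000)
SET @GuidID = NEWID()

-- EXEC () can only run a finished string, so values must be concatenated in:
SET @sql = N'DELETE FROM dbo.table1 WHERE table1ID = ''' + CONVERT(NVarchar(50), @GuidID) + ''''
EXEC (@sql)

-- sp_executesql keeps the statement text constant and passes a typed parameter:
SET @sql = N'DELETE FROM dbo.table1 WHERE table1ID = @ID'
EXEC sp_executesql @sql, N'@ID uniqueidentifier', @ID = @GuidID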
February 23, 2009 at 6:08 pm
The only time I use sp_executesql is if the dynamic SQL needs to put a value in a return variable. Since that's normally done only for GUI code, I don't usually have the need to use it and, when I do, I can usually work a way out without putting values into variables... heh... it's RBAR.
--Jeff Moden
Change is inevitable... Change for the better is not.
February 23, 2009 at 6:39 pm
[font="Verdana"]
Books Online
Being able to substitute parameters in sp_executesql offers the following advantages to using the EXECUTE statement to execute a string:
- Because the actual text of the Transact-SQL statement in the sp_executesql string does not change between executions, the query optimizer will probably match the Transact-SQL statement in the second execution with the execution plan generated for the first execution. Therefore, SQL Server does not have to compile the second statement.
- The Transact-SQL string is built only one time.
- The parameter is specified in its native format. Casting to Unicode is not required.
So basically, if you are executing dynamic SQL more than once, it's a little more efficient to use sp_executesql.
[/font]
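To see the plan-reuse point in runnable form, a small sketch (dbo.Orders and OrderID are hypothetical names):

DECLARE @sql NVarchar(200), @i Int
SET @sql = N'SELECT * FROM dbo.Orders WHERE OrderID = @OrderID'
SET @i = 1
WHILE @i <= 3
BEGIN
    -- the statement text never changes between calls, so the plan compiled
    -- on the first call can be matched and reused on the later ones
    EXEC sp_executesql @sql, N'@OrderID Int', @OrderID = @i
    SET @i = @i + 1
END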
February 23, 2009 at 7:53 pm
Absolutely... it supposedly allows the reuse of execution plans in a multiple-"hit" environment such as the one a GUI with lots of users may provide.
In a batch environment, it will cause one recompile, much like in a GUI environment. Of course, waiting 5 or 10 minutes will also cause a recompile even in the GUI environment, because it doesn't take a very large data change in the underlying tables to trigger one. And for the batch environment, it's probably best to cause a recompile so you get the most efficient execution plan.
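As an aside (this hint isn't mentioned in the thread; it's just the standard way to ask for that behavior), a statement-level OPTION (RECOMPILE) forces a fresh plan on every execution. The table name here is hypothetical:

EXEC sp_executesql
    N'SELECT * FROM dbo.Orders WHERE OrderDate >= @d OPTION (RECOMPILE)',
    N'@d datetime',
    @d = '20090101'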
--Jeff Moden
Change is inevitable... Change for the better is not.
February 24, 2009 at 5:53 am
Different tables can be passed as input to the procedure. So can the execution plan of the procedure be different for different tables?
"Keep Trying"
February 24, 2009 at 7:36 am
The last statement is accurate. Since you are passing table names in, there will be no plan reuse. Plus, the EXECUTE statement is firing off different procs, so again no reuse for the dynamic SQL. EXEC () is the key to this query.
February 24, 2009 at 10:51 am
Actually - SQL 2005 is smart enough to do statement-level recompiles. So - if your procedure does a bunch of things, one of which happens to be done through dynamic SQL, it's possible for the REST of the procedure's plan to be compiled and reused, while the specific statement gets recompiled (if things even change enough to invalidate the last run).
----------------------------------------------------------------------------------
Your lack of planning does not constitute an emergency on my part... unless you're my manager... or a director and above... or a really loud-spoken end-user... All right - what was my emergency again?
February 25, 2009 at 2:20 am
In this case the execution of the procedure is not done through sp_executesql. So will this code recompile on every execution?
"Keep Trying"
February 25, 2009 at 4:55 am
Pretty much. Some folks actually use it to get around "parameter sniffing" problems.
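A minimal sketch of that workaround (hypothetical table and column). Because the value is baked into the string as a literal, each distinct value gets its own plan, compiled for that actual value instead of one sniffed for some earlier parameter:

DECLARE @Status Int, @sql NVarchar(200)
SET @Status = 2
-- embedding the literal makes the statement text unique to the value,
-- so the optimizer compiles for this value rather than reusing a plan
-- built for a different one
SET @sql = N'SELECT * FROM dbo.Orders WHERE StatusID = ' + CONVERT(NVarchar(10), @Status)
EXEC (@sql)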
--Jeff Moden
Change is inevitable... Change for the better is not.