Reading dynamic SQL result set in procedure

  • OK, I've got a stored procedure with a query like this:

    select column1 from table where column2 in ( @local_variable )

where @local_variable contains a comma-delimited list of column2 values. I could not figure out how to do this except by assembling the SQL statement into a string and running it with sp_executesql. That works great in Query Analyzer, and I can see the result set is what I need. But now I can't figure out how the stored procedure can access the result set created by the dynamic SQL statement.
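    For context, a minimal sketch of the sp_executesql workaround described above (table1 and the column names are placeholders standing in for the real schema):

    DECLARE @local_variable varchar(8000)
    DECLARE @sql nvarchar(4000)

    SET @local_variable = '1,2,3'

    -- Splice the CSV list straight into the statement text, then run it
    SET @sql = N'select column1 from table1 where column2 in (' + @local_variable + N')'

    EXEC sp_executesql @sql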

    Thanks!

IF OBJECT_ID('dbo.fnSplit_Set') IS NOT NULL

    DROP FUNCTION dbo.fnSplit_Set

    GO

IF OBJECT_ID('dbo.Numbers') IS NOT NULL

    DROP TABLE dbo.Numbers

    GO

-- Helper table: will hold the integers 1 through 8000 (dude is a throwaway column so the INSERT below has something to fill)
CREATE TABLE dbo.Numbers (PkNumber int identity(1,1), dude bit null, CONSTRAINT Pk_Number PRIMARY KEY CLUSTERED (PkNumber))

    GO

-- 100 x 80 cross join = 8000 rows
INSERT INTO dbo.Numbers (dude)

    SELECT NULL FROM (SELECT TOP 100 NULL AS A FROM master.dbo.spt_values) dt100 cross join (SELECT TOP 80 null AS A FROM master.dbo.spt_values) dt80

    GO

ALTER TABLE dbo.Numbers -- the throwaway column is no longer needed

    DROP COLUMN dude

    GO

    --Select min(PkNumber) as MinA, max(PkNumber) as MaxA from dbo.Numbers

    --1, 8000

    GO

    CREATE FUNCTION [dbo].[fnSplit_Set] (@IDS as varchar(8000), @vcDelimiter varchar(3))

    RETURNS TABLE

    WITH SCHEMABINDING

    AS

Return

-- For each position in the padded string where the delimiter starts, pull out the
-- element that follows it; Rank is the element's 1-based position in the list
Select dtSplitted.EachID, dtSplitted.Rank from (

    SELECT SUBSTRING(@vcDelimiter + @IDs + @vcDelimiter, N.PkNumber + len(@vcDelimiter),

    CHARINDEX(@vcDelimiter, @vcDelimiter + @IDs + @vcDelimiter, N.PkNumber + len(@vcDelimiter)) - N.PkNumber - len(@vcDelimiter)) as EachID

    , (LEN(SUBSTRING(@IDs, 1, N.PkNumber)) - LEN (REPLACE (SUBSTRING(@IDs, 1, N.PkNumber), ',', ''))) + 1 AS Rank

    FROM dbo.Numbers N

    WHERE SUBSTRING(@vcDelimiter + @IDs + @vcDelimiter, N.PkNumber, len(@vcDelimiter)) = @vcDelimiter

AND PkNumber <= LEN(@vcDelimiter + @IDs)

) dtSplitted where len(dtSplitted.EachID) > 0

    GO

-- Quick test: build a random comma-delimited list of 10 object ids
DECLARE @Ids as varchar(8000)

    SET @IDS = ''

    Select @IDS = @IDS + CAST(id as varchar(10)) + ',' from (Select TOP 10 id from dbo.SysObjects order by NEWID()) dtIds

SET @IDS = left(@IDS, ABS(len(@IDS) - 1)) -- trim the trailing comma

    PRINT @IDS

    Select O.id, dtIDS.EachId, O.name, dtIDS.Rank from dbo.SysObjects O inner join (Select CAST(EachID as INT) AS EachID, Rank from dbo.fnSplit_Set (@IDS, ',')) dtIDS on O.id = dtIDS.EachID order by O.Name

    Select O.id, dtIDS.EachId, O.name, dtIDS.Rank from dbo.SysObjects O inner join (Select CAST(EachID as INT) AS EachID, Rank from dbo.fnSplit_Set (@IDS, ',')) dtIDS on O.id = dtIDS.EachID order by dtIDS.Rank

--Don't ever forget to cast the EachID column to the same datatype as the join column, to avoid table scans.

  • Oh, Remi is back from vacation

Now that I have spent some time figuring it out, I might as well post it here. The above code creates a function, fnSplit_Set, that splits a string on a delimiter. In order for this function to be set-based, and thus faster (I think Remi hates loops), he first creates a table, Numbers, containing every number from 1 to 8000, which the function uses. All you have to do is create the table and the function once, and then change your code to something like

    select column1 from table where column2 in ( select EachID from dbo.fnSplit_Set(@local_variable, ','))
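    For illustration, here is roughly how that might look wrapped in a stored procedure. The procedure, table, and column names below are placeholders, and the CAST assumes column2 is an int (see the casting note after the test script above):

    CREATE PROCEDURE dbo.usp_GetByKeys @local_variable varchar(8000)
    AS
    -- No dynamic SQL needed: the function turns the CSV string into a rowset
    -- that IN () can consume directly
    SELECT column1
    FROM table1
    WHERE column2 IN (SELECT CAST(EachID AS int) FROM dbo.fnSplit_Set(@local_variable, ','))
    GO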

     

Wasn't that obvious?

I actually hate loops on SQL Server... and so should everyone else.

  • No, it wasn't obvious to me

Now that I have been around here for a while, I understand why loops should be avoided - it's (mainly) a performance issue. But sometimes you get more readable code when you use loops. I think that's the case with your function above.

You're right about that. However, fnSplit_Set clearly tells what the function is for... and select * from dbo.fnSplit_Set() will make it crystal clear. This is a situation where it might not be all that bad if you don't get it right away.

  • Thanks mucho for the help. 

Is there really no way to process the result set from a dynamic select statement in a stored procedure, which was my original question? That seems like something that should be pretty basic.

I ended up doing the following which, to me, seems a bit easier to understand and does not use a permanent 'numbers' table. I'm not sure whether it would perform as well as your approach, but there will only be a dozen or so individual keys in the CSV string.

     

    Thanks Again!!!

    ---------------------------------------------

    Create FUNCTION [dbo].[fnSplit_CSV] (@keystring varchar(1000))

    RETURNS @keylist TABLE (each varchar(10))

    AS

    begin

declare @startIndex int; set @startIndex = 1

declare @commaIndex int; set @commaIndex = 0

-- Walk the string comma by comma, inserting each element into @keylist
while @commaIndex < len(@keystring)

 begin

  set @commaIndex = charindex(',', @keystring, @startIndex)

  -- No comma left: pretend there is one just past the end of the string
  if @commaIndex = 0 set @commaIndex = len(@keystring) + 1

  insert @keylist select substring(@keystring, @startIndex, @commaIndex - @startIndex)

  set @startIndex = @commaIndex + 1

 end

    return

    end

SELECT * FROM dbo.fnSplit_CSV('123,345,6666,777')

This is the best reference for this problem. In my personal tests I found the set-based solution as much as 10 times faster at parsing the string than that one (especially on larger strings).

Arrays and Lists in SQL Server

The fact of the matter is that you should always look for a set-based approach when you can in SQL Server. Don't let the apparent complexity of this function stop you from using it. It is faster and more flexible. The Numbers table can be pinned in memory for even faster access (if you can spare 32K of RAM).
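    For reference, pinning on SQL Server 2000 was done with DBCC PINTABLE; a sketch (note the command was deprecated in SQL Server 2005 and removed in later versions):

    DECLARE @db int, @obj int

    SELECT @db = DB_ID(), @obj = OBJECT_ID('dbo.Numbers')

    -- Once read, the table's pages stay in the buffer cache until unpinned
    DBCC PINTABLE (@db, @obj)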

The only way I know of to process the result set from a dynamic select statement is to create a temp table, and then execute

    insert into #temptable select (your select statement)

    dynamically. Then you have access to the contents of #temptable after your exec() statement.
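    A minimal sketch of that pattern, with placeholder table and column names; INSERT ... EXEC captures the dynamic statement's result set directly:

    DECLARE @local_variable varchar(8000), @sql varchar(8000)

    SET @local_variable = '1,2,3'

    CREATE TABLE #temptable (column1 int)

    SET @sql = 'select column1 from table1 where column2 in (' + @local_variable + ')'

    -- The dynamic statement's rows land in the temp table...
    INSERT INTO #temptable EXEC (@sql)

    -- ...where the rest of the procedure can use them like any other table
    SELECT column1 FROM #temptable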

In your case, it is probably faster to use a function like the one you created above. Remi's suggestion is even faster, especially with many commas, but your solution is more readable and easier to maintain. If it is a matter of performing a search based on a user-input comma-separated string, I am sure the performance difference is close to non-measurable.

I reran your code to see which one is fastest. As expected, the set-based approach is faster, but I also found that your function doesn't return all the values in the list... you should really check it out.

I can only see that it is working. Could you post an example where it is not working?

How do you compare the methods? Is your method faster on strings with no commas? One comma? Two? I am sure your method is much faster on strings with 100 commas, but I think it would be interesting to compare on strings with few commas.
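    One simple way to run such a comparison, as a sketch: call each function many times over the same string and compare the elapsed milliseconds, varying the comma count between runs.

    DECLARE @start datetime, @i int, @n int, @csv varchar(8000)

    SET @csv = '1,2,3' -- vary the number of commas to test each case

    SET @start = GETDATE()
    SET @i = 0

    WHILE @i < 10000
    BEGIN
     -- swap in dbo.fnSplit_CSV(@csv) to time the loop-based version
     SELECT @n = COUNT(*) FROM dbo.fnSplit_Set(@csv, ',')
     SET @i = @i + 1
    END

    PRINT DATEDIFF(ms, @start, GETDATE())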

Found the problem... I was sending a string of 1,060 characters, and @keystring is declared varchar(1000), so the input was silently truncated. You might want to bump it up to 8K if you don't want that problem to come up later in production. I know you might not think you need more than that at the moment, but you never know what the users will input, or where else that function may serve and blow up the app.

BTW, with fewer than 10 commas the two are still pretty close. But I'd go with the set-based approach anytime because IT'S SET BASED. And performance is always an issue to take seriously. Every little bit counts down the road.
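    The fix is just widening the parameter; shown here as a complete ALTER, with the body unchanged from the function posted earlier in the thread:

    -- varchar(8000) instead of varchar(1000), so long lists are not silently truncated
    ALTER FUNCTION [dbo].[fnSplit_CSV] (@keystring varchar(8000))
    RETURNS @keylist TABLE (each varchar(10))
    AS
    begin
     declare @startIndex int; set @startIndex = 1
     declare @commaIndex int; set @commaIndex = 0
     while @commaIndex < len(@keystring)
      begin
       set @commaIndex = charindex(',', @keystring, @startIndex)
       if @commaIndex = 0 set @commaIndex = len(@keystring) + 1
       insert @keylist select substring(@keystring, @startIndex, @commaIndex - @startIndex)
       set @startIndex = @commaIndex + 1
      end
     return
    end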
