March 10, 2010 at 7:04 am
I am having an issue with a Scalar Valued Function and using it in a where clause. Let me show you some code
This returns data as expected:
select *
from Loss_Mit.DM_ER_Portal_Usage_TBL
Where Loan_Num in ('1100123408','1100226532','1127116374')
order by loan_num
This returns NO data:
select *
from Loss_Mit.DM_ER_Portal_Usage_TBL
where Loan_Num in (select dbo.String_Split ('1100123408,1100226532,1127116374'))
order by loan_num;
The output of the function is:
'1100123408','1100226532','1127116374'
Below is the code that makes up the function
ALTER FUNCTION [dbo].[String_Split] (@SSRS_Input nvarchar(2000))
RETURNS nvarchar(2000)
AS
BEGIN
    --- Variables for the function
    DECLARE @Value_String nvarchar(2000);
    DECLARE @Output nvarchar(2000);
    --- Put a quote-comma-quote between each value...
    SET @Value_String = REPLACE(@SSRS_Input, ',', ''',''');
    --- ...and wrap the whole string in quotes
    SET @Output = '''' + @Value_String + '''';
    --- Return the output of the function
    RETURN @Output;
END;
Is there something about Scalar-valued functions that I do not understand? I have also tried this as a Table-Valued function with the same result.
March 10, 2010 at 7:28 am
Your problem is your function; to do a split you must return a TABLE, not an nvarchar string.
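To see why the original query returns nothing: the scalar function hands back one long string, so the IN clause contains a single value, not three. Here's a sketch of the comparison SQL Server effectively performs (the quoted literal below is the function's output from the original post):

```sql
-- What the scalar-function version effectively evaluates:
-- Loan_Num is compared against ONE 36-character literal, not three loan numbers,
-- so no row can ever match.
select *
from Loss_Mit.DM_ER_Portal_Usage_TBL
where Loan_Num = '''1100123408'',''1100226532'',''1127116374''';
```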
try this instead:
select *
from Loss_Mit.DM_ER_Portal_Usage_TBL
where Loan_Num in (select Item from dbo.DelimitedSplit ('1100123408,1100226532,1127116374',','))
order by loan_num;
here's the function, from a Jeff Moden post, reformatted slightly the way I prefer:
CREATE function [dbo].[DelimitedSplit] (
@pString varchar(max),
@pDelimiter char(1)
)
returns table
as
return
with
--- a1: 10 rows of the value 1
a1 as (select 1 as N union all select 1 union all
select 1 union all select 1 union all
select 1 union all select 1 union all
select 1 union all select 1 union all
select 1 union all select 1),
--- a2: 10 x 10 = 100 rows
a2 as (select 1 as N from a1 as a cross join a1 as b),
--- a3: 100 x 100 = 10,000 rows
a3 as (select 1 as N from a2 as a cross join a2 as b),
--- a4: 10,000 x 100 = 1,000,000 rows
a4 as (select 1 as N from a3 as a cross join a2 as b),
--- Tally: the numbers 1 through LEN(@pString), one per character of the input
Tally as (select top (len(@pString)) row_number() over (order by N) as N from a4),
--- ItemSplit: for each delimiter position, grab the substring that follows it
ItemSplit(ItemOrder,Item) as
(
SELECT
N,
SUBSTRING(@pDelimiter + @pString + @pDelimiter,N + 1,CHARINDEX(@pDelimiter,@pDelimiter + @pString + @pDelimiter,N + 1) - N - 1)
FROM Tally
WHERE N < LEN(@pDelimiter + @pString + @pDelimiter)
AND SUBSTRING(@pDelimiter + @pString + @pDelimiter,N,1) = @pDelimiter --Notice how we find the delimiter
)
--- Number the items in the order they appeared in the string
select
row_number() over (order by ItemOrder) as ItemID,
Item
from
ItemSplit
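As a quick sanity check (assuming the function is created as above), splitting the list from the original post returns one row per loan number:

```sql
-- Each comma-separated value comes back as its own row, in order.
select ItemID, Item
from dbo.DelimitedSplit('1100123408,1100226532,1127116374', ',');
-- ItemID  Item
-- 1       1100123408
-- 2       1100226532
-- 3       1127116374
```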
Lowell
March 10, 2010 at 7:46 am
That is perfect and it works great, but I don't understand it. Can you give me a brief walk-through of the code and/or add comments to it so that I understand the body of the function?
Thanks for your time on this; I hate using code that I don't understand.
Jim
March 10, 2010 at 7:59 am
Jeff Moden is the resident king of the Tally Table, and has contributed a number of articles on it here on SSC; a search will turn them up, with some incredible detail.
There are really two pieces to this function: creating a tally table of consecutive values, and then splitting the string into a table by joining against that table of consecutive values.
You need a table of 1 to x values so you can split a string with good performance.
The Tally table starts from an inline table where we select 1 union all 1 and so on until we have 10 rows, then we repeatedly cross join that result against itself under different aliases until we have a huge number of rows: 10 x 10 = 100, 100 x 100 = 10,000, and 10,000 x 100 = 1,000,000.
Then we use the ROW_NUMBER function, capped with TOP (LEN(@pString)), to produce just the values 1 through the length of the string, in order.
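In miniature, assuming nothing beyond standard T-SQL, the same pattern looks like this:

```sql
-- Miniature tally: 2 rows, cross joined to make 2 x 2 = 4 rows,
-- then ROW_NUMBER turns them into the sequence 1..4.
with a1 as (select 1 as N union all select 1),
     a2 as (select 1 as N from a1 as a cross join a1 as b)
select row_number() over (order by N) as N
from a2;  -- returns 1, 2, 3, 4
```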
That Tally table is finally used to split the passed-in string based on the delimiter (a comma, in this case).
This takes advantage of the way set-based operations work in SQL Server; the "hard" row-by-agonizing-row (RBAR) way would be to find a value before a comma, insert it into a table, and repeat until you ran out of commas. If the string had 8K commas, that style would stay in the loop for 8K cycles, where the Tally/set-based approach does it in one operation.
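For contrast, here is a sketch of that RBAR approach (the variable and table names are illustrative, not from the original thread):

```sql
-- RBAR splitter, for contrast only: one loop iteration per item.
declare @s varchar(8000), @pos int;
declare @Items table (Item varchar(8000));
set @s = '1100123408,1100226532,1127116374';

while len(@s) > 0
begin
    set @pos = charindex(',', @s);
    if @pos = 0
    begin
        -- No comma left: the remainder is the last item.
        insert @Items (Item) values (@s);
        set @s = '';
    end
    else
    begin
        -- Peel off everything before the next comma, then advance past it.
        insert @Items (Item) values (left(@s, @pos - 1));
        set @s = substring(@s, @pos + 1, len(@s));
    end
end

select Item from @Items;  -- three rows, one per loop pass
```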
sweet!
If I muddied the waters, ignore me and search for the Tally articles.
Lowell
March 10, 2010 at 8:09 am
Muddy...might as well be Oil....Off to the search engine I go. Thanks for the help on this though, it was a HUGE help.
Jim
March 10, 2010 at 6:45 pm
James A Dionne (3/10/2010)
Muddy...might as well be Oil....Off to the search engine I go. Thanks for the help on this though, it was a HUGE help. Jim
Heh... understood. So, let's peel just one potato at a time. See the following article for how a Tally table can be used to replace certain loops. It also talks specifically about how to split parameters using a Tally table. Then we'll talk about how the cascading CTEs that Lowell posted replace the Tally table.
http://www.sqlservercentral.com/articles/T-SQL/62867/
--Jeff Moden
Change is inevitable... Change for the better is not.
March 10, 2010 at 8:25 pm
A couple of points about string-splitting:
1. It is normally best to avoid storing the data this way in the first place.
2. The fastest solutions use a .NET implementation via SQL Server's CLR hosting feature
Performance tests: http://florianreischl.blogspot.com/2009/09/high-performance-string-split-functions.html
Ready-to-run code: http://sqlblog.com/blogs/adam_machanic/archive/2009/04/28/sqlclr-string-splitting-part-2-even-faster-even-more-scalable.aspx
Paul
Paul White
SQLPerformance.com
SQLkiwi blog
@SQL_Kiwi
March 10, 2010 at 9:07 pm
True enough. But if you can't or won't use a CLR solution, one of the myriad Tally table solutions will do the job very quickly as well, especially in the VARCHAR(8000) world.
VARCHAR(MAX) solutions tend to slow down with Tally tables because (to oversimplify) things really don't like doing joins with VARCHAR(MAX). This is where a CLR really shines.
--Jeff Moden
Change is inevitable... Change for the better is not.
March 10, 2010 at 11:18 pm
Jeff Moden (3/10/2010)
True enough. But if you can't or won't use a CLR solution, one of the myriad Tally table solutions will do the job very quickly as well especially in the VARCHAR(8000) world.
True, but they do require a Tally table. If you can't or won't use a Tally table...etc 😛
I have little time for those who will not enable CLR integration on general principle. There are many circumstances where a CLR implementation is the natural solution, so limiting oneself to interpreted T-SQL seems dumb.
Jeff Moden (3/10/2010)
VARCHAR(MAX) solutions tend to slow down with Tally tables because (to over simplify) things really don't like doing joins with VARCHAR(MAX). This is where a CLR really shines.
Not really. The test results show that CLR outperforms the Tally in every case, regardless of VARCHAR, NVARCHAR, MAX or not, except on very small strings (10 characters in Flo's tests).
The primary use case for string splitting does not seem to me to involve ten-character strings! 😀
Finally, very little is made by the T-SQL crew of the comparative CPU and memory use of both methods. The CLR solution is sufficiently easier on CPU to overwhelm the overhead of passing each data item to the hosted process and back for each row. This is primarily because .NET code is pre-compiled and highly optimized to native (machine) code.
Memory use for the CLR solution is absolutely minimal, of the order of the byte size of a single data item. T-SQL solutions use orders-of-magnitude more memory (from the Buffer Pool) to hold the data, process the joins and perform any sorts or hashing operations that might be required. Just because Buffer Pool memory appears inexhaustible and 'for-free' doesn't make it so.
Paul
Paul White
SQLPerformance.com
SQLkiwi blog
@SQL_Kiwi
March 10, 2010 at 11:43 pm
Oh, I agree that a CLR would be the thing to do here. No question about that. It's just that not all Developers (or even DBAs) have a say as to whether or not SQLCLR can be enabled. In cases where there is no choice because of some ridiculous edict by management (or the DBA), the Developer has to cough up a T-SQL solution and get the job done.
--Jeff Moden
Change is inevitable... Change for the better is not.
March 10, 2010 at 11:59 pm
Jeff Moden (3/10/2010)
Oh, I agree that a CLR would be the thing to do here. No question about that. It's just that not all Developers (or even DBAs) have a say as to whether or not SQLCLR can be enabled. In cases where there is no choice because of some ridiculous edict by management (or the DBA), the Developer has to cough up a T-SQL solution and get the job done.
Agreed 🙂
Paul White
SQLPerformance.com
SQLkiwi blog
@SQL_Kiwi