June 25, 2010 at 12:23 pm
How to create 3 separate rows from comma-separated values
For example -
DECLARE @T VARCHAR(10)
SET @T = '1,2,3'
Result needed
SELECT * FROM @T
Output
1
2
3
June 25, 2010 at 12:58 pm
First, you need a delimited split function:
IF OBJECT_ID('dbo.DelimitedSplit') IS NOT NULL DROP FUNCTION dbo.DelimitedSplit
GO
CREATE FUNCTION [dbo].[DelimitedSplit] (
@list varchar(max),
@Delimiter char(1)
)
RETURNS TABLE
AS
RETURN
-- first, need to break down into separate items. See Jeff Moden's article:
-- The "Numbers" or "Tally" Table: What it is and how it replaces a loop.
-- at http://www.sqlservercentral.com/articles/T-SQL/62867/
-- for how a tally table can split strings apart.
-- You really should build your own tally table for the best performance.
WITH
-- if you have your own tally table, omit the following CTEs.
Tens (N) AS (SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL
SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL
SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 ),
Thousands (N) AS (SELECT 1 FROM Tens t1 CROSS JOIN Tens t2 CROSS JOIN Tens t3),
Millions (N) AS (SELECT 1 FROM Thousands t1 CROSS JOIN Thousands t2),
Tally (N) AS (SELECT ROW_NUMBER() OVER (ORDER BY (SELECT 0)) FROM Millions),
-- if you have your own tally table, omit the preceding CTEs
ItemSplit (ItemOrder, Item) AS (
SELECT N,
RTRIM(LTRIM(SUBSTRING(@Delimiter + @list + @Delimiter,N+1,
CHARINDEX(@Delimiter,@Delimiter + @list + @Delimiter,N+1)-N-1)))
-- if you have your own tally table, schema qualify it in the following line
FROM Tally
WHERE N < LEN(@Delimiter + @list + @Delimiter)
AND SUBSTRING(@Delimiter + @list + @Delimiter,N,1) = @Delimiter
)
SELECT ItemID = ROW_NUMBER() OVER (ORDER BY ItemOrder),
Item
FROM ItemSplit
GO
And here's an example of how to use it:
DECLARE @test TABLE (
ID int IDENTITY,
Col1 varchar(max)
)
-- make some comma-delimited sample data
INSERT INTO @test
SELECT 'BOB,Joe,Fred,John' UNION ALL
SELECT 'Jim,Billy,Greg,Laura' UNION ALL
SELECT 'Tim,Clinton,Sarah,Amanda' UNION ALL
SELECT 'Sam,Sandy'
SELECT OriginalRow = t1.ID,
ItemID = ds.ItemID,
ds.Item
FROM @test t1
CROSS APPLY dbo.DelimitedSplit(t1.Col1, ',') ds
ORDER BY OriginalRow, ItemID
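Applied to the original question, a quick sketch:
DECLARE @T VARCHAR(10)
SET @T = '1,2,3'
SELECT Item
FROM dbo.DelimitedSplit(@T, ',')
-- returns 1, 2, 3 as three separate rows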
Wayne
Microsoft Certified Master: SQL Server 2008
Author - SQL Server T-SQL Recipes
June 25, 2010 at 2:03 pm
Run this...
CREATE FUNCTION dbo.fn_Split
(
@String nvarchar(max)
,@Del char(1)
)
RETURNS @ValueTable TABLE
(
Value nvarchar(4000) -- items are returned as strings; CAST if you need ints
)
AS
BEGIN
DECLARE @NextPos int
,@Field nvarchar(4000)
-- append the delimiter so the last item is captured too
SET @String = @String + @Del
WHILE LEN(@String) > 1
BEGIN
SET @NextPos = CHARINDEX(@Del, @String)
IF @NextPos > 0
BEGIN
SET @Field = LEFT(@String, @NextPos - 1)
INSERT INTO @ValueTable (Value) SELECT LTRIM(RTRIM(@Field))
SET @String = RIGHT(@String, LEN(@String) - @NextPos)
IF LEN(@String) = 1
BREAK;
END
END
RETURN
END
-- TO TEST THIS CODE, RUN THE STATEMENT BELOW.
-- You can pass any delimiter, like : , ; " / | \ , as the 2nd parameter of the function. In the example below I've passed a comma....
select * from dbo.fn_Split ('1,2,3,4,5,6,',',')
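For instance, with a pipe as the delimiter (a quick sketch):
select * from dbo.fn_Split ('1|2|3|4|5|6', '|')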
FO
June 28, 2010 at 12:08 am
Here is the split function with a single query doing all the splitting. It uses a recursive common table expression to split the string:
create function fn_Split
(
@String varchar(max)
,@Delimiter char(1)
)
returns table
as
return
with cte_Split as
(
select
IsLastValue = case when charindex(@Delimiter, @String) = 0 then 1 else 0 end,
StringValue = case
when charindex(@Delimiter, @String) = 0 then ''
else substring(@String, charindex(@Delimiter, @String) + 1, datalength(@String))
end,
Item = case
when charindex(@Delimiter, @String) = 0 then @String
else left(@String, charindex(@Delimiter, @String) - 1)
end
union all
select
IsLastValue = case when charindex(@Delimiter, StringValue) = 0 then 1 else 0 end,
StringValue = case
when charindex(@Delimiter, StringValue) = 0 then StringValue
else substring(StringValue, charindex(@Delimiter, StringValue) + 1, datalength(StringValue))
end,
Item = case
when charindex(@Delimiter, StringValue) = 0 then StringValue
else left(StringValue, charindex(@Delimiter, StringValue) - 1)
end
from
cte_Split
where
IsLastValue = 0
)
select Item from cte_Split;
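One caveat worth calling out: a recursive CTE is subject to the default MAXRECURSION limit of 100, so a string with more than about 100 items will fail unless the caller adds the hint. A usage sketch:
select Item
from dbo.fn_Split('1,2,3', ',')
option (maxrecursion 0); -- lifts the default 100-level recursion limit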
June 28, 2010 at 1:01 pm
Wayne's solution above is by far the most efficient of the 3, especially if you already have a tally table set up. (Not sure why inferior suggestions were posted after it.)
And it can't be said enough: splitting strings with a WHILE loop results in poor performance - please don't do it.
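If you want to check that claim yourself, here's a rough timing sketch (it assumes the tally-based DelimitedSplit and the WHILE-loop fn_Split from the earlier posts are both installed):
-- 10,000 rows of comma-delimited strings
DECLARE @sample TABLE (ID int IDENTITY, Col1 varchar(max))
INSERT INTO @sample (Col1)
SELECT TOP (10000) 'BOB,Joe,Fred,John,Jim,Billy,Greg,Laura'
FROM sys.all_columns a CROSS JOIN sys.all_columns b

SET STATISTICS TIME ON
-- set-based tally split
SELECT s.ID, ds.ItemID, ds.Item
FROM @sample s
CROSS APPLY dbo.DelimitedSplit(s.Col1, ',') ds
-- WHILE-loop split: expect noticeably higher CPU and elapsed time
SELECT s.ID, v.Value
FROM @sample s
CROSS APPLY dbo.fn_Split(s.Col1, ',') v
SET STATISTICS TIME OFF
-- (for cleaner numbers, dump the results into a temp table instead of the client)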
June 28, 2010 at 2:22 pm
@Wayne: any specific reason not to use the VARCHAR(8000) version (aka dbo.DelimitedSplit8K) in this case?
And I second bteraberry: the other approaches belong on the "not recommended" list. 😉
June 28, 2010 at 3:44 pm
lmu92 (6/28/2010)
@Wayne: any specific reason not to use the VARCHAR(8000) version (aka dbo.DelimitedSplit8K) in this case?
Lutz, I'm just giving you a chance to show a better approach. 😀
Seriously, I prefer to err on the side of not having errors. If you know for a fact that you won't be exceeding 8000 characters in the input string, then the 8K version would be the preferable method.
And I second bteraberry: the other approaches belong on the "not recommended" list. 😉
I was taken aback by seeing a while loop and a recursive CTE as solutions.
Now if the DelimitedSplit8k had been posted, that would have been a worthwhile addition.
Wayne
Microsoft Certified Master: SQL Server 2008
Author - SQL Server T-SQL Recipes
June 28, 2010 at 3:54 pm
WayneS (6/28/2010)
lmu92 (6/28/2010)
@Wayne: any specific reason not to use the VARCHAR(8000) version (aka dbo.DelimitedSplit8K) in this case?
Lutz, I'm just giving you a chance to show a better approach. 😀
Seriously, I prefer to err on the side of not having errors. If you know for a fact that you won't be exceeding 8000 characters in the input string, then the 8K version would be the preferable method.
And I second bteraberry: the other approaches belong on the "not recommended" list. 😉
I was taken aback by seeing a while loop and a recursive CTE as solutions.
Now if the DelimitedSplit8k had been posted, that would have been a worthwhile addition.
Ok, kinda funny I had to go back to this post within just a few hours...
The first person who "forced" me to do it actually created the function and the second person (guess who...) posted it... 😀
Anyway, here it is:
CREATE FUNCTION dbo.DelimitedSplit8K
/***************************************************************************************************
Purpose:
Split a given string at a given delimiter and return a list of the split elements (items).
Usage Example:
SELECT *
FROM dbo.DelimitedSplit8K(@StringToSplit, @Delimiter)
Returns:
iTVF containing the following:
ItemNumber = Element position of Item as a BIGINT (not converted to INT to eliminate a CAST)
Item = Element value as a VARCHAR(8000)
Notes:
1. Optimized for VARCHAR(8000) or less.
2. Optimized for single character delimiter.
3. Optimized for use with CROSS APPLY.
4. Does not "trim" elements just in case leading or trailing blanks are intended.
5. cteTally concept originally by Itzik Ben-Gan and "decimalized" by Lynn Pettis (and others) for a
bit of extra speed and finally redacted by Jeff Moden for a different slant on readability and
compactness. Extra speed realized (cuts out 9 SELECT/UNION ALL's) with UNPIVOT thanks to
Gianluca Sartori.
6. If you don't know how a Tally table can be used to replace loops, please see the following...
http://www.sqlservercentral.com/articles/T-SQL/62867/
Revision History:
Rev 00 - 20 Jan 2010 - Jeff Moden
- Base 10 redaction for CTE.
Rev 01 - 08 Mar 2010 - Jeff Moden
- Changed UNION ALL to UNPIVOT for bit of extra speed.
Rev 02 - 13 Mar 2010 - Jeff Moden
- Removed one additional concatenation and one subtraction from the SUBSTRING in the
SELECT List.
***************************************************************************************************/
--===== Define I/O parameters
(
@pString VARCHAR(8000),
@pDelimiter CHAR(1)
)
RETURNS TABLE
AS
RETURN
WITH --"Inline" CTE Driven "Tally Table" produces values up to 10K... enough to cover VARCHAR(8000).
E1(N) AS ( --=== Create Ten 1's very quickly
SELECT N
FROM (SELECT 1 N0, 1 N1, 1 N2, 1 N3, 1 N4, 1 N5, 1 N6, 1 N7, 1 N8, 1 N9) AS E0
UNPIVOT (N FOR Nx IN (N0, N1, N2, N3, N4, N5, N6, N7, N8, N9)) AS unpvt
), --10
E2(N) AS (SELECT 1 FROM E1 a, E1 b), --100
E4(N) AS (SELECT 1 FROM E2 a, E2 b), --10,000
cteTally(N) AS (SELECT ROW_NUMBER() OVER (ORDER BY N) FROM E4)
--===== Do the split
SELECT ROW_NUMBER() OVER (ORDER BY N) AS ItemNumber,
SUBSTRING(@pString, N, CHARINDEX(@pDelimiter, @pString + @pDelimiter, N) - N) AS Item
FROM cteTally
WHERE N < LEN(@pString) + 2
AND SUBSTRING(@pDelimiter + @pString, N, 1) = @pDelimiter
;
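And a quick usage sketch against table data, same pattern as the earlier example:
DECLARE @sample TABLE (ID int IDENTITY, Col1 varchar(8000))
INSERT INTO @sample
SELECT '1,2,3' UNION ALL
SELECT 'BOB,Joe,Fred'

SELECT s.ID, split.ItemNumber, split.Item
FROM @sample s
CROSS APPLY dbo.DelimitedSplit8K(s.Col1, ',') split
ORDER BY s.ID, split.ItemNumber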
June 28, 2010 at 5:31 pm
lmu92 (6/28/2010)
Ok, kinda funny I had to go back to this post within just a few hours...
The first person who "forced" me to do it actually created the function and the second person (guess who...) posted it... 😀
:-D:-D:-D
Did you see this reply I made to that other post where you had to find it?
Wayne
Microsoft Certified Master: SQL Server 2008
Author - SQL Server T-SQL Recipes
June 28, 2010 at 11:43 pm
Thanks for everyone's contributions. I wasn't the one asking the questions, but I learned a lot from the post.
Wayne's solution above is by far the most efficient of the 3, especially if you already have a tally table set up. (Not sure why inferior suggestions were posted after it.)
It's very simple: I did not know my solution was inferior. So, I set up an assortment of tests to compare the three solutions and I can now confirm that you are correct: the first solution is most efficient.
Also, I have never seen the first solution before. Now I have another tool in my development toolbox. :-)
And I second bteraberry: the other approaches belong on the "not recommended" list.
Where can I find this "not recommended" list?
June 29, 2010 at 12:16 am
gideon.kahl (6/28/2010)
I wasn't the one asking the questions, but I learned a lot from the post.
It's always the case here on SSC: you learn not only by asking but also by reading others' posts!
And I second bteraberry: the other approaches belong on the "not recommended" list.
Where can I find this "not recommended" list?
There is no pre-defined "not recommended" list as such; generally, a solution that "loops" can be rewritten using a Tally Table, which will beat loopers like WHILE and CURSOR by a mile!!
🙂
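For reference, building a permanent tally table per Jeff Moden's article linked above only takes a moment (a sketch; the clustered primary key at 100% fill factor is what makes it fast):
-- one-time setup: a physical one-million-row tally table
SELECT TOP (1000000)
N = IDENTITY(INT, 1, 1)
INTO dbo.Tally
FROM sys.all_columns a
CROSS JOIN sys.all_columns b

ALTER TABLE dbo.Tally
ADD CONSTRAINT PK_Tally_N PRIMARY KEY CLUSTERED (N) WITH FILLFACTOR = 100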
June 29, 2010 at 10:25 am
WayneS (6/28/2010)
lmu92 (6/28/2010)
Ok, kinda funny I had to go back to this post within just a few hours...
The first person who "forced" me to do it actually created the function and the second person (guess who...) posted it... 😀
:-D:-D:-D
Did you see this reply I made to that other post where you had to find it?
I sure did. And I've seen Jeff's latest version. Amazing how much fine-tuning is done to deal with a denormalized data format that shouldn't be there in the first place (in some, maybe most cases...) :-D:-):-D Awesome job, Jeff (if you happen to see this post)!!!
June 29, 2010 at 10:36 am
gideon.kahl (6/28/2010)
Thanks for everyone's contributions. I wasn't the one asking the questions, but I learned a lot from the post.
Wayne's solution above is by far the most efficient of the 3, especially if you already have a tally table set up. (Not sure why inferior suggestions were posted after it.)
It's very simple: I did not know my solution was inferior. So, I set up an assortment of tests to compare the three solutions and I can now confirm that you are correct: the first solution is most efficient.
Also, I have never seen the first solution before. Now I have another tool in my development toolbox. :-)
And I second bteraberry: the other approaches belong on the "not recommended" list.
Where can I find this "not recommended" list?
I'm sorry, Gideon!! I should have written imaginary "not recommended" list. I wish there were an "official list", but unfortunately there is usually an "it depends" clause behind almost every recommendation... The best you can do is start your own list of do's and don'ts, filled with solutions you've stumbled across and consider useful. And whenever you test new solutions against your current tool set, include a load test, e.g. one based on a million rows.
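For example, a million-row test set is quick to generate (a sketch, reusing the sample strings from earlier in the thread):
-- 1,000,000 rows of sample CSV data for a load test
SELECT TOP (1000000)
ID = IDENTITY(INT, 1, 1),
Col1 = 'BOB,Joe,Fred,John,Jim,Billy,Greg,Laura'
INTO dbo.CsvTest
FROM sys.all_columns a
CROSS JOIN sys.all_columns b
-- then time each candidate splitter against dbo.CsvTest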
June 29, 2010 at 12:01 pm
gideon.kahl (6/28/2010)
Thanks for everyone's contributions. I wasn't the one asking the questions, but I learned a lot from the post.
I've learned the most not from the questions asked, but invariably from the discussion that follows...
It's very simple: I did not know my solution was inferior. So, I set up an assortment of tests to compare the three solutions and I can now confirm that you are correct: the first solution is most efficient.
... and this is one of the ways that I've learned - by trying out the different ways and checking the results for myself.
Out of curiosity, did you use the first solution as it's written, or did you follow the recommendation to build your very own tally table with a clustered index (100% FillFactor) and run the code against that?
Also, I have never seen the first solution before. Now I have another tool in my development toolbox.:-)
Good, this has now helped more than one person. Plus, if you've read the discussion, you should be aware that there is another solution out there that is very similar to the first solution but is for smaller strings (8000 bytes or less) and is even more efficient.
Wayne
Microsoft Certified Master: SQL Server 2008
Author - SQL Server T-SQL Recipes
June 29, 2010 at 8:42 pm
lmu92 (6/29/2010)
WayneS (6/28/2010)
lmu92 (6/28/2010)
Ok, kinda funny I had to go back to this post within just a few hours...
The first person who "forced" me to do it actually created the function and the second person (guess who...) posted it... 😀
:-D:-D:-D
Did you see this reply I made to that other post where you had to find it?
I sure did. And I've seen Jeff's latest version. Amazing how much fine-tuning is done to deal with a denormalized data format that shouldn't be there in the first place (in some, maybe most cases...) :-D:-):-D Awesome job, Jeff (if you happen to see this post)!!!
Thanks for the wonderful feedback and compliment, Lutz. Responses like this sure help keep a man going even on bad days.
And, I agree... it IS amazing how much water that's passed under a bridge where there shouldn't have even been a road. 😀
--Jeff Moden
Change is inevitable... Change for the better is not.