June 21, 2005 at 6:26 am
I'm assuming I'll have to write something to do this, but I'm looking for a function that, given the character to search for and the occurrence number, returns the position.
For instance, in a comma-separated string I'd like to return the position of the 15th comma ",".
RB,PO ,PO ,1,2005-06-20 00:00:00,3,IV,1,0,1,0,1,1,0,0, 115307, 115307,1,17,,,,,,,,,P,,,,,,0,1
55 in the example above.
Then I can parse the data I want from the string. For instance, using SUBSTRING with the position of the 15th occurrence of the comma + 1 and the 16th occurrence - 1, I can extract " 115307".
Anyone know of a built-in function to do this? I can use CHARINDEX to get to this position, but I think the function I'm suggesting would be a better way of doing it.
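If I do have to roll my own, something along these lines is what I'm picturing (just a sketch; the name dbo.fnNthCharIndex and its parameters are made up, it isn't a built-in):

CREATE FUNCTION dbo.fnNthCharIndex (@vcSearch varchar(100),
                                    @vcString varchar(8000),
                                    @siOccurrence smallint)
RETURNS int
AS
BEGIN
    -- Walk the string with CHARINDEX until the Nth match is found
    DECLARE @iPos int, @siCount smallint
    SET @iPos = 0
    SET @siCount = 0
    WHILE @siCount < @siOccurrence
    BEGIN
        SET @iPos = CHARINDEX(@vcSearch, @vcString, @iPos + 1)
        IF @iPos = 0 RETURN 0    -- fewer than @siOccurrence matches in the string
        SET @siCount = @siCount + 1
    END
    RETURN @iPos    -- position of the Nth occurrence
END

Then SELECT dbo.fnNthCharIndex(',', @vcData, 15), with the string above loaded into @vcData, would give me the position I'm after.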
Thanks
June 21, 2005 at 6:37 am
Have you already looked at the script section?
--
Frank Kalis
Microsoft SQL Server MVP
Webmaster: http://www.insidesql.org/blogs
My blog: http://www.insidesql.org/blogs/frankkalis/
June 21, 2005 at 6:52 am
This is what Frank is talking about :
CREATE FUNCTION [dbo].[Split] (@vcDelimitedString nVarChar(4000),
                               @vcDelimiter       nVarChar(100))
/**************************************************************************
DESCRIPTION: Accepts a delimited string and splits it at the specified
    delimiter points. Returns the individual items as a table data
    type with the ElementID field as the array index and the Element
    field as the data
PARAMETERS:
    @vcDelimitedString - The string to be split
    @vcDelimiter       - String containing the delimiter where
                         delimited string should be split
RETURNS:
    Table data type containing array of strings that were split with
    the delimiters removed from the source string
USAGE:
    SELECT ElementID, Element FROM Split('11111,22222,3333', ',') ORDER BY ElementID
AUTHOR: Karen Gayda
DATE: 05/31/2001
MODIFICATION HISTORY:
    WHO        DATE        DESCRIPTION
    ----------------------------------------------------------------
***************************************************************************/
RETURNS @tblArray TABLE
(
    ElementID smallint IDENTITY(1,1),  -- Array index
    Element   nVarChar(1200)           -- Array element contents
)
AS
BEGIN
    DECLARE
        @siIndex   smallint,
        @siStart   smallint,
        @siDelSize smallint

    SET @siDelSize = LEN(@vcDelimiter)

    -- Loop through source string and add elements to destination table array
    WHILE LEN(@vcDelimitedString) > 0
    BEGIN
        SET @siIndex = CHARINDEX(@vcDelimiter, @vcDelimitedString)
        IF @siIndex = 0
        BEGIN
            INSERT INTO @tblArray (Element) VALUES (@vcDelimitedString)
            BREAK
        END
        ELSE
        BEGIN
            INSERT INTO @tblArray (Element) VALUES (SUBSTRING(@vcDelimitedString, 1, @siIndex - 1))
            SET @siStart = @siIndex + @siDelSize
            SET @vcDelimitedString = SUBSTRING(@vcDelimitedString, @siStart, LEN(@vcDelimitedString) - @siStart + 1)
        END
    END

    RETURN
END
June 21, 2005 at 8:46 am
Thanks guys,
I actually wrote my own function specific to my needs returning just the position and not the table.
June 21, 2005 at 9:04 am
You can still use this function and specify the rank (where Elementid = 5). Then you can keep the original split function and use it for other tasks.
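For example, against the sample row from the first post (just a quick illustration; ElementID 16 is the piece sitting between the 15th and 16th commas):

DECLARE @vcData nVarChar(4000)
SET @vcData = 'RB,PO ,PO ,1,2005-06-20 00:00:00,3,IV,1,0,1,0,1,1,0,0, 115307, 115307,1,17,,,,,,,,,P,,,,,,0,1'

-- Returns " 115307", the element between the 15th and 16th commas
SELECT Element
FROM dbo.Split(@vcData, ',')
WHERE ElementID = 16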
June 22, 2005 at 9:37 am
>> SQL programmers do not look for functions; <<
Can you tell me, then, what do you call these?
LEN
SUBSTRING
REPLACE
all in your code above
* Noel
June 22, 2005 at 9:40 am
I was trying not to piss him off (I know he's right)... but every time I post that function (set version) I have to explain it in detail, and I got tired of it... maybe I'll change my ways from now on.
June 22, 2005 at 10:33 am
Here's another one just for the fun of it :
Select SUBSTRING(@IDS, dtPositions.StartPos, dtPositions.Length) as IDS from
(
Select dtCommas.Commas + 1 as StartPos, MIN(dtCommas2.Commas) - (dtCommas.Commas + 1) as Length from
(Select PkNumber as Commas from dbo.Numbers where PkNumber < LEN(@IDs) and charindex(',', @IDs, PkNumber) = PkNumber) dtCommas
inner join
(Select PkNumber as Commas from dbo.Numbers where PkNumber < LEN(@IDs) and charindex(',', @IDs, PkNumber) = PkNumber) dtCommas2 on dtCommas.Commas < dtCommas2.Commas
GROUP BY dtCommas.Commas
) dtPositions
(slower than the original)
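It assumes the string is sitting in a variable @IDs and that you have a Numbers (tally) table with one row per integer in a PkNumber column. A quick-and-dirty setup for testing, just as an example:

-- One-off tally table assumed by the query above
CREATE TABLE dbo.Numbers (PkNumber int NOT NULL PRIMARY KEY)

DECLARE @i int
SET @i = 1
WHILE @i <= 8000
BEGIN
    INSERT INTO dbo.Numbers (PkNumber) VALUES (@i)
    SET @i = @i + 1
END

-- The string to split
DECLARE @IDs varchar(8000)
SET @IDs = 'RB,PO ,PO ,1,2005-06-20 00:00:00,3,IV,1,0,1,0,1,1,0,0, 115307, 115307,1,17,,,,,,,,,P,,,,,,0,1'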
June 22, 2005 at 10:42 am
Just to explain... I thought that removing all the concatenation part of the work might make this run somewhat faster, but the cost of generating the inequality join + GROUP BY to calculate the start/end positions is exponential. So it's actually much slower if you have a full string of 8000 characters.
June 22, 2005 at 10:43 am
[ I apologize for needling you, Remi... You are in no way that belligerent... ]
I wasn't born stupid - I had to study.
June 22, 2005 at 11:14 am
I know.
I thought for a moment that you were talking about my query, then I got the reference to the older post.