T-SQL text manipulation and recordset iteration

  • I admit I'm a newbie at T-SQL, but I can write code in C#, VBA, etc., so at the moment my processing of CDR (Call Detail Record) text streams is done in Access VBA with SQL Server as a back end. The problem is that it takes a lot of time to process 2000 records, and I believe it would be a lot quicker as a stored procedure. Since I have a handle on all the text processing elements of T-SQL, the question is: what segments of the script are required to:

    1. Read in the records I want to process into a cursor (not recommended apparently) or recordset

    2. Assign each record to a variable (a long string of text separated by | characters)

    3. Split the text into 45 fields of varying types (character text, integer, date and time)

    4. Insert the created 45 values into an existing table

    I'm not looking for someone to write all this, just a basic example using a couple of variables to get me started. The data from a PBX in CDR format is a pain in the *** to work with: we have to convert date formats and time formats, split some strings using MID() and LEN(), etc. I got it all working in Access, but pulling 2000 records across the LAN, processing each one individually, and then writing it back to SQL Server is painfully slow. Each entry has a date/time stamp, so I can pull records by date and time. So, any help or suggestions would be greatly appreciated. I'm learning T-SQL as I go, so I might make some progress in the next couple of days, but it is trial and error. I have a working system in VBA, but I would like to progress, and since most of you are way better at this, you might have a few examples sitting around.

    Rob.

  • Assuming each column in the text is separated by a pipe (|) character, and that the data is located in a text file, you may want to look at the BULK INSERT command in Books Online.

    http://msdn.microsoft.com/en-us/library/ms188365(v=sql.100).aspx
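    For instance, a minimal sketch (the staging table name, field sizes, and file path here are placeholders, not something from your system):

    -- Hypothetical staging table: one appropriately sized character
    -- column per pipe-delimited field (only the first few are shown).
    CREATE TABLE dbo.CDR_Staging (
        fld01 varchar(100) NULL,
        fld02 varchar(100) NULL,
        fld03 varchar(100) NULL
        -- ... continue out to fld45
    );

    -- Placeholder path; '|' matches the CDR field separator.
    BULK INSERT dbo.CDR_Staging
    FROM 'C:\CDR\cdr_export.txt'
    WITH (FIELDTERMINATOR = '|', ROWTERMINATOR = '\n');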

  • Thanks Lynn. The bulk insert works great AFTER I have processed the data. Here is an example of the CDR raw text data to give you an idea of how crappy it is:-)

    |CALL_END |20 |0 |0 |21 |236 |2 |ISDN |TEL |10.207.60.2 |10.207.0.10 |2 |1 |7152753911 |7152753911 |0 |0 |7260 |7260 |99 |g711Ulaw64k |20 |10.207.0.14 |8284 |IP |GWAPP_NORMAL_CALL_CLEAR |0 |3119 |4974 |16 |-1 |3807383372162012121829@10.207.60.2 | 12:18:29.000 UTC Jun 21 2012 | 12:18:29.000 UTC Jun 21 2012 | 12:20:09.000 UTC Jun 21 2012 |0 |0 |0 |0 |-1 |0 |0 | |0 <013><010> [Time: 12:20:09]

    As you can see, working with voice data can be a pain. Look at the time and date in the UTC section. I have to remove UTC, correct the date format, make sure the text is changed to a number for the integers, etc. Not fun.

    Robert.

  • You could use BULK INSERT to load the data into a staging table where all the columns are declared as appropriately sized character data, then use T-SQL to clean up the data as it is moved to its final destination.
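    In outline, the second step might look like this (a sketch only: the staging column positions and target column names are illustrative guesses, not the real CDR layout):

    -- Convert and move in one set-based pass from the all-character
    -- staging table into the typed destination table.
    INSERT INTO dbo.CDR (call_status, channel_num, call_duration)
    SELECT LTRIM(RTRIM(fld02)),               -- e.g. 'CALL_END'
           CAST(LTRIM(RTRIM(fld03)) AS int),  -- character text -> integer
           CAST(LTRIM(RTRIM(fld19)) AS int)
    FROM dbo.CDR_Staging;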

  • The data is IN a table already. I got creative. The Mediant 2000 IP telephony gateway uses syslog to export the data. I used the KIWI syslog server to pump the data directly into a SQL Server database table, so it is a constant stream as opposed to a text file waiting to be processed. Not knowing a great deal about T-SQL is the issue. I figure for a stored procedure I would need to declare all the variables, select the records by date (as in TODAY for the current recordset), get the first record into a variable, process the contents individually, write the contents to the declared variables, then insert the finished variables into my dbo_CDR table (which is currently done via the Access front end), and then loop through the rest of the records. I searched around looking for an example of how this is done but no luck so far, so I thought I would try asking for some help.

    Robert.

  • rjbirkett (6/29/2012)


    The data is IN a table already. I got creative. The Mediant 2000 IP telephony gateway uses syslog to export the data. I used the KIWI syslog server to pump the data directly into a SQL Server database table, so it is a constant stream as opposed to a text file waiting to be processed. Not knowing a great deal about T-SQL is the issue. I figure for a stored procedure I would need to declare all the variables, select the records by date (as in TODAY for the current recordset), get the first record into a variable, process the contents individually, write the contents to the declared variables, then insert the finished variables into my dbo_CDR table (which is currently done via the Access front end), and then loop through the rest of the records. I searched around looking for an example of how this is done but no luck so far, so I thought I would try asking for some help.

    Robert.

    You are thinking procedurally. That will result in a solution that is not scalable. You need to start thinking in sets, and a good way to do that is to ask: what do I want to do to a column?
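    To make that concrete with a hypothetical example (placeholder table and column names): rather than fetching each of the 2000 rows, fixing a value, and writing it back, you describe the fix once and the engine applies it to the whole set.

    -- Row-by-row thinking: open a cursor, fetch, trim, update, repeat.
    -- Column thinking: one statement, one pass over all rows.
    UPDATE dbo.CDR_Staging
    SET fld19 = LTRIM(RTRIM(fld19));  -- trims the column for every row at once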

  • I guess I'm in the wrong forum. The string is in a column. I have to split that string into the necessary fields/data and manipulate it so it will go into a table. I know how to do the string manipulation. I don't know how to process all this in T-SQL. Thanks for replying though. Perhaps someone else will have some ideas, or maybe I should try a different forum. As far as I can see this is a procedure: read the string, split it into variables, process the contents, and write it to a table. Maybe I'm missing your point.

    Robert.

  • So you imported the data as a single column of text instead of splitting it by the column delimiter (|). Each row has the same number of columns, correct? Can you provide me with 3 to 5 records as a series of INSERT INTO statements to a table defined as your current source table? I will show you how to parse the records.

  • I can save you a ton of trouble. Pass your string to this function and it will return a table of all the tokens in order. Then you can select directly from the tokens table into a typed table.

    IF EXISTS (SELECT * FROM dbo.sysobjects WHERE id = OBJECT_ID(N'dbo.StringParse') AND xtype IN (N'FN', N'IF', N'TF'))
        DROP FUNCTION dbo.StringParse
    GO

    CREATE FUNCTION dbo.StringParse(
        @str varchar(max),
        @delimiter varchar(20),
        @emptytonull bit)
    RETURNS @tokens TABLE (rownbr int PRIMARY KEY NOT NULL, token varchar(max) NULL)
    AS
    -- written by Bill Talada
    BEGIN
    /*
    Split a string into tokens and return them in a table.
    An empty string returns one NULL row.
    A string with only a delimiter returns two NULL rows, one before it and one after it.
    A delimiter at the beginning means a NULL value appears before it.
    Two delimiters in a row imply a NULL value between them.
    A delimiter at the end of the string implies a NULL value after it.
    */
    DECLARE
        @delimiterlength int,
        @currptr int,
        @nextptr int,
        @token varchar(max),
        @rownbr int

    SET @delimiterlength = DATALENGTH(@delimiter)
    SET @rownbr = 0
    SET @currptr = 1 - @delimiterlength
    SET @nextptr = CHARINDEX(@delimiter, @str, @currptr + @delimiterlength)

    WHILE @nextptr > 0
    BEGIN
        SET @rownbr = @rownbr + 1
        SET @token = SUBSTRING(@str, @currptr + @delimiterlength, @nextptr - (@currptr + @delimiterlength))
        IF LEN(@token) = 0
        BEGIN
            IF @emptytonull = 1
                SET @token = NULL
            ELSE
                SET @token = ''
        END
        INSERT INTO @tokens VALUES (@rownbr, @token)
        SET @currptr = @nextptr
        SET @nextptr = CHARINDEX(@delimiter, @str, @currptr + @delimiterlength)
    END

    -- last row (the token after the final delimiter)
    SET @rownbr = @rownbr + 1
    SET @token = SUBSTRING(@str, @currptr + @delimiterlength, DATALENGTH(@str) - @currptr + @delimiterlength)
    IF LEN(@token) = 0
    BEGIN
        IF @emptytonull = 1
            SET @token = NULL
        ELSE
            SET @token = ''
    END
    INSERT INTO @tokens VALUES (@rownbr, @token)

    RETURN
    END
    GO
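    A quick usage sketch against a fragment of the CDR record above:

    -- Returns one row per token, in order; with @emptytonull = 1 the empty
    -- tokens before the leading '|' and after the trailing '|' come back as NULL.
    SELECT rownbr, token
    FROM dbo.StringParse('|CALL_END |20 |0 |ISDN |', '|', 1);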

  • I could also use the column names you want these put into. Initially, I would look at just doing it to a table where all the columns are defined as appropriately sized character columns. I like to do things in steps. Makes it easier to figure out what is going wrong if there are problems.

  • Lynn, I did not import the data; it is written to a table via a database connection from the syslog server. That table has 5 columns (since I'm getting confused about columns, rows, and tables: one row has five fields in it), of which one holds the character string for the CDR record; an example of the text was shown above. Only two elements of the table have any meaning: the date/time of the CDR record and the CDR record itself. As I said, I'm a newbie, you are the expert. All I want to do is read in the CDR text, process it, and insert it into a table I created that already works, insofar as VBA code processing via an ADODB connection from Access goes. The reason for all this is simple. A CDR package from a vendor starts around $35k for licenses based on the number of PBX systems and phone lines attached, and there is no package that will process data from syslog OR the Mediant 2000 IP gateway as yet.

    The other response with the script looks promising, but I am still trying to figure out if that will help, since it just puts the segments between the delimiters into a table; I still have to read those in, process the contents into something meaningful, and write it back out to my table. I might be able to figure that out. I guess this is why CDR applications are so expensive:-)

  • Hopefully this helps a little bit.

    This is the delimited split routine used in the code that follows:

    USE [SandBox]
    GO
    /****** Object: UserDefinedFunction [dbo].[DelimitedSplit8K] Script Date: 06/29/2012 11:53:50 ******/
    IF EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'[dbo].[DelimitedSplit8K]') AND type IN (N'FN', N'IF', N'TF', N'FS', N'FT'))
        DROP FUNCTION [dbo].[DelimitedSplit8K]
    GO

    USE [SandBox]
    GO
    /****** Object: UserDefinedFunction [dbo].[DelimitedSplit8K] Script Date: 06/29/2012 11:53:50 ******/
    SET ANSI_NULLS ON
    GO
    SET QUOTED_IDENTIFIER ON
    GO
    CREATE FUNCTION [dbo].[DelimitedSplit8K]
    /**********************************************************************************************************************
    Purpose:
    Split a given string at a given delimiter and return a list of the split elements (items).

    Notes:
    1. Leading and trailing delimiters are treated as if an empty string element were present.
    2. Consecutive delimiters are treated as if an empty string element were present between them.
    3. Except when spaces are used as a delimiter, all spaces present in each element are preserved.

    Returns:
    iTVF containing the following:
    ItemNumber = Element position of Item as a BIGINT (not converted to INT to eliminate a CAST)
    Item       = Element value as a VARCHAR(8000)

    Statistics on this function may be found at the following URL:
    http://www.sqlservercentral.com/Forums/Topic1101315-203-4.aspx

    CROSS APPLY Usage Examples and Tests:
    --=====================================================================================================================
    -- TEST 1:
    -- This tests for various possible conditions in a string using a comma as the delimiter. The expected results are
    -- laid out in the comments.
    --=====================================================================================================================
    --===== Conditionally drop the test tables to make reruns easier for testing.
         -- (this is NOT a part of the solution)
    IF OBJECT_ID('tempdb..#JBMTest') IS NOT NULL DROP TABLE #JBMTest
    ;
    --===== Create and populate a test table on the fly (this is NOT a part of the solution).
         -- In the following comments, "b" is a blank and "E" is an element in the left to right order.
         -- Double quotes are used to encapsulate the output of "Item" so that you can see that all blanks
         -- are preserved no matter where they may appear.
    SELECT *
      INTO #JBMTest
      FROM (                                             --# & type of Return Row(s)
            SELECT  0, NULL                     UNION ALL --1 NULL
            SELECT  1, SPACE(0)                 UNION ALL --1 b (Empty String)
            SELECT  2, SPACE(1)                 UNION ALL --1 b (1 space)
            SELECT  3, SPACE(5)                 UNION ALL --1 b (5 spaces)
            SELECT  4, ','                      UNION ALL --2 b b (both are empty strings)
            SELECT  5, '55555'                  UNION ALL --1 E
            SELECT  6, ',55555'                 UNION ALL --2 b E
            SELECT  7, ',55555,'                UNION ALL --3 b E b
            SELECT  8, '55555,'                 UNION ALL --2 E b
            SELECT  9, '55555,1'                UNION ALL --2 E E
            SELECT 10, '1,55555'                UNION ALL --2 E E
            SELECT 11, '55555,4444,333,22,1'    UNION ALL --5 E E E E E
            SELECT 12, '55555,4444,,333,22,1'   UNION ALL --6 E E b E E E
            SELECT 13, ',55555,4444,,333,22,1,' UNION ALL --8 b E E b E E E b
            SELECT 14, ',55555,4444,,,333,22,1,' UNION ALL --9 b E E b b E E E b
            SELECT 15, ' 4444,55555 '           UNION ALL --2 E (w/Leading Space) E (w/Trailing Space)
            SELECT 16, 'This,is,a,test.'                  --E E E E
           ) d (SomeID, SomeValue)
    ;
    --===== Split the CSV column for the whole table using CROSS APPLY (this is the solution)
    SELECT test.SomeID, test.SomeValue, split.ItemNumber, Item = QUOTENAME(split.Item,'"')
      FROM #JBMTest test
     CROSS APPLY dbo.DelimitedSplit8K(test.SomeValue,',') split
    ;
    --=====================================================================================================================
    -- TEST 2:
    -- This tests for various "alpha" splits and COLLATION using all ASCII characters from 0 to 255 as a delimiter against
    -- a given string. Note that not all of the delimiters will be visible and some will show up as tiny squares because
    -- they are "control" characters. More specifically, this test will show you what happens to various non-accented
    -- letters for your given collation depending on the delimiter you chose.
    --=====================================================================================================================
    WITH
    cteBuildAllCharacters (String,Delimiter) AS
    (
     SELECT TOP 256
            'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789',
            CHAR(ROW_NUMBER() OVER (ORDER BY (SELECT NULL))-1)
       FROM master.sys.all_columns
    )
    SELECT ASCII_Value = ASCII(c.Delimiter), c.Delimiter, split.ItemNumber, Item = QUOTENAME(split.Item,'"')
      FROM cteBuildAllCharacters c
     CROSS APPLY dbo.DelimitedSplit8K(c.String,c.Delimiter) split
     ORDER BY ASCII_Value, split.ItemNumber
    ;
    -----------------------------------------------------------------------------------------------------------------------
    Other Notes:
    1. Optimized for VARCHAR(8000) or less. No testing or error reporting for truncation at 8000 characters is done.
    2. Optimized for single character delimiter. Multi-character delimiters should be resolved externally from this
       function.
    3. Optimized for use with CROSS APPLY.
    4. Does not "trim" elements just in case leading or trailing blanks are intended.
    5. If you don't know how a Tally table can be used to replace loops, please see the following...
       http://www.sqlservercentral.com/articles/T-SQL/62867/
    6. Changing this function to use NVARCHAR(MAX) will cause it to run twice as slow. It's just the nature of
       VARCHAR(MAX) whether it fits in-row or not.
    7. Multi-machine testing for the method of using UNPIVOT instead of 10 SELECT/UNION ALLs shows that the UNPIVOT method
       is quite machine dependent and can slow things down quite a bit.
    -----------------------------------------------------------------------------------------------------------------------
    Credits:
    This code is the product of many people's efforts including but not limited to the following:
    cteTally concept originally by Itzik Ben-Gan and "decimalized" by Lynn Pettis (and others) for a bit of extra speed
    and finally redacted by Jeff Moden for a different slant on readability and compactness. Hats off to Paul White for
    his simple explanations of CROSS APPLY and for his detailed testing efforts. Last but not least, thanks to
    Ron "BitBucket" McCullough and Wayne Sheffield for their extreme performance testing across multiple machines and
    versions of SQL Server. The latest improvement brought an additional 15-20% improvement over Rev 05. Special thanks
    to "Nadrek" and "peter-757102" (aka Peter de Heer) for bringing such improvements to light. Nadrek's original
    improvement brought about a 10% performance gain and Peter followed that up with the content of Rev 07.

    I also thank whoever wrote the first article I ever saw on "numbers tables", which is located at the following URL,
    and Adam Machanic for leading me to it many years ago.
    http://sqlserver2000.databases.aspfaq.com/why-should-i-consider-using-an-auxiliary-numbers-table.html
    -----------------------------------------------------------------------------------------------------------------------
    Revision History:
    Rev 00 - 20 Jan 2010 - Concept for inline cteTally: Lynn Pettis and others.
                           Redaction/Implementation: Jeff Moden
           - Base 10 redaction and reduction for CTE. (Total rewrite)

    Rev 01 - 13 Mar 2010 - Jeff Moden
           - Removed one additional concatenation and one subtraction from the SUBSTRING in the SELECT list for that
             tiny bit of extra speed.

    Rev 02 - 14 Apr 2010 - Jeff Moden
           - No code changes. Added CROSS APPLY usage example to the header, some additional credits, and extra
             documentation.

    Rev 03 - 18 Apr 2010 - Jeff Moden
           - No code changes. Added notes 7, 8, and 9 about certain "optimizations" that don't actually work for this
             type of function.

    Rev 04 - 29 Jun 2010 - Jeff Moden
           - Added WITH SCHEMABINDING thanks to a note by Paul White. This prevents an unnecessary "Table Spool" when
             the function is used in an UPDATE statement even though the function makes no external references.

    Rev 05 - 02 Apr 2011 - Jeff Moden
           - Rewritten for extreme performance improvement especially for larger strings approaching the 8K boundary and
             for strings that have wider elements. The redaction of this code involved removing ALL concatenation of
             delimiters, optimization of the maximum "N" value by using TOP instead of including it in the WHERE clause,
             and the reduction of all previous calculations (thanks to the switch to a "zero based" cteTally) to just one
             instance of one add and one instance of a subtract. The length calculation for the final element (not
             followed by a delimiter) in the string to be split has been greatly simplified by using the ISNULL/NULLIF
             combination to determine when the CHARINDEX returned a 0, which indicates there are no more delimiters to be
             had or to start with. Depending on the width of the elements, this code is between 4 and 8 times faster on a
             single CPU box than the original code, especially near the 8K boundary.
           - Modified comments to include more sanity checks on the usage example, etc.
           - Removed "other" notes 8 and 9 as they were no longer applicable.

    Rev 06 - 12 Apr 2011 - Jeff Moden
           - Based on a suggestion by Ron "BitBucket" McCullough, additional test rows were added to the sample code and
             the code was changed to encapsulate the output in pipes so that spaces and empty strings could be perceived
             in the output. The first "Notes" section was added. Finally, an extra test was added to the comments above.

    Rev 07 - 06 May 2011 - Peter de Heer: a further 15-20% performance enhancement has been discovered and incorporated
             into this code, which also eliminated the need for a "zero" position in the cteTally table.
    **********************************************************************************************************************/
    --===== Define I/O parameters
           (@pString VARCHAR(8000), @pDelimiter CHAR(1))
    RETURNS TABLE WITH SCHEMABINDING AS
    RETURN
    --===== "Inline" CTE Driven "Tally Table" produces values from 0 up to 10,000...
         -- enough to cover NVARCHAR(4000)
    WITH E1(N) AS (
                    SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL
                    SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL
                    SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1
                  ),                                 --10E+1 or 10 rows
         E2(N) AS (SELECT 1 FROM E1 a, E1 b),        --10E+2 or 100 rows
         E4(N) AS (SELECT 1 FROM E2 a, E2 b),        --10E+4 or 10,000 rows max
    cteTally(N) AS (--==== This provides the "base" CTE and limits the number of rows right up front
                        -- for both a performance gain and prevention of accidental "overruns"
                    SELECT TOP (ISNULL(DATALENGTH(@pString),0)) ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) FROM E4
                   ),
    cteStart(N1) AS (--==== This returns N+1 (starting position of each "element" just once for each delimiter)
                     SELECT 1 UNION ALL
                     SELECT t.N+1 FROM cteTally t WHERE SUBSTRING(@pString,t.N,1) = @pDelimiter
                    ),
    cteLen(N1,L1) AS (--==== Return start and length (for use in SUBSTRING)
                      SELECT s.N1,
                             ISNULL(NULLIF(CHARINDEX(@pDelimiter,@pString,s.N1),0)-s.N1,8000)
                        FROM cteStart s
                     )
    --===== Do the actual split. The ISNULL/NULLIF combo handles the length for the final element when no delimiter is found.
    SELECT ItemNumber = ROW_NUMBER() OVER(ORDER BY l.N1),
           Item       = SUBSTRING(@pString, l.N1, l.L1)
      FROM cteLen l
    ;
    GO

    This code splits the data. I used the one record you provided earlier as my sample data and put it into a CTE (Common Table Expression). Where I use this CTE, SampleData, you can remove the CTE code and substitute your actual table in the select at the end. If you only want to do a subset, modify the CTE to select the data you want to work with from the main table.

    with SampleData as (
        select * from (VALUES (
            '|CALL_END |20 |0 |0 |21 |236 |2 |ISDN |TEL |10.207.60.2 |10.207.0.10 |2 |1 |7152753911 |7152753911 |0 |0 |7260 |7260 |99 |g711Ulaw64k |20 |10.207.0.14 |8284 |IP |GWAPP_NORMAL_CALL_CLEAR |0 |3119 |4974 |16 |-1 |3807383372162012121829@10.207.60.2 | 12:18:29.000 UTC Jun 21 2012 | 12:18:29.000 UTC Jun 21 2012 | 12:20:09.000 UTC Jun 21 2012 |0 |0 |0 |0 |-1 |0 |0 | |0 <013><010> [Time: 12:20:09]'
        )) dt (SampleRec)
    )
    select
        sd.SampleRec,
        max(case ds.ItemNumber when 1 then ds.Item else null end),
        max(case ds.ItemNumber when 2 then ds.Item else null end),
        max(case ds.ItemNumber when 3 then ds.Item else null end),
        max(case ds.ItemNumber when 4 then ds.Item else null end),
        max(case ds.ItemNumber when 5 then ds.Item else null end),
        max(case ds.ItemNumber when 6 then ds.Item else null end),
        max(case ds.ItemNumber when 7 then ds.Item else null end),
        max(case ds.ItemNumber when 8 then ds.Item else null end),
        max(case ds.ItemNumber when 9 then ds.Item else null end),
        max(case ds.ItemNumber when 10 then ds.Item else null end),
        max(case ds.ItemNumber when 11 then ds.Item else null end),
        max(case ds.ItemNumber when 12 then ds.Item else null end),
        max(case ds.ItemNumber when 13 then ds.Item else null end),
        max(case ds.ItemNumber when 14 then ds.Item else null end),
        max(case ds.ItemNumber when 15 then ds.Item else null end),
        max(case ds.ItemNumber when 16 then ds.Item else null end),
        max(case ds.ItemNumber when 17 then ds.Item else null end),
        max(case ds.ItemNumber when 18 then ds.Item else null end),
        max(case ds.ItemNumber when 19 then ds.Item else null end),
        max(case ds.ItemNumber when 20 then ds.Item else null end),
        max(case ds.ItemNumber when 21 then ds.Item else null end),
        max(case ds.ItemNumber when 22 then ds.Item else null end),
        max(case ds.ItemNumber when 23 then ds.Item else null end),
        max(case ds.ItemNumber when 24 then ds.Item else null end),
        max(case ds.ItemNumber when 25 then ds.Item else null end),
        max(case ds.ItemNumber when 26 then ds.Item else null end),
        max(case ds.ItemNumber when 27 then ds.Item else null end),
        max(case ds.ItemNumber when 28 then ds.Item else null end),
        max(case ds.ItemNumber when 29 then ds.Item else null end),
        max(case ds.ItemNumber when 30 then ds.Item else null end),
        max(case ds.ItemNumber when 31 then ds.Item else null end),
        max(case ds.ItemNumber when 32 then ds.Item else null end),
        max(case ds.ItemNumber when 33 then ds.Item else null end),
        max(case ds.ItemNumber when 34 then ds.Item else null end),
        max(case ds.ItemNumber when 35 then ds.Item else null end),
        max(case ds.ItemNumber when 36 then ds.Item else null end),
        max(case ds.ItemNumber when 37 then ds.Item else null end),
        max(case ds.ItemNumber when 38 then ds.Item else null end),
        max(case ds.ItemNumber when 39 then ds.Item else null end),
        max(case ds.ItemNumber when 40 then ds.Item else null end),
        max(case ds.ItemNumber when 41 then ds.Item else null end),
        max(case ds.ItemNumber when 42 then ds.Item else null end),
        max(case ds.ItemNumber when 43 then ds.Item else null end),
        max(case ds.ItemNumber when 44 then ds.Item else null end),
        max(case ds.ItemNumber when 45 then ds.Item else null end)
    from
        SampleData sd
        cross apply dbo.DelimitedSplit8K(SampleRec,'|') ds
    group by
        sd.SampleRec;
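    What remains after the split is typing each item. Here is a self-contained sketch of the kinds of conversions involved, using two values from the sample record (the LEFT/SUBSTRING offsets assume the exact ' hh:mm:ss.fff UTC Mon dd yyyy' layout shown above, and the time/date types need SQL Server 2008 or later):

    DECLARE @channel varchar(10) = '20 ';                            -- integer field with trailing space
    DECLARE @setup   varchar(40) = ' 12:18:29.000 UTC Jun 21 2012';  -- combined UTC time/date field

    SELECT CAST(LTRIM(RTRIM(@channel)) AS int)             AS channel_num,     -- 20
           CONVERT(time, LEFT(LTRIM(@setup), 12))          AS callsetup_time,  -- 12:18:29.000
           CONVERT(date, SUBSTRING(LTRIM(@setup), 18, 11)) AS callsetup_date;  -- 2012-06-21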

  • rjbirkett (6/29/2012)


    Lynn, I did not import the data; it is written to a table via a database connection from the syslog server. That table has 5 columns (since I'm getting confused about columns, rows, and tables: one row has five fields in it), of which one holds the character string for the CDR record; an example of the text was shown above. Only two elements of the table have any meaning: the date/time of the CDR record and the CDR record itself. As I said, I'm a newbie, you are the expert. All I want to do is read in the CDR text, process it, and insert it into a table I created that already works, insofar as VBA code processing via an ADODB connection from Access goes. The reason for all this is simple. A CDR package from a vendor starts around $35k for licenses based on the number of PBX systems and phone lines attached, and there is no package that will process data from syslog OR the Mediant 2000 IP gateway as yet.

    The other response with the script looks promising, but I am still trying to figure out if that will help, since it just puts the segments between the delimiters into a table; I still have to read those in, process the contents into something meaningful, and write it back out to my table. I might be able to figure that out. I guess this is why CDR applications are so expensive:-)

    The other delimited split routine shown will be slow compared to the one I provided. Read the comments in the code and, if I remember correctly, it has a link to the article by the author of the code. Read it and the discussion that followed it. You will learn much.

  • Lynn,

    Column names are here for the working dbo_CDR table I'm using now:

    [call_status]
    ,[channel_num]
    ,[sipcall_ident]
    ,[trunk_num]
    ,[b_channel]
    ,[sipconf_id]
    ,[trunk_group_num]
    ,[endpoint_type]
    ,[call_originator]
    ,[source_IP]
    ,[destination_IP]
    ,[sourcephone_numtype]
    ,[sourcephone_numplan]
    ,[sourcephone_number]
    ,[sourcephone_nummap]
    ,[destphone_numtype]
    ,[destphone_numplan]
    ,[destphone_number]
    ,[destphone_nummap]
    ,[call_duration]
    ,[call_codec]
    ,[packet_intervall]
    ,[rtp_IP]
    ,[rtp_port]
    ,[call_release]
    ,[termination_reason]
    ,[fax_data]
    ,[packets_in]
    ,[packets_out]
    ,[packets_lost]
    ,[header_id]
    ,[rtp_id]
    ,[callsetup_time]
    ,[callsetup_date]
    ,[callconnect_time]
    ,[callconnect_date]
    ,[callrelease_time]
    ,[callrelease_date]
    ,[rtp_delay]
    ,[rtp_jitter]
    ,[localrtp_source]
    ,[remotertp_source]
    ,[redirect_reason]
    ,[redirect_number]
    ,[redirect_numplan]

    There are more columns than fields in the test string, BTW, since the UTC time/date fields are split in two, adding 3 extra columns for time/date. Just to complicate things.....

    Rob

  • rjbirkett (6/29/2012)


    Lynn,

    Column names are here for the working dbo_CDR table I'm using now:

    [call_status]
    ,[channel_num]
    ,[sipcall_ident]
    ,[trunk_num]
    ,[b_channel]
    ,[sipconf_id]
    ,[trunk_group_num]
    ,[endpoint_type]
    ,[call_originator]
    ,[source_IP]
    ,[destination_IP]
    ,[sourcephone_numtype]
    ,[sourcephone_numplan]
    ,[sourcephone_number]
    ,[sourcephone_nummap]
    ,[destphone_numtype]
    ,[destphone_numplan]
    ,[destphone_number]
    ,[destphone_nummap]
    ,[call_duration]
    ,[call_codec]
    ,[packet_intervall]
    ,[rtp_IP]
    ,[rtp_port]
    ,[call_release]
    ,[termination_reason]
    ,[fax_data]
    ,[packets_in]
    ,[packets_out]
    ,[packets_lost]
    ,[header_id]
    ,[rtp_id]
    ,[callsetup_time]
    ,[callsetup_date]
    ,[callconnect_time]
    ,[callconnect_date]
    ,[callrelease_time]
    ,[callrelease_date]
    ,[rtp_delay]
    ,[rtp_jitter]
    ,[localrtp_source]
    ,[remotertp_source]
    ,[redirect_reason]
    ,[redirect_number]
    ,[redirect_numplan]

    There are more columns than fields in the test string, BTW, since the UTC time/date fields are split in two, adding 3 extra columns for time/date. Just to complicate things.....

    Rob

    For the first run, since those columns are not delimited internally, I would put the data into one column for now, then do additional work on them after the import. Nothing wrong with doing a divide and conquer routine. Set-based doesn't necessarily mean all in one query.
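    A sketch of what that follow-up work might look like (the staging column names here are hypothetical, and the offsets again assume the sample's ' hh:mm:ss.fff UTC Mon dd yyyy' layout):

    -- The first pass left the combined UTC field in one character column;
    -- the second pass reorders it into a form CONVERT can parse
    -- ('Jun 21 2012 12:18:29.000') and stores a proper datetime.
    UPDATE s
    SET callsetup_dt = CONVERT(datetime,
            SUBSTRING(LTRIM(raw_setup_utc), 18, 11) + ' ' +
            LEFT(LTRIM(raw_setup_utc), 12))
    FROM dbo.CDR_Staging s
    WHERE s.raw_setup_utc IS NOT NULL;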
