Derived Column

  • Hi guys :-)

    I have a package that has a conditional split and then a derived column. It runs fine for 165,000 records but then fails.

    The error is as follows:

    [Derived Column [45]] Error: The "component "Derived Column" (45)" failed because error code 0xC0049067 occurred, and the error row disposition on "output column "Field13" (1443)" specifies failure on error. An error occurred on the specified object of the specified component.

    I have tried deleting the fields in the derived column task after Field10, and then it works fine.

    Any help would be much appreciated.

  • It would be helpful if you posted the expression(s) that you use in the derived column.

    For some reason, a row has some invalid data that makes the derived column fail.

    Need an answer? No, you need a question
    My blog at https://sqlkover.com.
    MCSE Business Intelligence - Microsoft Data Platform MVP

  • Hi, thanks for your reply.

    I have added an attachment to this post.

    The text file I am trying to import is a single-line CSV, so as you can see in the attachment I am splitting the rows on the commas. I have tried deleting the row it fails on and comparing it to a row that was imported successfully, and it is exactly the same.

    The only things I remember changing since the last time it ran successfully are the IsolationLevel and the TransactionOption. Would these have an effect on the task, and if so, what should I set the two options to? 😉

  • I'm guessing there is a data type issue. I would suggest setting up an error output to a text file, and maybe a data viewer on the error path.

  • When I see your derived column in the attachment, I keep thinking:

    there must be a better way, like parsing the .csv file in .NET or writing the data to a staging table and then splitting it using T-SQL. There are numerous code samples on the net on how to do this. It will be easier to read, easier to maintain, and less error-prone.
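
    For example, something like this (just a sketch, with made-up object names dbo.RawImport and RawLine, and assuming the raw lines are bulk-loaded into SQL Server first; the ordinal flag on STRING_SPLIT needs SQL Server 2022 or later, and on an older version you would use a tally-table or XML splitter instead):

        -- Staging table: one unsplit CSV line per row (names are illustrative only).
        CREATE TABLE dbo.RawImport
        (
            RowId   int IDENTITY(1, 1) PRIMARY KEY,
            RawLine varchar(4000) NOT NULL
        );

        -- A sample 13-field line, as the flat-file load would insert it.
        INSERT INTO dbo.RawImport (RawLine)
        VALUES ('a,b,c,d,e,f,g,h,i,j,k,l,m');

        -- Split each line on the comma and pivot the pieces back into named columns.
        SELECT r.RowId,
               MAX(CASE f.ordinal WHEN  1 THEN f.value END) AS Field1,
               MAX(CASE f.ordinal WHEN  2 THEN f.value END) AS Field2,
               MAX(CASE f.ordinal WHEN 13 THEN f.value END) AS Field13
        FROM dbo.RawImport AS r
        CROSS APPLY STRING_SPLIT(r.RawLine, ',', 1) AS f
        GROUP BY r.RowId;

    Once the pieces are sitting in real columns you can validate or reject them with plain T-SQL before they go anywhere near the destination table.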

    Need an answer? No, you need a question
    My blog at https://sqlkover.com.
    MCSE Business Intelligence - Microsoft Data Platform MVP

  • I tend to agree with you. A simple .Split()-style function would push the data into an array, and then you could test for empty 'cells'. The issue I kept thinking about was: what if a given row doesn't have more than 9 delimiters (i.e. only 10 fields)? How would the transform handle that right now? Error out, most likely?
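
    If the lines were staged in a table first (the same sort of made-up dbo.RawImport table sketched above), a quick T-SQL check along these lines would flag the short rows before anything tries to split them. It assumes a good line has 13 fields and therefore 12 commas:

        -- Lines whose comma count is not the expected 12 (13 fields) get flagged
        -- for inspection instead of blowing up the transform.
        SELECT r.RowId,
               r.RawLine,
               LEN(r.RawLine) - LEN(REPLACE(r.RawLine, ',', '')) AS CommaCount
        FROM dbo.RawImport AS r
        WHERE LEN(r.RawLine) - LEN(REPLACE(r.RawLine, ',', '')) <> 12;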

    Steve.

  • I would definitely set up an error destination, even if it's a text field in another table, and change your transformation to send the errors there.

    I have to process files like this myself, except they are fixed-length with multiple occurrences of the "child" records in a single line. Lots of fun to process.

    In your case, since you have a delimiter, why not have the flat-file connection split out the fields for you by setting the column delimiter there? That seems simpler than your transformation.

    Chris

    Learning something new on every visit to SSC. Hoping to pass it on to someone else.

  • I think this is a case of error handling. Please set up an error output and you will see why it's failing.

  • It appears as though you may have a line that doesn't have as many fields as you think, so once you get past Field10 there are no more "," characters in the line, and therefore the SUBSTRING function fails; you may be reading past the end of the line.
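
    Roughly what I suspect is happening, shown in T-SQL rather than the SSIS expression language (the values are made up, and I'm assuming the derived column builds each field from the comma positions with FINDSTRING and SUBSTRING, but the arithmetic is the same idea):

        -- Only 10 fields in this sample line, so the 10th and 11th commas do not exist
        -- and a "find the comma" lookup returns 0 for both.
        DECLARE @line    varchar(4000) = 'a,b,c,d,e,f,g,h,i,j';
        DECLARE @comma10 int = 0;   -- position of the 10th comma: not found
        DECLARE @comma11 int = 0;   -- position of the 11th comma: not found

        -- Unguarded expression for Field11: the length works out to 0 - 0 - 1 = -1 and
        -- SUBSTRING raises "Invalid length parameter passed to the LEFT or SUBSTRING function."
        -- SELECT SUBSTRING(@line, @comma10 + 1, @comma11 - @comma10 - 1) AS Field11;

        -- Guarded version: a missing delimiter means "no such field" (or take the rest of the line).
        SELECT CASE WHEN @comma10 = 0 THEN NULL
                    ELSE SUBSTRING(@line, @comma10 + 1,
                                   ISNULL(NULLIF(@comma11, 0), LEN(@line) + 1) - @comma10 - 1)
               END AS Field11;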
