Error in Data Flow Task

  • Hi

    I got the error below:

    Error: 0xC02020C4 at MISPCST txt to MISPCST, Venkat [1]: The attempt to add a row to the Data Flow task buffer failed with error code 0xC0047020.

    Error: 0xC0047038 at MISPCST txt to MISPCST, SSIS.Pipeline: SSIS Error Code DTS_E_PRIMEOUTPUTFAILED. The PrimeOutput method on component "Venkat" (1) returned error code 0xC02020C4. The component returned a failure code when the pipeline engine called PrimeOutput(). The meaning of the failure code is defined by the component, but the error is fatal and the pipeline stopped executing. There may be error messages posted before this with more information about the failure.

    I have a Flat File source and an OLE DB destination, and I created a Data Conversion transformation to match the source column lengths to the destination column lengths.

    My file has 8976 records in total, but only 4488 loaded successfully.

    Please help me.

    Venkat

  • Check line 4489 for an extra delimiter character. (One way to scan the whole file for stray delimiters is sketched below.)

    😎
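    A minimal sketch of that scan in Python, assuming a plain comma-delimited file with no text qualifier (the file name MISPCST.txt and the delimiter are assumptions; adjust to your actual format):

    ```python
    # Flag every line whose delimiter count differs from the header row.
    # PATH and DELIMITER are placeholders; adjust to the real file and format.
    DELIMITER = ","
    PATH = "MISPCST.txt"  # hypothetical file name

    with open(PATH, encoding="utf-8") as f:
        lines = f.read().splitlines()

    expected = lines[0].count(DELIMITER)
    for line_no, line in enumerate(lines, start=1):
        found = line.count(DELIMITER)
        if found != expected:
            print(f"Line {line_no}: {found} delimiters (expected {expected})")
    ```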

  • There is no extra delimiter character at line 4489.

  • venkat5677 (5/19/2014)


    There is no extra delimiter character at line 4489.

    OK, next thing then: is there a varchar(max)/nvarchar(max) (text_stream) or a float/real column in the data flow?

    😎

  • Yes, there is a float column in the data flow 🙂

  • venkat5677 (5/20/2014)


    Yes, there is a float column in the data flow 🙂

    Check for missing/empty values; these are known to cause this error. A workaround would be to replace the missing values with 0 (zero). A quick way to scan the file for them is sketched below.

    😎
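    A minimal sketch of one way to spot missing or non-numeric values in a particular column, again in Python and again under assumptions: the comma-delimited format, the header row, the file name, and the zero-based index of the float column are all placeholders:

    ```python
    import csv

    PATH = "MISPCST.txt"   # hypothetical file name
    FLOAT_COL = 3          # hypothetical zero-based index of the float column

    with open(PATH, newline="", encoding="utf-8") as f:
        reader = csv.reader(f)
        next(reader)  # skip the header row, if the file has one
        for line_no, row in enumerate(reader, start=2):
            value = row[FLOAT_COL].strip() if FLOAT_COL < len(row) else ""
            try:
                float(value)  # empty strings and text both fail this check
            except ValueError:
                print(f"Line {line_no}: missing or non-numeric value: {value!r}")
    ```

    In the package itself, the zero-replacement described above would typically be done in a Derived Column transformation, e.g. an expression along the lines of ISNULL([MyFloatCol]) ? 0 : [MyFloatCol], where [MyFloatCol] stands in for the real column name.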

  • Hi,

    Thanks for your inputs,

    But I didn't find any missing/empty values. I am able to process 4000 records from the file in one go, but I am not able to load all 8976 records at once.

    So I changed the settings in the OLE DB Destination as below:

    Rows per batch: 500 (earlier this was 1)

    Maximum insert commit size: 2147483647 (earlier this was 0)

    But my problem is still not solved.

    Please advise ASAP.

    Regards,

    Venkat

  • Change the order of rows in the input file and then see whether it fails at the same place. Just moving the first 1000 rows to the end, for example, should be enough.

    If it fails at a different place from before, there is almost certainly something in your data which is causing the issue.
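    A minimal sketch of that reordering in Python, assuming the first line is a header that should stay in place (both file names are placeholders):

    ```python
    # Move the first 1000 data rows to the end of the file, keeping the header
    # in place. If the load then fails at a different row, the data is suspect.
    PATH_IN = "MISPCST.txt"             # hypothetical input file name
    PATH_OUT = "MISPCST_reordered.txt"  # hypothetical output file name

    with open(PATH_IN, encoding="utf-8") as f:
        lines = f.read().splitlines()

    header, data = lines[0], lines[1:]
    reordered = data[1000:] + data[:1000]

    with open(PATH_OUT, "w", encoding="utf-8") as f:
        f.write("\n".join([header] + reordered) + "\n")
    ```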

    The absence of evidence is not evidence of absence
    - Martin Rees
    The absence of consumable DDL, sample data and desired results is, however, evidence of the absence of my response
    - Phil Parkin

  • I am able to move 1000 rows at a time, and in that way I can complete all 9000+ records, but when I try to process the entire 9000 records in one go it fails at row 4488.

    Thanks,

    Venkat

  • venkat5677 (5/22/2014)


    I am able to move 1000 rows at a time, and in that way I can complete all 9000+ records, but when I try to process the entire 9000 records in one go it fails at row 4488.

    Thanks,

    Venkat

    OK. But did you even try my suggestion?

    The absence of evidence is not evidence of absence
    - Martin Rees
    The absence of consumable DDL, sample data and desired results is, however, evidence of the absence of my response
    - Phil Parkin

  • Hi

    I did the following steps:

    I mapped the first column from source to destination and executed the package, and I was able to send all the records from the flat file to OLE DB. I then added the second and third columns in the same way, and so on up to 50 columns. But when I added column 51, I was no longer able to send all my data from the flat file to OLE DB.

    I then went back to the flat file, checked column 51, and found that there is a space within the data

    (for example, "First Name Last Name" in a single column).

    Thanks,

    Venkat

  • venkat5677 (5/22/2014)


    Hi

    I did the following steps:

    I mapped the first column from source to destination and executed the package, and I was able to send all the records from the flat file to OLE DB. I then added the second and third columns in the same way, and so on up to 50 columns. But when I added column 51, I was no longer able to send all my data from the flat file to OLE DB.

    I then went back to the flat file, checked column 51, and found that there is a space within the data

    (for example, "First Name Last Name" in a single column).

    Thanks,

    Venkat

    OK, as your responses seem to bear no relation whatsoever to my suggestions, I am going to leave you to it.

    The absence of evidence is not evidence of absence
    - Martin Rees
    The absence of consumable DDL, sample data and desired results is, however, evidence of the absence of my response
    - Phil Parkin

  • What are you using as a column delimiter and text qualifier?

    Regards

    Lempster

  • Note that in your data flow, you can right-click the green arrow (the path between components) and add a Data Viewer. You'll see the data as it flows from the source, BEFORE it reaches the destination. This can help you identify problem rows going forward.

