April 9, 2006 at 7:50 am
I am importing a table into an identical table in another database.
The only difference in structure is that the new table has a row_id column, which is an identity column that should increment by 1 with each new row.
This is a large table with 430 million rows
The DTS package runs and then, seemingly at the end (six hours later), it fails with an error saying you cannot insert a null value into this column. The column does not exist in the source table, and no data is being inserted into it, as it should be auto-generated.
I have seen this before and wound up dropping the column, inserting the data, creating the column again later, filling it with sequential numbers through code, and then making it an identity. I do not want to try that on this table if I can avoid it, since it is quite large.
I cannot find the answer after many hours of research.
Anyone know how to fix this?
April 9, 2006 at 8:01 am
Just wanted to add that the identity insert option is not checked (I have tried it both ways already).
April 10, 2006 at 2:55 am
Can't help much on the DTS part, unfortunately, but if this is a simple task like picking data from one table and inserting it into another on the same server, you could use a straight T-SQL statement for it:
-- row_id is omitted from the column list, so the identity value is generated automatically
INSERT otherdb.dbo.newTable (col2, col3, col4...)
SELECT col2, col3, col4...
FROM thisdb.dbo.oldTable
WHERE .....
/Kenneth
April 11, 2006 at 4:59 pm
The first thing I would recommend is limiting your batch size so that you're not trying to commit the entire transaction at one time, if you haven't already done that. This should be the Insert Batch Size option. Perhaps set it at 10k or so for starters.
Use the Exception file - this will give you something you can look at to verify that the rows are mapping correctly.
As a last check, verify that the table exists on the destination server with an IDENTITY column and that nothing is attempting to map to that. At the very least, using a smaller commit size should return the error a lot faster.
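If the DTS batch size doesn't get you there and you fall back to the straight T-SQL route Kenneth mentioned, you can get the same small-commit behaviour by walking the source table in key ranges. Treat this strictly as a rough sketch - it assumes the source table has a unique, indexed numeric key (src_key below is a placeholder name) and borrows the column names from Kenneth's example; each INSERT commits on its own, so a failure only loses one batch:

-- copy in batches; src_key, the column names, and the batch size are placeholders
DECLARE @lastKey int, @maxKey int, @batch int
SET @batch = 100000
SELECT @maxKey = MAX(src_key) FROM thisdb.dbo.oldTable
SET @lastKey = 0   -- or MIN(src_key) - 1 if the keys don't start at 1

WHILE @lastKey <= @maxKey
BEGIN
    -- row_id is left out of the column list, so the identity is generated for each row
    INSERT otherdb.dbo.newTable (col2, col3, col4)
    SELECT col2, col3, col4
    FROM thisdb.dbo.oldTable
    WHERE src_key > @lastKey AND src_key <= @lastKey + @batch

    SET @lastKey = @lastKey + @batch
END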
-Pete
April 12, 2006 at 6:33 am
Thanks Pete -
I finally got this data in by playing with the batch size and allowing 10 errors.
(identity was not mapped)
I am very interested in your comment on the exception file - where can I get more info on that?
April 12, 2006 at 9:02 am
I'd probably just set it up - use a file like C:\MyDTSPackageName_Error.txt for the exceptions and let it run. When there are errors, the output (and possibly the offending row - it's been a while) will appear in that file. Just make sure you look in the appropriate place for the file - if you ran it as a job on the server, it will probably be on the server; if you run it locally, it will be local.
Clicking the "Help" button while on that page should also point you in the right direction. As mentioned, it's been a while since I used it.
On an interesting side note, SSIS has the ability to pipe errors into a separate output so you can still work with rows/data that have problems. You can even put a watcher on it to see what's going on in debug mode.
-Pete
April 12, 2006 at 9:22 am
I think that does not work with the bulk transfer option, which I have to use for this volume of data.
Oracle is the standard here, so I'm not sure I can get an upgrade to SQL Server 2005.
Thanks, though.
April 13, 2006 at 5:02 am
Thinking about it, this seems like a perfect job for bcp, skipping DTS altogether. If the job is just to load the table from a file, then this is what bcp does, and does well.
You need a format file, though, because of the identity column in the table.
Look up bcp.exe in BOL and see if it's something worth trying.
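Very roughly, the shape of it would be a bcp out of the source table and a bcp in with a format file. Treat this strictly as a sketch - the server names, file names, lengths and terminators are placeholders, and your real table has many more columns than the two shown:

bcp thisdb.dbo.oldTable out oldTable.dat -c -S SourceServer -T
bcp otherdb.dbo.newTable in oldTable.dat -f newTable.fmt -S DestServer -T -b 10000 -e bcp_errors.txt

And newTable.fmt along these lines (adjust lengths, terminators and collations to your own columns):

8.0
2
1   SQLCHAR   0   12   "\t"     2   col2   ""
2   SQLCHAR   0   50   "\r\n"   3   col3   ""

The point of the format file is that it only lists the fields actually present in the data file and maps them to server columns 2 and 3, so column 1 (the row_id identity) is never referenced and gets generated as the rows load. The -b switch gives you the same small-commit behaviour as the batch size in DTS, and -e writes any failed rows to an error file.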
/Kenneth