May 28, 2014 at 2:54 pm
Putting aside the fact that this is a really old thread:
You could prevent the errors by using format files and staging tables. That's basically the same principle as using a column list in an INSERT INTO.
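Something along these lines, with made-up table and file names, is what I mean (the format file just maps the file fields to the staging columns explicitly, and the final INSERT uses a column list like any other):

-- Load the raw file into a staging table first (hypothetical names).
CREATE TABLE dbo.CustomerStage
(
    CustomerName VARCHAR(200) NULL,
    City         VARCHAR(200) NULL
);

-- The format file maps file fields to the staging columns explicitly,
-- so the column order of the final target table doesn't matter.
BULK INSERT dbo.CustomerStage
FROM 'C:\Imports\Customers.txt'
WITH (FORMATFILE = 'C:\Imports\CustomerStage.fmt');

-- Then move only the columns you want, with an explicit column list.
INSERT INTO dbo.Customer (CustomerName, City)
SELECT CustomerName, City
FROM dbo.CustomerStage;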
May 28, 2014 at 5:56 pm
Luis Cazares (5/28/2014)
Putting aside the fact that this is a really old thread: You could prevent the errors by using format files and staging tables. That's basically the same principle as using a column list in an INSERT INTO.
To add to that (and as already touched on by Eric M Russell in this thread), even without format files, you CAN Bulk Insert or BCP into a view that has the correct column order, which also allows you to skip read-only columns such as IDENTITY, TIMESTAMP, computed columns, etc, etc, etc.
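For anyone searching later, a rough sketch of what that looks like (the table, view, and file names are just for illustration):

-- Target table with columns that can't (or shouldn't) be loaded directly.
CREATE TABLE dbo.Product
(
    ProductID   INT IDENTITY(1,1) PRIMARY KEY,
    ProductName VARCHAR(100) NOT NULL,
    ListPrice   MONEY        NOT NULL,
    ModifiedOn  DATETIME     NOT NULL DEFAULT (GETDATE())
);
GO
-- A view that exposes only the loadable columns, in the same order as the file.
CREATE VIEW dbo.ProductImport
AS
SELECT ProductName, ListPrice
FROM dbo.Product;
GO
-- BULK INSERT (or bcp in) against the view; the IDENTITY column and the
-- column with a default are simply skipped.
BULK INSERT dbo.ProductImport
FROM 'C:\Imports\Products.txt'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n');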
But, to add to what Vladan started and I sometimes agree with, having the columns in a particular order as listed by SELECT * sometimes makes life a little easier on the human who's researching a table.
As with all else in SQL, "It Depends". 😛
--Jeff Moden
Change is inevitable... Change for the better is not.
May 29, 2014 at 9:01 am
OK, teach "the old dog" a new trick.
I do know of format files; however, I have now learned that using a format file ignores my truncation error, which in this particular case is good. Now I need to learn how to tell when there is an error in the background, in case it does matter. Thanks. Joe
May 29, 2014 at 12:27 pm
DataJoe (5/29/2014)
OK, teach "the old dog" a new trick. I do know of format files; however, I have now learned that using a format file ignores my truncation error, which in this particular case is good. Now I need to learn how to tell when there is an error in the background, in case it does matter. Thanks. Joe
Take a look at the switch settings for BCP in Books Online. There you will find the switch for the number of errors to ignore (I set mine to 2 billion) and the one for naming the error files. If you run it with just the number of errors to ignore, you'll get some really informative error descriptions for each bad row on the screen. The errors stored in the files (if you elect to use that option) are a bit more cryptic but still discernible if you look up the error(s) on the web.
As for truncation errors, I always set up the format file to allow much larger field widths than the target table has so that it catches the truncation errors instead of bypassing them.
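For example, a non-XML format file along these lines (everything here is illustrative; the fourth column is the host-file field length, deliberately set wider than the table's columns):

12.0
2
1   SQLCHAR   0   500   ","      1   CustomerName   SQL_Latin1_General_CP1_CI_AS
2   SQLCHAR   0   500   "\r\n"   2   City           SQL_Latin1_General_CP1_CI_AS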
Whether you choose to output the errors to error files or you simply watch them happen, the switch setting that allows errors without causing BCP to fail will let the good rows load without stopping. Here's the switch setting for that from BOL... note the caveat about MONEY and BIGINT. Also note that an error on one line can bleed over onto another IF it disrupts the number and type of delimiters. BCP is NOT row sensitive in the typical sense. It IS delimiter sensitive, including but not limited to the end-of-row delimiter.
-m max_errors
Specifies the maximum number of syntax errors that can occur before the bcp operation is canceled. A syntax error implies a data conversion error to the target data type. The max_errors total excludes any errors that can be detected only at the server, such as constraint violations.
A row that cannot be copied by the bcp utility is ignored and is counted as one error. If this option is not included, the default is 10.
Note:
The -m option also does not apply to converting the money or bigint data types.
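Put together, a command line using those switches might look something like this (the server, database, table, and file names are made up; -m is the max-errors switch and -e names the error file):

bcp MyDatabase.dbo.CustomerStage in "C:\Imports\Customers.txt" -c -t"," -S MyServer -T -m 2000000000 -e "C:\Imports\Customers.err"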
As a bit of a sidebar, this is why I almost always use a "staging table". It allows me to do final validations and checks. I'll typically add a validation indication column to the staging table so that I can validate and/or massage data into the required datatype, etc, etc, without locking up the target table to do so, and then do a mass insert, update, or delete based on the validation indication.
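A rough sketch of that pattern, with purely illustrative names and validation rules (TRY_CONVERT assumes SQL Server 2012 or later):

-- Staging table: everything lands as wide VARCHAR plus a validation flag.
CREATE TABLE dbo.OrderStage
(
    OrderNumber VARCHAR(50) NULL,
    OrderDate   VARCHAR(50) NULL,
    OrderAmount VARCHAR(50) NULL,
    IsValid     BIT NOT NULL DEFAULT (0)
);

-- Validate/massage without touching the target table.
UPDATE dbo.OrderStage
   SET IsValid = 1
 WHERE TRY_CONVERT(DATE, OrderDate) IS NOT NULL
   AND TRY_CONVERT(MONEY, OrderAmount) IS NOT NULL;

-- Then one mass insert of only the rows that passed validation.
INSERT INTO dbo.Orders (OrderNumber, OrderDate, OrderAmount)
SELECT OrderNumber, CONVERT(DATE, OrderDate), CONVERT(MONEY, OrderAmount)
FROM dbo.OrderStage
WHERE IsValid = 1;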
--Jeff Moden
Change is inevitable... Change for the better is not.
May 29, 2014 at 12:32 pm
Thanks so much for the feedback. Of all the sites I hit for answers, SQLServerCentral by far provides the best answers with a readable layout! Imagine that!
Joe
May 29, 2014 at 12:50 pm
You bet and thank you for the feedback, Joe. Us "old dogs" gotta stick together. 😛
--Jeff Moden
Change is inevitable... Change for the better is not.
May 29, 2014 at 1:00 pm
Almost forgot. BULK INSERT has all the same capabilities for importing that BCP has. In the early days it didn't allow for the sequestration of errors in error files, but it does now. Why is that important? It's important because BULK INSERT is T-SQL, which means it can all be done from a stored procedure if that's what you need.
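For example, a minimal sketch of wrapping it in a procedure (the names are made up; MAXERRORS and ERRORFILE are the BULK INSERT counterparts of bcp's -m and -e switches):

CREATE PROCEDURE dbo.ImportCustomers
AS
BEGIN
    SET NOCOUNT ON;

    -- Same kind of load as the bcp example, but entirely in T-SQL so it
    -- can be scheduled or called like any other stored procedure.
    BULK INSERT dbo.CustomerStage
    FROM 'C:\Imports\Customers.txt'
    WITH
    (
        FIELDTERMINATOR = ',',
        ROWTERMINATOR   = '\n',
        MAXERRORS       = 2000000000,
        ERRORFILE       = 'C:\Imports\Customers.err'
    );
END;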
It's a real shame that MS closed the CONNECT item for a BULK EXPORT as "will not fix".
--Jeff Moden
Change is inevitable... Change for the better is not.
May 29, 2014 at 1:06 pm
Thanks again.
I am starting to use Bulk Insert a lot more in my job now. I'll have to hone my skill set just a bit; I'm converting most of our procedures from Access to SQL.
May 30, 2014 at 1:05 pm
WOW - 2005, 2007, 2011, 2014 - what a thread life!
Regards,
Rudy Komacsar
Senior Database Administrator
"Ave Caesar! - Morituri te salutamus."
May 30, 2014 at 1:24 pm
Rudyx - the Doctor (5/30/2014)
WOW - 2005, 2007, 2011, 2014 - what a thread life!
Well, column positioning in SQL is one of those eternal questions...
Let's meet up here again in May 2020, but I think the correct answer will still be: Create a view.
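And that answer really is about as short as it sounds; something like this, with illustrative names:

-- A view gives you whatever column order you like without touching the table.
CREATE VIEW dbo.CustomerOrdered
AS
SELECT CustomerID, NewColumn, CustomerName, City
FROM dbo.Customer;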
"Do not seek to follow in the footsteps of the wise. Instead, seek what they sought." - Matsuo Basho
May 31, 2014 at 5:07 pm
Seconding MICHAEL DEPAULA.
I did a similar thing for this kind of work.
I know that sometimes the business requirement is to see the column data in a specific place when a user does SELECT TOP 100 * FROM table (NOLOCK), etc.
I solved this with the steps below (a T-SQL sketch follows the steps):
1) Renamed the table to tablename_old.
2) Added the new column to tablename_old.
3) Scripted out the table and reorganized the columns.
4) Moved the data, matching the columns of both tables.
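A sketch of those steps with made-up table and column names:

-- 1) Rename the existing table out of the way.
EXEC sp_rename 'dbo.Customer', 'Customer_old';

-- 2) & 3) Recreate the table from a script with the columns in the desired
--         order, including the new column positioned where you want it.
CREATE TABLE dbo.Customer
(
    CustomerID   INT          NOT NULL PRIMARY KEY,
    CustomerName VARCHAR(100) NOT NULL,
    NewColumn    VARCHAR(50)  NULL,  -- the newly added column, placed here
    City         VARCHAR(100) NULL
);

-- 4) Move the data across by matching columns explicitly.
INSERT INTO dbo.Customer (CustomerID, CustomerName, City)
SELECT CustomerID, CustomerName, City
FROM dbo.Customer_old;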
Thanks..