August 14, 2013 at 5:35 am
I have my output table set up in my SQL database, and I have tried the SSIS import feature, but I can't get it to work correctly!
Importing the text file into Excel, then importing into SQL gives me an ACE error, even after downloading the compatibility package.
Saving as .csv and then renaming it to .txt works, but the output table has some data overlap.
I know you can manually set the field lengths in Excel, but that seems like a very long-winded way to do it when I already have the table structure defined!
Is there an easier way to do this?
I am using SQL Server 2012 and Excel 2013.
Thanks
August 14, 2013 at 5:48 am
I would use SSIS (which is what the Import/Export Wizard uses behind the scenes). What were the errors that you encountered?
Is this a one-time shot, or do you have to upload flat files regularly?
Need an answer? No, you need a question
My blog at https://sqlkover.com.
MCSE Business Intelligence - Microsoft Data Platform MVP
August 14, 2013 at 5:58 am
This probably isn't the most elegant, easiest, or fastest solution, so other options are still welcome.
You can import the file into a (temporary) table with a single NVARCHAR(...) column. After the insert you can split out the individual values using the SUBSTRING function. The code would look something like this:
DECLARE @first_start_position INT
DECLARE @second_start_position INT
DECLARE @third_start_position INT
DECLARE @fourth_start_position INT
...
SET @first_start_position = 1
SET @second_start_position = 12
SET @third_start_position = 50
...
SELECT
-- each field's length is the next field's start position minus its own start position
RTRIM(SUBSTRING({imported_column}, @first_start_position, @second_start_position - @first_start_position)) AS value_1
, RTRIM(SUBSTRING({imported_column}, @second_start_position, @third_start_position - @second_start_position)) AS value_2
, RTRIM(SUBSTRING({imported_column}, @third_start_position, @fourth_start_position - @third_start_position)) AS value_3
, ...
INTO {destination_table}
FROM import_table
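For completeness, here is a minimal sketch of the import step itself. The file path is an assumption, raw_line stands in for {imported_column}, and this relies on the data containing no tab characters (the default field terminator), so that each line lands whole in the single column:

-- single-column staging table to hold each raw fixed-width line
CREATE TABLE import_table (raw_line NVARCHAR(4000))

-- load the file one line per row; the path is a placeholder,
-- and ROWTERMINATOR may need to be '\r\n' depending on how the file was produced
BULK INSERT import_table
FROM 'C:\data\clientfile.txt'
WITH (ROWTERMINATOR = '\n')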
August 14, 2013 at 6:01 am
Another possibility is to use a format file to define your data layout and then use the BULK INSERT command to actually perform the data load. The format file defines the source and data types, lengths, etc. I prefer this method for data loads that have to be repeated. I've found it to be very reliable and you can use it right from SQL.
A few caveats to point out up front, mostly around permissions and file access:
1. The file must be in the file system and accessible from the server
2. The user must have permission to read the file.
3. The user must have the bulkadmin server role.
Here's the MSDN page to get you started. http://msdn.microsoft.com/en-us/library/ms178129%28v=SQL.90%29.aspx
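To make that concrete, here's a minimal sketch with made-up field names, widths, paths, and table name. The first part is a non-XML format file: the first line is the bcp version (11.0 for SQL Server 2012), the second line is the field count, and a terminator of "" plus an explicit length gives a fixed-width field, with the row terminator on the last field:

11.0
3
1   SQLCHAR   0   11   ""       1   CustomerId     SQL_Latin1_General_CP1_CI_AS
2   SQLCHAR   0   38   ""       2   CustomerName   SQL_Latin1_General_CP1_CI_AS
3   SQLCHAR   0   6    "\r\n"   3   Filler         SQL_Latin1_General_CP1_CI_AS

Save that as something like clientfile.fmt, and then the load itself runs straight from T-SQL:

BULK INSERT dbo.destination_table
FROM 'C:\data\clientfile.txt'
WITH (FORMATFILE = 'C:\data\clientfile.fmt')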
HTH
August 14, 2013 at 11:27 am
I found the problem!
The data structure for the text file provided by the client was wrong ... duh!
It was actually a COBOL data structure; I have not seen one of those in over 25 years.
The Filler field at the end was PIC X(04) and should have been PIC X(06), so when importing 700K records it certainly made a mess!
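For anyone else who hits this, a quick sanity check would have caught the layout mismatch before the split. This sketch assumes the single-column staging table (import_table with a raw_line column) from the SUBSTRING suggestion above:

-- every row of a fixed-width file should be exactly the same length, so more
-- than one distinct value here means the record layout (or the file) is off;
-- DATALENGTH is used because LEN ignores trailing spaces, which a filler field is full of
SELECT DATALENGTH(raw_line) AS record_bytes, COUNT(*) AS row_count
FROM import_table
GROUP BY DATALENGTH(raw_line)
ORDER BY record_bytes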
Thanks for the feedback.
August 14, 2013 at 10:24 pm
isuckatsql (8/14/2013)
I found the problem! The data structure for the text file provided by the client was wrong ... duh!
It was actually a COBOL data structure; I have not seen one of those in over 25 years.
The Filler field at the end was PIC X(04) and should have been PIC X(06), so when importing 700K records it certainly made a mess!
Thanks for the feedback.
Just curious... what is the total length of each row, how many fields did you have for each row, and how long did it take to import the 700K rows?
--Jeff Moden
Change is inevitable... Change for the better is not.