November 29, 2006 at 4:42 pm
I have to store a series of unique numeric values in a database table. The longest of these is 20 digits, and every time I convert the generated text file into my database table the number is truncated to 16 significant digits, padded with trailing zeros. I have tried a variety of data types to preserve the value, but I cannot get anything to work. For example, the unique number 12345678912345678912 becomes 12345678912345670000. I would rather not require the number to come in as a text value, but if that is the way I have to go, then that is the way it has to be. Any suggestions would be helpful.
- Ed Baker
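[Editor's note: the truncation Ed describes is consistent with the value passing through a 64-bit floating-point type somewhere in the import chain, since an IEEE 754 double carries only about 15-17 significant decimal digits. A small Python sketch (illustrative only, not the actual ASP/ODBC code) shows the effect:]

```python
# The 20-digit identifier from the original post.
n = 12345678912345678912

# A double has a 53-bit mantissa, roughly 15-17 significant decimal
# digits, so round-tripping the value through float changes the
# low-order digits.
as_float = float(n)
round_tripped = int(as_float)

print(len(str(n)))         # 20 digits in the original value
print(round_tripped == n)  # False: precision was lost in the float

# Carrying the value as text preserves every digit exactly.
as_text = str(n)
print(int(as_text) == n)   # True: the text round-trip is exact
```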
November 29, 2006 at 5:05 pm
Hi Ed,
Which data types have you tried and which method are you using to import the data into the table?
Greg
November 30, 2006 at 3:26 am
Try numeric(38,0). It worked OK for the number you quoted.
November 30, 2006 at 8:44 am
Nigel,
Thanks for the info, I will give that a shot. As for the import method, it is classic ASP, using ODBC DSN connection strings. The primary recordset is created from a text file and then inserted into the table. I am still exploring the use of BULK INSERT, but I ran into an issue with the way our environment was set up. Thanks again, and any further assistance would be great.
- Ed
November 30, 2006 at 9:12 am
If the numbers are all integers, try BIGINT.
-SQLBill
November 30, 2006 at 9:46 am
bigint is too small for his example.
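[Editor's note: SQL Server's BIGINT is a signed 64-bit integer, so its maximum is 9,223,372,036,854,775,807 (19 digits). A quick Python check confirms the 20-digit example overflows it:]

```python
BIGINT_MAX = 2**63 - 1    # upper bound of SQL Server's signed 64-bit BIGINT
n = 12345678912345678912  # the 20-digit value from the thread

print(BIGINT_MAX)         # 9223372036854775807, only 19 digits
print(n > BIGINT_MAX)     # True: the value does not fit in BIGINT
```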
November 30, 2006 at 5:03 pm
Yeah, I found that to be true.
On a side note, the processing was being done using DSNs and Microsoft ODBC. I found that if I work directly on the server and do a BULK INSERT, the formatting of the table and values ends up as I want it. As mentioned, when the processing is done from the web application through the ODBC connection, the formatting is fouled. I have solved the problem by taking a different development approach and eliminating the processing of the flat file through the Microsoft ODBC text driver. Instead, I create a text stream, process it into an array, and insert the array values into each field of each record in the table. While I am sure that performance could become an issue in the future, I was able to process 20,000 records in under two minutes using this strategy, so I will just have to keep an eye on it.
Please feel free to add any other useful suggestions, and if anyone has experienced a similar issue with the MS ODBC text driver let me know.
Thanks,
Ed
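[Editor's note: Ed's workaround, reading the flat file into memory, splitting it into fields, and inserting each record with the 20-digit key bound as text, can be sketched as follows. This is a hypothetical illustration in Python: the table and column names are invented, and sqlite3 stands in for the real DSN/ODBC connection.]

```python
import sqlite3

# Simulated flat-file contents: pipe-delimited rows, one record per line.
flat_file = "12345678912345678912|widget\n98765432109876543210|gadget\n"

# Split the text stream into an array of [uid, name] records.
rows = [line.split("|") for line in flat_file.strip().splitlines()]

conn = sqlite3.connect(":memory:")
# Declaring the key column as TEXT means no numeric conversion can
# ever truncate the 20-digit value.
conn.execute("CREATE TABLE items (uid TEXT PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO items (uid, name) VALUES (?, ?)", rows)

for uid, name in conn.execute("SELECT uid, name FROM items ORDER BY uid"):
    print(uid, name)  # every digit of each 20-digit key survives intact
```

Binding the value as a string is the key point: the trade-off is losing numeric ordering and arithmetic in SQL, but for an opaque identifier that is usually acceptable.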