November 19, 2001 at 1:47 pm
I have a performance problem where I need to convert a 36 MB CSV into a new CSV file.
The CSV has 6 fields: isbn (char), whse (char), stock (int), backorder (int), discount (char), price (float).
There are four possible values for whse. I need to merge the data into one row per unique isbn, with stock, backorder, discount, and price.
What I have been trying to do is parse the rows through the FSO and set the whse and stock fields into arrays, then write the row to the new file.
Does anyone know of a better solution?
Thanks,
Howard
November 19, 2001 at 3:37 pm
I am not sure what you mean. Can you provide more information, maybe some sample data?
Why not use a CASE statement for the warehouse? I am confused as to why you parse this.
Steve Jones
November 19, 2001 at 4:01 pm
Thanks for the reply Steve,
From the new csv I need to update a table with the isbn, stock, and price.
This process runs nightly. Right now it takes about 40 minutes for just this part.
The process includes checking to see if a zip file exists, unzipping it to another directory, renaming it, running this conversion, and then deleting the file.
Btw, there are no headers in the csv. The first field will sometimes have a character at the end, so it has to stay as a char. There can be more than 1,200,000 rows.
The following are a few rows:
0001025252,SOM,1,0,H,9.95
0001046403,MOM,0,3,H,14.95
0001047647,MOM,2,0,H,14.95
0001047647,SOM,3,0,H,14.95
0001047655,MOM,3,0,H,14.95
0001047868,MOM,4,0,H,14.95
0001047868,REN,3,0,H,14.95
0001047868,SOM,3,0,H,14.95
0001047973,SOM,1,0,H,14.95
0001048074,MOM,1,0,H,22.95
0001048503,SOM,2,0,H,14.95
0001049143,COM,3,0,H,11.95
0001049143,MOM,1,5,H,11.95
0001049143,REN,5,0,H,11.95
0001049143,SOM,4,0,H,11.95
0001049283,REN,1,0,H,14.95
0001049356,COM,1,0,H,14.95
0001049356,SOM,6,0,H,14.95
0001049690,MOM,0,1,H,14.95
0001049690,REN,1,0,H,14.95
0001050184,REN,1,0,H,14.95
0001050184,SOM,0,3,H,14.95
0001050192,MOM,3,0,H,14.95
0001050494,MOM,1,0,H,14.95
0001050494,REN,0,1,H,14.95
0001052012,COM,0,1,H,14.95
0001052012,SOM,0,1,H,14.95
0001052020,COM,3,0,H,14.95
0001052020,SOM,3,0,H,14.95
0001052349,MOM,1,0,H,14.95
0001052349,REN,2,0,H,14.95
0001052403,COM,0,2,H,14.95
0001052403,MOM,2,0,H,14.95
0001052403,SOM,0,23,H,14.95
Howard
November 21, 2001 at 6:33 pm
I think you'd be better off importing the file first, then doing any data work. Using FSO line by line is going to be slower than you'd like, I expect.
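Something along these lines would do the initial load (just a sketch; the table name, file path, and column types are guesses based on your sample data):

    -- staging table and file path are placeholders
    CREATE TABLE bt_staging (
        isbn      varchar(13),
        whse      char(3),
        stock     int,
        backorder int,
        discount  char(1),
        price     money
    )

    BULK INSERT bt_staging
    FROM 'C:\import\bt.csv'
    WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n')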
Andy
November 21, 2001 at 7:54 pm
Thanks, Andy,
I played with it a little further and came up with a solution.
I created a system DSN and used a query.
"Select isbn, sum(stock) as totalstock, discount, price from bt.csv group by isbn, discount, price"
Then mapped it to a "holding table". It took about 100 seconds to run.
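(In T-SQL terms the holding-table step boils down to roughly this; the real query ran against bt.csv through the DSN, and the staging/holding table names here are only placeholders for illustration:)

    CREATE TABLE bt_holding (
        isbn       varchar(13),
        totalstock int,
        discount   char(1),
        price      money
    )

    -- same GROUP BY, but run against a staging table if the file
    -- has already been bulk loaded as Andy suggested
    INSERT INTO bt_holding (isbn, totalstock, discount, price)
    SELECT isbn, SUM(stock) AS totalstock, discount, price
    FROM bt_staging
    GROUP BY isbn, discount, price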
From there I ran a stored procedure to update the stock, price, and discount fields in my products table. Another 5 seconds.
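(Roughly what that update amounts to; the products column names and the isbn join are assumptions for the sketch:)

    -- push the aggregated figures into the products table
    UPDATE p
    SET p.stock    = h.totalstock,
        p.price    = h.price,
        p.discount = h.discount
    FROM products p
    INNER JOIN bt_holding h
        ON h.isbn = p.isbn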
Howard
November 22, 2001 at 5:49 am
Howard,
Would you mind telling us how many records the 36 MB file import came to?
About 100 + 5 seconds sounds fast.
This might be a good road to go down, on the conversion job that I've got ahead of me.
TIA
Henrik Staun Poulsen
Stovi Software
Denmark
November 22, 2001 at 8:52 am
Henrik
I checked the log from last night's run.
It took 119 seconds for the conversion into the holding table and 7 seconds for the update.
It returned 442,804 rows from the 1,264,872 rows in the file.
Howard
November 23, 2001 at 8:07 am
Howard,
Thank you very much.
Quite impressive figures.
Henrik