Need help creating foreign key relationships using BULK INSERT from a couple of CSV files

  • Hi,

    I have probably asked this question before but cannot remember; if I have, bear with me, because I probably wasn't successful last time.

    First I'll explain what the problem is, then the solution I am looking for.

    I have 2 CSV files, which I already know how to import into SQL Server; one CSV file is a Loading Schedule, the other a Completed Orders file.

    So in my database I have a table strictly for the Load Schedule, with an auto-incrementing ID. That table has 9 columns in it, and that's including the auto-increment ID.

    Next I have a table strictly for the Completed Orders, also with an auto-increment ID. That table has 11 columns, and that also includes the auto-increment ID.

    I am not sure if that is too many columns per table or not. When I create my tables from scratch I usually like to keep my columns to around 5 or 6 per table, and have foreign keys to each other. I guess this would be called normalization? Anyway...

    So now both of these tables have identical order numbers. What I want to do is break down my tables to have 5 or 6 columns. But this is where I need some guidance: not with turning these 2 tables into smaller ones, but with foreign-keying these smaller tables to each other.

    So I guess what I am asking is: how do I insert all this data from the CSV files into the smaller tables and foreign-key them all together, using the auto-incremented IDs as the foreign keys? Here is a test example...

    Table1: T1ID, OrderNumber, Tonnage, CustID, DateOrdered

    Table2: T2ID, T1ID, DateShip, CarrierID, WeightID

    Table3: WeightID, T2ID, TareWeight, GrossWeight, NetWeight

    Table4: T4ID, ScaledIn, ScaledOut, TimeLeftSite

    I know some of this looks off, but this would not be my final set of tables; this is just to give a kind of graphic representation of what I am talking about.

    So when I insert into Table1, I want to take the @@IDENTITY of that and place it into Table2, etc...

    I am not sure how to go about doing this; I can have anywhere from 1 to 50 rows in the CSV files.
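    Since the CSV can hold many rows at once, the identity hand-off above can be done set-based instead of row by row. A minimal sketch, assuming the CSV rows already sit in a staging table named #OrderStage and using the example tables from above (all names are illustrative, not the final schema):

    ```sql
    -- Step 1: insert the parent rows; T1ID is generated by the identity column here.
    INSERT INTO Table1 (OrderNumber, Tonnage, CustID, DateOrdered)
    SELECT s.OrderNumber, s.Tonnage, s.CustID, s.DateOrdered
    FROM #OrderStage AS s;

    -- Step 2: insert the child rows. @@IDENTITY / SCOPE_IDENTITY() only return
    -- the last single value, so for a multi-row insert we join back to Table1
    -- on the shared OrderNumber to pick up each row's generated T1ID.
    INSERT INTO Table2 (T1ID, DateShip, CarrierID, WeightID)
    SELECT t1.T1ID, s.DateShip, s.CarrierID, s.WeightID
    FROM #OrderStage AS s
    JOIN Table1 AS t1
      ON t1.OrderNumber = s.OrderNumber;
    ```

    The same join-back pattern repeats for each further child table, which is why it works the same whether the file has 1 row or 50.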

    I sure could use some samples on how to do this. Also, I know I could just as easily use the order number as an alternate key, but would that not slow down any queries against the tables? Would it be acceptable to keep my 2 tables as they are?

    If I didn't have to deal with CSV files I could do this all easily within my own code, but I am stuck having to use CSV file data.

    The CSV files... I import them using BULK INSERT. I am sure this may be doable with IF and ELSE statements; I have tried something similar a long time ago, but it failed when the data was not of equal lengths.
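    For reference, a minimal sketch of that staging step, assuming a comma-delimited file with a header row (the file path, column types, and table name are placeholders, not the real schema):

    ```sql
    -- Hypothetical staging table shaped like the Load Schedule CSV.
    CREATE TABLE #OrderStage (
        OrderNumber varchar(20),
        Tonnage     decimal(10, 2),
        CustID      int,
        DateOrdered datetime
    );

    -- Load the raw CSV rows into staging; the real inserts into the
    -- normalized tables are then done from here with INSERT ... SELECT.
    BULK INSERT #OrderStage
    FROM 'C:\imports\LoadSchedule.csv'
    WITH (
        FIELDTERMINATOR = ',',
        ROWTERMINATOR   = '\n',
        FIRSTROW        = 2   -- skip the header row
    );
    ```

    Staging first also sidesteps the unequal-lengths problem: the file lands as-is, and any conditional logic runs as set-based SQL afterwards.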

    Thanks

    Chris

    It would help if you would post some sample data. I would suggest that you attach sample files to the thread and then include the DDL for the tables you want the data loaded into.

    Based on what you are saying there are different ways of doing the loading. For what you want I would suggest looking at using SSIS, but without the information I asked for at the beginning I can't offer more.

    Adding on to what Jack said, your sample data will help us visualize your FK structure. For example, it sounds like OrderNumber is unique at the highest level, right? So you'd be using OrderNumber to find the T1ID in the parent table for the child tables...

    An alternative would be to put a UNIQUE constraint on OrderNumber and make the child tables FK to the actual OrderNumber instead of the T1ID (which is kind of repetitive, right?).
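    A sketch of that alternative, using the example table names from the first post (constraint names and the column type are made up; this assumes the child table is still empty, otherwise the new column needs a backfill before NOT NULL):

    ```sql
    -- Make OrderNumber a key in its own right on the parent table...
    ALTER TABLE Table1
        ADD CONSTRAINT UQ_Table1_OrderNumber UNIQUE (OrderNumber);

    -- ...carry the same value on the child table...
    ALTER TABLE Table2
        ADD OrderNumber varchar(20) NOT NULL;

    -- ...so the child can reference the business key directly,
    -- no identity lookup needed at load time.
    ALTER TABLE Table2
        ADD CONSTRAINT FK_Table2_Table1_OrderNumber
        FOREIGN KEY (OrderNumber) REFERENCES Table1 (OrderNumber);
    ```

    The trade-off is a wider key than an int identity, but the loads get simpler because the CSV already carries the order number.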

    Lowell


    --help us help you! If you post a question, make sure you include a CREATE TABLE... statement and INSERT INTO... statement into that table to give the volunteers here representative data. with your description of the problem, we can provide a tested, verifiable solution to your question! asking the question the right way gets you a tested answer the fastest way possible!

  • Hi,

    Tomorrow (I am on the afternoon shift this week) I will post the information that was requested in regards to helping me with this.

    Thanks

    Chris

  • Hi Guys,

    The more I thought about it, the more I decided to keep the tables denormalized. I figure I would do a backup every week, then clear the contents of the tables, and that would probably keep the queries quick.

    Thanks

    Chris

