November 11, 2006 at 6:19 am
I learned database design informally while working as a coder on projects back in the 1990s. Back then (as I remember), when we had very large databases with multiple tables and multi-part keys, we handled it as follows, because we found the databases would scale better.

Here is the problem I have and how I plan on implementing the keys, and I want to know whether this is the best way for the database to scale for large DBs and queries.
Table A
------------
A_ID (PK)
Table B
-----------
B_ID (PK)
A_ID (FK)
TABLE C
-------------
C_ID (PK)
B_ID (FK)
A_ID non-key, no ref. integrity built in, but I handle integrity on inserts.
TABLE D (Link Attribute table)
-------------
D_ID (PK)
B_ID non-key, no ref. integrity built in, but I handle integrity on inserts.
C_ID (FK)
E_ID (FK)
TABLE_E
-------------
E_ID (PK)
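In case it is clearer, here is roughly the same structure written out as DDL (table and column names as above; the int data types and the CREATE order are just my assumptions):

CREATE TABLE TableA (
    A_ID int NOT NULL PRIMARY KEY
);

CREATE TABLE TableB (
    B_ID int NOT NULL PRIMARY KEY,
    A_ID int NOT NULL REFERENCES TableA (A_ID)     -- FK
);

CREATE TABLE TableC (
    C_ID int NOT NULL PRIMARY KEY,
    B_ID int NOT NULL REFERENCES TableB (B_ID),    -- FK
    A_ID int NOT NULL                               -- non-key, no FK; I check it on insert
);

CREATE TABLE TableE (
    E_ID int NOT NULL PRIMARY KEY
);

CREATE TABLE TableD (                               -- link attribute table
    D_ID int NOT NULL PRIMARY KEY,
    B_ID int NOT NULL,                              -- non-key, no FK; I check it on insert
    C_ID int NOT NULL REFERENCES TableC (C_ID),     -- FK
    E_ID int NOT NULL REFERENCES TableE (E_ID)      -- FK
);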
For queries I will often want to show all of the quantities from each table for a given value in Table A. My thought is that with this structure I optimize this query path, cutting down on long multi-path joins and hopefully making the queries more efficient for very large databases.

But is this true?
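To make the query path concrete, here is a rough sketch of what I mean, just counting the TABLE C rows for one value of A (@GivenA is a placeholder; a real query would sum actual quantity columns):

-- Long path: reach TABLE C only through TABLE B
SELECT COUNT(*) AS C_Rows
FROM TableB AS b
JOIN TableC AS c ON c.B_ID = b.B_ID
WHERE b.A_ID = @GivenA;

-- Short path: use the redundant A_ID stored directly on TABLE C
SELECT COUNT(*) AS C_Rows
FROM TableC AS c
WHERE c.A_ID = @GivenA;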
November 14, 2006 at 11:39 am
In TABLE C, does the combination of B_ID and A_ID need to exist in TABLE B?

As a general rule, FKs should not affect the way you query the DB. They have a very small impact on performance when you INSERT a record.

If you create indexes on all the FK columns, the join from each FK to its parent PK should be easy. I do it on tables with a few hundred million records and it works just fine.

FKs are best enforced by the DB, since program logic is more bug-prone. Also, data can be inserted without going through the application, which would violate referential integrity.
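For example, for the A_ID column on TABLE C that you are currently policing in application code, something like this (constraint and index names are just placeholders):

-- Let the database enforce the relationship
ALTER TABLE TableC
    ADD CONSTRAINT FK_TableC_TableA
    FOREIGN KEY (A_ID) REFERENCES TableA (A_ID);

-- Index the FK column so the join back to the parent PK stays cheap
CREATE INDEX IX_TableC_A_ID ON TableC (A_ID);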
Good luck....
November 15, 2006 at 6:14 am
Thanks for the reply.