September 7, 2009 at 4:38 am
Hi,
I need to design a database for an enterprise solution (office automation software). It contains some critical data, and I'm not sure whether to put all the data in one table or split it into two tables (related by primary and foreign keys).
Searches on this data are heavy (there are far fewer INSERTs than SELECTs).
So my question is:
For the best search performance, what should I do: two tables, or one big one?
Note that if I have two tables, I have to join them for every search!
September 7, 2009 at 6:23 am
farax_x (9/7/2009)
Hi, I need to design a database for an enterprise solution (office automation software). It contains some critical data, and I'm not sure whether to put all the data in one table or split it into two tables (related by primary and foreign keys).
Searches on this data are heavy (there are far fewer INSERTs than SELECTs).
So my question is:
For the best search performance, what should I do: two tables, or one big one?
Note that if I have two tables, I have to join them for every search!
If I read your question right, you would always be joining the tables for your search anyway. If that is true, then I can't see using two tables unless your row length is over 8,000 bytes. The join will cost processing cycles, but once the page is in memory it is a pretty cheap operation. Now, if your row length is over 8,000 bytes, you could place the columns you are not going to search on in the other table and join to it. After the primary table filters down the rows, the join should be much cheaper.
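A minimal sketch of that vertical split, using hypothetical table and column names (`DocumentSearch`, `DocumentDetail`, etc. are illustrative, not from the original post):

```sql
-- Hypothetical vertical partition: the columns you search on live in a
-- narrow table; the wide, rarely-searched columns live in a second table
-- joined on the primary key.
CREATE TABLE dbo.DocumentSearch
(
    doc_id INT IDENTITY PRIMARY KEY,
    title NVARCHAR(200) NOT NULL,
    author NVARCHAR(100) NOT NULL
);

CREATE TABLE dbo.DocumentDetail
(
    doc_id INT NOT NULL PRIMARY KEY
        REFERENCES dbo.DocumentSearch (doc_id),
    body NVARCHAR(4000) NULL,
    notes NVARCHAR(4000) NULL
);

-- Filter on the narrow table first, then join out for the wide columns;
-- only the rows that survive the WHERE clause pay for the join.
SELECT s.title, d.body
FROM dbo.DocumentSearch AS s
JOIN dbo.DocumentDetail AS d
    ON d.doc_id = s.doc_id
WHERE s.author = N'Smith';
```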
CEWII
September 8, 2009 at 2:02 am
Hi
Normally you design tables with data integrity and the level of normalization in mind. If the data belongs to one entity, it goes in one table; if the data describes different entities, you put it in multiple tables. In both cases you may have primary key/foreign key relationships with other tables. If your data is more than 8,060 bytes per row, then of course you need to split it across multiple tables.
If you are asking purely from a performance point of view, there are several factors to keep in mind: proper indexes, good hardware (memory especially), and, most important of all, properly written queries influence performance whether you have one table or multiple tables. You need to test both scenarios to reach a proper conclusion.
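As one concrete illustration of the indexing point (table and column names here are hypothetical), a nonclustered index on the searched column, with the returned columns included, lets a SELECT-heavy workload avoid scanning the base table:

```sql
-- Hypothetical sketch: index the column used in WHERE clauses, and
-- INCLUDE the columns the query returns so the index alone can cover it.
CREATE NONCLUSTERED INDEX IX_Documents_Author
ON dbo.Documents (author)
INCLUDE (title, created_date);
```

Whether one table or two performs better then depends far more on whether queries like this are covered by an index than on the table count itself.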
"Keep Trying"
September 8, 2009 at 10:37 am
Thanks for your response, Chirag 🙂
September 10, 2009 at 8:51 pm
Chirag (9/8/2009)
If your data is more than 8060 bytes per row then of course you need to put the data in multiple tables.
This is no longer true in SQL Server 2005 and later for varchar, nvarchar, varbinary, sql_variant, and CLR user-defined type columns. Individual column widths are still capped at 8,000 bytes, but the row size can exceed 8,060 bytes by a significant margin, with the excess pushed to ROW_OVERFLOW_DATA allocation units. The MAX data types will also let you store up to 2 GB per column (as will the old LOB types, but those are deprecated).
Example:
USE tempdb;
CREATE TABLE dbo.SearchMe
(
    row_id INTEGER IDENTITY PRIMARY KEY,
    data NVARCHAR(4000) NULL,
    more_data NVARCHAR(4000) NULL,
    even_more_data NVARCHAR(4000) NULL
);
INSERT dbo.SearchMe
(
data,
more_data,
even_more_data
)
VALUES (
REPLICATE(NCHAR(17599), 3998),
REPLICATE(NCHAR(48000), 4000),
REPLICATE(NCHAR(10000), 3999)
);
SELECT data,
more_data,
even_more_data
FROM dbo.SearchMe;
DROP TABLE dbo.SearchMe;
I'm not saying it's necessarily a good idea, just that the original statement is not quite the whole story.
More details here.
Paul