November 10, 2010 at 1:55 am
Hi there,
Using SQL Server 2008, I have a question. For a new customer I must set up a database. Two tables in particular will be very large: tblCampaign and tblCampaignHistory. tblCampaign will grow by an estimated 200,000 records a week; tblCampaignHistory (which tracks mutations of the former) by an estimated 600,000 a week.
The row size of tblCampaign is not known yet, because the customer still has to define the campaign, but it may add up to 60 columns with a total length of 4,000 to 5,000 bytes. The row size of tblCampaignHistory is 22 columns with a length of 2,074 bytes. tblCampaign's PID is a clustered index, used most of the time (but not always) for the relationship between the tables.
At peak moments, up to 400 concurrent users will be reading from and writing to these tables.
Question: Should I set it up like this? I could also break the campaign up into segments: not one big tblCampaign/tblCampaignHistory pair serving 400 users, but ten smaller pairs serving 40 users each. Or maybe even set up more than one database (more than one server is not an option, though).
Thanks for any input on this matter,
Raymond
November 10, 2010 at 3:55 am
I would look towards partitioning. Although the end user will see one table, the physical filegroups can be stored on different drives for faster access.
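As a rough sketch, SQL Server 2008 table partitioning is set up with a partition function plus a partition scheme mapping partitions to filegroups. The partitioning column (CampaignDate), boundary dates, and filegroup names below are assumptions for illustration, not from the original post:

```sql
-- Monthly ranges; RANGE RIGHT means each boundary value starts a new partition.
CREATE PARTITION FUNCTION pfCampaignByMonth (datetime)
AS RANGE RIGHT FOR VALUES ('2010-11-01', '2010-12-01', '2011-01-01');

-- Three boundaries produce four partitions, so four filegroups are mapped.
-- These filegroups can live on different drives.
CREATE PARTITION SCHEME psCampaignByMonth
AS PARTITION pfCampaignByMonth
TO (FG_2010_10, FG_2010_11, FG_2010_12, FG_2011_01);

-- The table (and its clustered index) is then created on the scheme:
-- CREATE TABLE tblCampaign ( /* columns */ ) ON psCampaignByMonth (CampaignDate);
```

Note that the partitioning column must be part of any unique/clustered index on the partition scheme, which may affect how the PID key is defined.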
Also consider creating non-clustered indexes on the columns that will be queried most frequently.
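Something like the following, with hypothetical column names since the campaign schema isn't defined yet:

```sql
-- Non-clustered index on a frequently filtered column; INCLUDE adds
-- covering columns so common queries avoid key lookups into the table.
CREATE NONCLUSTERED INDEX IX_tblCampaign_CustomerID
ON tblCampaign (CustomerID)
INCLUDE (CampaignDate, CampaignStatus);
```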
November 10, 2010 at 9:28 am
Partitioning is important, but if there are LOB-type columns in here, also think about partitioning vertically, because of some of the issues around index rebuilds with LOB columns (no online operations in SQL Server 2008).
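A minimal sketch of that vertical split: the wide/LOB columns move into a 1:1 side table keyed on the same PID, so index rebuilds on the hot, narrow table can run ONLINE. Table and column names here are illustrative assumptions:

```sql
-- Hypothetical side table holding only the LOB data; the hot tblCampaign
-- keeps the narrow, frequently accessed columns and stays rebuildable online.
CREATE TABLE tblCampaignDetail (
    PID          int            NOT NULL
        CONSTRAINT PK_tblCampaignDetail PRIMARY KEY CLUSTERED,
    CampaignText nvarchar(max)  NULL,   -- example LOB column moved out
    CONSTRAINT FK_CampaignDetail_Campaign
        FOREIGN KEY (PID) REFERENCES tblCampaign (PID)
);
```

Queries that need the LOB data join back on PID; everything else never touches the wide table.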