September 26, 2005 at 1:24 pm
I have a DB which has a single table for storing logs from devices. This DB needs to retain up to 200 GB of log data. The table will be hit with multiple inserts 24x7.
I need to have
- A daily purge process which will delete up to 10 GB of data per day in minimum time (it could deadlock with the inserts)
- Good select performance on the table, so that the logs for a device can be viewed easily
- Minimum backup time
Since the DB has only a single table with a clustered index on it, I was thinking of having one data drive, one log drive, and one backup drive.
Considering all these what would be the best DB design?
September 27, 2005 at 3:17 am
Investigate partitioning... you should be looking to spread different days' data into different partitions, and then use a view to "unify" the data into one. This should allow the purging to take place with minimal impact on the inserts/reads.
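A minimal sketch of the idea, using a partitioned view over one table per day (all table and column names here are made up for illustration). Each daily table carries a CHECK constraint on the date so the optimizer can skip irrelevant partitions, and the purge becomes a fast metadata operation instead of a massive DELETE:

```sql
-- Hypothetical sketch: one table per retained day, with a CHECK
-- constraint on the log date for partition elimination.
CREATE TABLE dbo.DeviceLog_20050926 (
    LogID    bigint   IDENTITY NOT NULL,
    LogDate  datetime NOT NULL
        CHECK (LogDate >= '20050926' AND LogDate < '20050927'),
    DeviceID int NOT NULL,
    Message  varchar(2000) NULL,
    CONSTRAINT PK_DeviceLog_20050926
        PRIMARY KEY CLUSTERED (LogDate, LogID)
);
-- ...one such table per day you retain...

-- A view presents the daily tables as a single logical table.
CREATE VIEW dbo.DeviceLog
AS
SELECT LogID, LogDate, DeviceID, Message FROM dbo.DeviceLog_20050926
UNION ALL
SELECT LogID, LogDate, DeviceID, Message FROM dbo.DeviceLog_20050927;
GO

-- The daily purge drops the oldest table (near-instant, minimal
-- logging) and the view is recreated without it.
DROP TABLE dbo.DeviceLog_20050919;
```

If you are on SQL Server 2005, native partitioned tables with `ALTER TABLE ... SWITCH` offer a similar "purge a whole day at once" pattern without maintaining the view by hand.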
September 27, 2005 at 4:17 am
I'd also revisit the database/table design. It seems like you're trying to achieve two outcomes in one hit:
1) provide quick insert/delete response 24x7
2) hold 200gb of data for decision support and analysis
In the database world this is usually broken into
1) normalised OLTP tables with minimal indexing.
2) de-normalised data warehouse with heavy indexing
--------------------
Colt 45 - the original point and click interface
September 27, 2005 at 10:00 am
Also, when writing any queries for that table, be sure to use the WITH (NOLOCK) hint next to your table name. This ensures the select query takes no shared locks (at the cost of possible dirty reads), so it won't block the inserts.
Another thing that comes to mind is to set the recovery model to simple or bulk-logged, assuming that you don't need to keep everything in your transaction log. This could speed up big inserts.
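For example (database and object names here are hypothetical), both suggestions look something like this:

```sql
-- Read without taking shared locks; dirty reads are possible,
-- which is usually acceptable for browsing device logs.
SELECT LogDate, Message
FROM dbo.DeviceLog WITH (NOLOCK)
WHERE DeviceID = 42
ORDER BY LogDate DESC;

-- Reduce transaction log volume by changing the recovery model.
ALTER DATABASE LogDB SET RECOVERY SIMPLE;
-- Or, to keep log backups but minimally log bulk operations:
-- ALTER DATABASE LogDB SET RECOVERY BULK_LOGGED;
```

Note that with the simple recovery model you give up point-in-time restore, which may be fine for log data.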
Hope this helps.
September 28, 2005 at 9:11 pm
Thanks guys! I liked the idea of data partitioning; that should be good performance-wise. But I will have to think about whether it will suit our business requirements. I will also go for the simple recovery model and use NOLOCK.
Also, I am now thinking of spreading the data over 3-4 drives, which should increase the read/write performance of the DB. I will take backups to multiple disk devices so that backups are fast as well.
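Striping the backup across several files on different physical disks lets SQL Server write to them in parallel, which is what shortens the backup window. A sketch, with made-up paths and database name:

```sql
-- Stripe one backup across three files on separate physical drives;
-- SQL Server writes to all three devices in parallel.
BACKUP DATABASE LogDB
TO  DISK = 'E:\Backup\LogDB_1.bak',
    DISK = 'F:\Backup\LogDB_2.bak',
    DISK = 'G:\Backup\LogDB_3.bak'
WITH INIT, STATS = 10;
```

All stripes are needed together to restore, so keep the set on drives with similar reliability.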
September 29, 2005 at 1:16 am
Are you deleting whilst reading new data (e.g. users using the app, running selects)? I assume you plan to do the deletion during a "quiet" time in the database. I only ask because using the NOLOCK hint may sometimes cause you issues: as SQL tries to return data, it can hit rows that have just been deleted and throw an obscure error (which eludes me at the moment, sorry). So if you get strange errors that look like data corruption, it may not actually be corruption. Sorry I'm so vague, been in meetings all day!!