I have some questions here that I would like clarification on:
I have a very large database, 1.8 terabytes in size. Could you please let me know of any best practices?
- At the moment the transaction log is stored on one disk, whereas the data files are spread across multiple filegroups. Is this the best approach? Should the log be spanned across several files, and if I have 8 filegroups, how many transaction log files would be ideal?
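By spanning the log I mean something along these lines (the database, file and path names below are just placeholders):

-- Hypothetical example of adding a second log file so the log
-- is spread over two drives (all names and sizes are made up)
ALTER DATABASE MyBigDB
ADD LOG FILE
(
    NAME = MyBigDB_Log2,
    FILENAME = 'F:\SQLLogs\MyBigDB_Log2.ldf',
    SIZE = 10GB,
    FILEGROWTH = 1GB
);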
- I also need to create a copy of the live database in a test environment. What is the best way of moving a database this size: is it better to detach the database and copy files that huge across, or to take a backup of the database (or of the filegroups concerned) and restore it onto the test server?
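For the backup/restore route, this is roughly what I had in mind (database names, logical file names and paths are just placeholders):

-- Full backup on the live server
BACKUP DATABASE MyBigDB
TO DISK = 'G:\Backups\MyBigDB_Full.bak'
WITH INIT, STATS = 10;

-- Restore on the test server, relocating the files
RESTORE DATABASE MyBigDB_Test
FROM DISK = 'G:\Backups\MyBigDB_Full.bak'
WITH MOVE 'MyBigDB_Data' TO 'E:\SQLData\MyBigDB_Test_Data.mdf',
     MOVE 'MyBigDB_Log'  TO 'F:\SQLLogs\MyBigDB_Test_Log.ldf',
     RECOVERY, STATS = 10;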
- Also, what should the backup and restore strategy look like for a database this size? I was thinking of using log shipping.
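As a starting point I had in mind a weekly full backup, nightly differentials and frequent log backups, i.e. something like this (file names are placeholders):

BACKUP DATABASE MyBigDB TO DISK = 'G:\Backups\MyBigDB_Full.bak' WITH INIT;
BACKUP DATABASE MyBigDB TO DISK = 'G:\Backups\MyBigDB_Diff.bak' WITH DIFFERENTIAL, INIT;
BACKUP LOG MyBigDB TO DISK = 'G:\Backups\MyBigDB_Log.trn' WITH INIT;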
- I will also be looking at archiving old data. I was thinking of BCPing the tables out and storing the files somewhere, or of creating an archive database on the fly and copying the tables that need to be archived into it with a simple SELECT INTO. If I do it that way, would the indexes stay intact? I have a feeling the indexes wouldn't be copied into the new database. How can I copy a table's indexes across when doing a SELECT INTO?
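For example (all database, table and index names below are made up just to illustrate the question):

-- SELECT ... INTO creates the new table and copies the rows,
-- but as far as I understand it no indexes come across with it
SELECT *
INTO ArchiveDB.dbo.Orders_2004
FROM LiveDB.dbo.Orders
WHERE OrderDate < '20050101';

-- so do I then have to recreate them by hand on the archive copy?
CREATE CLUSTERED INDEX IX_Orders_2004_OrderID
    ON ArchiveDB.dbo.Orders_2004 (OrderID);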
- The other question is about performance tuning: what areas am I missing under the following headings? (There is a small example of the kind of checks I am already running after this list.)
Locks
Disk reads/writes
Recompiling stored procedures
Looking at how queries are written
Connections
Looking at performance counters
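To make some of these concrete, this is the sort of query I have been running so far (assuming SQL Server 2005 or later, using the standard DMVs):

-- Reads, writes and I/O stalls per database file
SELECT DB_NAME(database_id) AS database_name,
       file_id,
       num_of_reads, num_of_writes,
       io_stall_read_ms, io_stall_write_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL);

-- A couple of the performance counters mentioned above
SELECT object_name, counter_name, instance_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name IN ('SQL Re-Compilations/sec', 'Lock Waits/sec');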
- The databases are located on a SAN storage setup. What would be the main things to look out for here? I need to boost performance!
I look forward to your helpful replies, and I will reply to any follow-up questions ASAP.
Kindest Regards,
John Burchel (Trainee Developer)