December 30, 2007 at 7:09 am
Hello,
I have been working as a SQL Server 2000/2005 database administrator (small-medium databases). In the near future I am going to work as a DBA for a large database on SQL Server 2005 (200-300 GB and over), but I have no experience with that.
Do you have any suggestions, recommendations, or useful docs, books, web links, or anything else for administering large databases?
Thanks a lot.
December 30, 2007 at 11:14 am
I don't know that there's a good collection anywhere that you can reference. Lots of blogs and short notes/articles around. Search "VLDB SQL Server".
You will probably have to consider filegroups and multiple spindles/arrays to get good performance. I had a 700GB database and it wasn't too different, just that things took longer, and we had issues finding space for restores to test and for development. We had to be more careful about queries. Cross joins can kill you.
These days you can still get 300GB on one disk, so I'm not sure this is a VLDB. More a large database.
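To make the filegroup idea concrete, a minimal sketch (the database name, filegroup name, and path below are placeholders, not from the post):
[code]
-- Add a filegroup, then put its file on a separate spindle/array for better I/O.
ALTER DATABASE BigDB ADD FILEGROUP FG_Data2;

ALTER DATABASE BigDB
ADD FILE (
    NAME = N'BigDB_Data2',
    FILENAME = N'F:\SQLData\BigDB_Data2.ndf',
    SIZE = 50GB,
    FILEGROWTH = 5GB
) TO FILEGROUP FG_Data2;
[/code]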
December 30, 2007 at 3:50 pm
Hello,
thanks for your contribution.
I have already considered spreading files across different logical disks for small-medium databases, and will do the same for the VLDB.
I will look into any suggestions of interest.
Any other contribution is welcome.
Thanks a lot.
December 31, 2007 at 7:16 am
Another thing that becomes important is efficient querying. Time spent on Query Tuning might be a good investment.
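Not from the original post, but a quick sketch of the kind of measurement that makes query tuning concrete (the query and table name are placeholders):
[code]
-- Report logical reads and CPU/elapsed time for each statement that follows.
SET STATISTICS IO ON;
SET STATISTICS TIME ON;

SELECT TOP (100) *           -- placeholder query to tune
FROM dbo.SomeLargeTable;     -- placeholder table name

SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;
[/code]
Comparing logical reads before and after an index or query change is a simple, repeatable way to confirm an improvement.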
December 31, 2007 at 10:03 am
Since it is going to be a large DB, make sure you keep an eye on the indexes. Keep an eye on statistics too.
I came across an issue where a stored proc that was taking less than 50 ms suddenly started taking more than 500 ms. At first it had me stumped. Then I marked the stored proc for recompile and it was all right again.
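For reference, marking a proc for recompile and refreshing statistics look roughly like this; the proc and table names are placeholders, not the ones from my system:
[code]
-- Mark the procedure so its plan is rebuilt on the next execution.
EXEC sp_recompile N'dbo.usp_GetOrders';

-- Refresh statistics on the underlying table so the optimizer sees current data.
UPDATE STATISTICS dbo.Orders WITH FULLSCAN;
[/code]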
Try bringing in a stored proc standard so that the DB is not messed up. Try to make the programmers understand that every EXTRA read done on a table is counterproductive for performance.
One more thing is to make sure that you have the right tools to take compressed backups. Or else you will have very large backups that are difficult to store and transport.
-Roy
December 31, 2007 at 10:41 am
I mentioned it, but Roy's post made me think about it again. Be sure you argue for a development/test system that's the same size. It becomes really hard to work on things if development is 1/10 the size of production.
December 31, 2007 at 1:13 pm
Make sure you have backups. Make sure the backups are being written to tape. Make sure the backups are not being stored locally.
Backup everything.
Make sure the production database is using the FULL recovery model. Make sure you are backing up the logs every 15 minutes.
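A minimal sketch of that setup (the database name and paths are placeholders):
[code]
-- Switch to the FULL recovery model so log backups are possible.
ALTER DATABASE BigDB SET RECOVERY FULL;

-- Full backup on a schedule, then log backups every 15 minutes.
BACKUP DATABASE BigDB TO DISK = N'X:\Backups\BigDB_full.bak' WITH CHECKSUM, INIT;
BACKUP LOG BigDB TO DISK = N'X:\Backups\BigDB_log.trn' WITH CHECKSUM;
[/code]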
The theme here is backups (did ya get that?)
There is an 85-99.5% probability you will do something wrong. Your question leads me to believe you are nervous (which is better than being over-confident, as you'll be more careful).
So, to relax your nerves: make sure you have backups and you can recover.
The #1, most important job of any DBA is to be able to protect the data (regardless of size). You can't protect the data if you can't recover it.
If the data is not being backed up and/or logs are not being backed up and/or the backups are being stored locally, these are areas that need your immediate attention. My advice is to touch nothing until you resolve these issues.
January 1, 2008 at 2:06 am
Speaking as someone with a DB on its way to 4TB (growing by around 35-40 GB a week now :w00t: ), I have to agree with everything mentioned above.
You might want to look at table partitioning too, both for performance and, when it warrants it, for reducing backup times (filegroup backups are really good for this); see the sketch at the end of this post.
Capacity planning and early provisioning are vital when a system gets large, and as already mentioned you need a representative DEV and QA environment so you're not comparing apples to oranges.
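A rough sketch of monthly range partitioning on SQL Server 2005 (all names, dates, and filegroups here are placeholders):
[code]
-- Boundary values split the table into monthly ranges.
CREATE PARTITION FUNCTION pf_MonthlyRange (datetime)
AS RANGE RIGHT FOR VALUES ('2007-11-01', '2007-12-01', '2008-01-01');

-- Everything on PRIMARY for simplicity; in practice you'd map partitions
-- to separate filegroups, which is what makes filegroup backups useful.
CREATE PARTITION SCHEME ps_MonthlyRange
AS PARTITION pf_MonthlyRange ALL TO ([PRIMARY]);

CREATE TABLE dbo.SalesHistory (
    SaleDate datetime NOT NULL,
    Amount   money    NOT NULL
) ON ps_MonthlyRange (SaleDate);
[/code]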
January 1, 2008 at 3:29 am
And your QA environment (at the least) should have close to the same amount of data as production. This is so that you don't get queries that run fine in dev, fine in QA, and slower than molasses in a Canadian winter on the prod box.
If I had R10 for every time I've heard one of my developers complain "But it ran fine in Dev", I could retire.
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
January 1, 2008 at 4:45 am
Hi,
check out Paul Randal's blog at SQLSkills.com.
"Keep Trying"
January 1, 2008 at 10:32 am
Sorry, but 300GB is not a large database. I seem to recollect that the definition of a large database is one you can't back up within a 24-hour window.
Database size is not dictated by GB alone; there are all sorts of factors which interact. A database with many tables of over 500 million rows can present some different challenges, as do tables which grow quickly (say, several million rows a day) or a database which grows 50GB per day.
Each situation will have different solutions, workarounds, and techniques; there's no magic solution to apply to a 1TB database, for instance. Your maintenance window, if you have one, and available offline time or down time can dictate your actions, and we've not even considered hardware, especially memory and disk I/O throughput, which can have a significant impact too.
The basic things you do for any database should largely remain constant regardless of size.
[font="Comic Sans MS"]The GrumpyOldDBA[/font]
www.grumpyolddba.co.uk
http://sqlblogcasts.com/blogs/grumpyolddba/
January 1, 2008 at 3:35 pm
Yep, definitely horses for courses 🙂
January 2, 2008 at 4:35 pm
Hello,
thanks to everybody for your useful contributions, which will help me plan and organise my new adventure.
Some of your suggestions were already in use; others will be analysed in depth and applied.
January 3, 2008 at 7:54 am
Get the company to invest in Quest Software; it's probably the best software I've bought for SQL.
I can see the entire status of the database in a graphical display (CPU, memory, reads, writes) and it alerts me when there are problems.
Also, download the SQLIOStress test tool from MS and run it on the system to see what the status of the I/O is; if the I/O throughput is low, it will be hard to improve performance without new hardware.
January 3, 2008 at 8:28 am
Consider third-party backup compression software such as Idera's SQLsafe (we use it) or the offerings from Red Gate or Quest. Our 280 GB backup only takes 30 minutes and uses 45 GB of space (compared to 90+ minutes with a native SQL backup).
We back up all the DBs to disk, then write them to tape afterwards.
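If third-party compression isn't an option, striping a native backup across several files on separate drives can also shorten the backup window; a minimal sketch with placeholder names and paths:
[code]
-- The backup is striped across both files in parallel; putting them on
-- separate drives shortens the backup window.
BACKUP DATABASE BigDB
TO DISK = N'F:\Backups\BigDB_1.bak',
   DISK = N'G:\Backups\BigDB_2.bak'
WITH CHECKSUM, STATS = 10;
[/code]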