January 30, 2009 at 2:46 pm
Hi,
Got an application spread across 38 databases, 430 gig of used space, 660 gig of space reserved on the various disks. We are debating a ground-up database rewrite versus an application re-skinning.
The biggest single database I work with is ~50 gig. Backups/restores with Redgate are not that bad.
Management wants to stick all existing data (massaged into whatever format we have after the rewrite) into a new database. One database.
Anyone out there have experience with a half terabyte database? Is that too big?
Just curious. If you know of a site that talks about this, please let me know.
Thanks,
Steve
January 30, 2009 at 3:30 pm
2 TB - but just because it can be done doesn't mean that it should be. 🙂
So, as with all things, there needs to be some engineering analysis to understand what the compelling reasons would be for combining them into one database versus multiple. A couple of questions to consider:
1. Are there different utilizations for the tables in the DB?
2. Are there different backup / restore requirements for the tables in the DB?
3. What are the DR requirements for all the tables? Are they consistent?
4. Will there be a necessity to split tables from the DB and put them onto different disks (this could, and possibly should, be accomplished with files / filegroups - see the sketch below)?
I'm sure there are more, but my brain is about empty at this point... Late in the day my time... :D (but it is Friday)
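If it helps, here is a rough T-SQL sketch of the files / filegroups idea from question 4. The database, file, path, and index names are made up purely for illustration:

-- Add a filegroup, and a file for it on a separate physical disk
ALTER DATABASE StorageRental ADD FILEGROUP FG_History;
ALTER DATABASE StorageRental
    ADD FILE (NAME = N'StorageRental_History1',
              FILENAME = N'E:\SQLData\StorageRental_History1.ndf',
              SIZE = 50GB)
    TO FILEGROUP FG_History;

-- Move an existing table onto the new filegroup by
-- rebuilding its clustered index there
CREATE CLUSTERED INDEX CIX_OrderHistory
    ON dbo.OrderHistory (OrderDate)
    WITH (DROP_EXISTING = ON)
    ON FG_History;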
David
@SQLTentmaker
“He is no fool who gives what he cannot keep to gain that which he cannot lose” - Jim Elliot
January 30, 2009 at 4:01 pm
Sorry, I should have added details.
The information has to do with an online storage rental app. We have all history/transactions since the beginning of time (well, our time). All data is either directly transaction related or report data.
Management wants 1 database for simplicity; I would like to get some pros and cons beyond my personal preferences. I am somewhat con since, as you said, just because you can doesn't mean you should.
One nice thing about being in one database is that it would be easier to do that whole PK/FK thing, which (sigh) the app sorely lacks at this point in time. I am somewhat advocating that active contracts go in one db, moved-out contracts in another, and report information in a third. That way, the active data tables would be much smaller than with everything in one.
Thanks for the response.
Steve
January 30, 2009 at 5:01 pm
Sounds like a perfect case for a single database with table partitioning across multiple disks and data compression on that older, less used stuff.
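On 2008 Enterprise you can even compress just the partitions holding the old rows. A rough sketch, assuming dbo.Orders is already partitioned and partition 1 holds the cold data (names are illustrative, not from the original app):

-- Page-compress only the oldest partition; leave the hot partitions alone
ALTER TABLE dbo.Orders
    REBUILD PARTITION = 1
    WITH (DATA_COMPRESSION = PAGE);

-- See how much space that bought back
EXEC sp_spaceused N'dbo.Orders';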
January 31, 2009 at 2:45 pm
I agree with the partitioning post. Lay out multiple disks, and separate the load of your largest tables across multiple filegroups containing multiple files.
For example:
Orders table has data from 2000 to now.
The data from 2000 to 2006 is not accessed often. You could create a filegroup to hold that range and partition the table so the 2000-2006 rows land in that filegroup.
You then create other filegroups for the more frequently accessed data logically separated on other physical volumes. Your data from 2007 to now would then be on these volumes.
The data from 2000-2006 could be placed on lower performing disks, while your active data filegroups would be on better performing disk, for example.
There are a multitude of ways to do it, but this is certainly possible for your situation.
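Roughly, in T-SQL, the Orders example would look like this. The filegroup, table, and column names are placeholders, and a real design would likely use one boundary value per year:

-- Rows before 2007-01-01 go to FG_Archive, 2007-onward to FG_Current
CREATE PARTITION FUNCTION pfOrderDate (datetime)
    AS RANGE RIGHT FOR VALUES ('2007-01-01');

CREATE PARTITION SCHEME psOrderDate
    AS PARTITION pfOrderDate TO (FG_Archive, FG_Current);

-- Create (or rebuild) the table on the partition scheme
CREATE TABLE dbo.Orders (
    OrderID   int      NOT NULL,
    OrderDate datetime NOT NULL,
    Amount    money    NOT NULL
) ON psOrderDate (OrderDate);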
January 31, 2009 at 3:49 pm
500 GB really isn’t that large a DB. I manage several that are much larger on SQL 2000, and SQL 2008 has many features to make large databases easier to manage.
As mentioned in another post, table partitioning and data compression in SQL Server 2008 Enterprise Edition can make it a lot easier to manage. Depending on the type of data you have, table compression can reduce the space required by up to 80%, so the historical data is a good candidate. When the data is compressed, less IO is required to get it from disk. The backup compression feature will also reduce the amount of disk required for backups.
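Backup compression, for example, is just a WITH option (the path here is made up):

-- SQL 2008 Enterprise: compresses the backup stream as it is written
BACKUP DATABASE StorageRental
    TO DISK = N'X:\Backups\StorageRental.bak'
    WITH COMPRESSION, CHECKSUM, STATS = 10;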
You should also look into using SQL Server 64-bit to allow easy use of larger amounts of memory. As always, more memory means SQL Server has to read less from disk, and it is one of the cheapest ways to increase performance.
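On the memory side, you would typically cap SQL Server so the OS keeps some headroom. A sketch for a 32 GB box (the 28 GB figure is only an example):

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
-- Leave roughly 4 GB for the OS on a 32 GB server
EXEC sp_configure 'max server memory (MB)', 28672;
RECONFIGURE;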
A big advantage of one database is that you only have to maintain procedures and functions in one database, and don’t have to worry about them getting out of sync.
Another approach you should consider is having two databases: a transactional database with a traditional OLTP design, and a data mart with a star schema design. This lets you move data-intensive reporting queries off the server where you are processing transactions, and gives you schemas optimized separately for transactions and for reporting/analysis. Advantages of a data mart are that queries are much easier to write, they are not contending with transactions, and it is easy to layer Analysis Services on top for use by business analysts.
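A bare-bones star schema for rental data might look like the sketch below; the table and column names are invented for illustration, not taken from your app:

-- One dimension table per business entity...
CREATE TABLE dbo.DimCustomer (
    CustomerKey  int IDENTITY(1,1) PRIMARY KEY,
    CustomerID   int           NOT NULL,  -- business key from the OLTP side
    CustomerName nvarchar(100) NOT NULL
);

-- ...and a central fact table keyed to the dimensions
CREATE TABLE dbo.FactRental (
    DateKey      int   NOT NULL,
    CustomerKey  int   NOT NULL REFERENCES dbo.DimCustomer (CustomerKey),
    UnitKey      int   NOT NULL,
    RentalAmount money NOT NULL
);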
February 2, 2011 at 6:39 pm
Hi, I work with SQL Server 2000 - 2008 databases for a loyalty application. The largest has approx 10,000,000 customers and 1,500,000,000 transactions, and is about 1.3 terabytes in size. It's a single database with logs and data split across separate RAID disks, on 8 CPUs and 16 gig of memory. This seems to work acceptably. Can I add to this discussion, though? I have been asked for feedback on what I would recommend if this increased 5 - 10 fold in size. Could SQL Server handle it? I don't have much experience with 2008 R2 or with splitting tables across disks. Are the performance increases significant? Does anyone know of any SQL installations that big, or issues they think we might strike?