What size of DB does SQL Server REALLY handle?

  • We're currently investigating software for a warehouse management system, in which the vendor's application sits on top of an MS SQL Server database.  Per the vendor, this database will grow to as much as 800GB.

     

    From my experience (a bunch of Sybase, a ton of Oracle, and a little bit of SQL Server), this seems "large" (or perhaps unmanageable) for a SQL Server database running on Windows.  In my experience, SQL Server is traditionally used for MAYBE 60GB databases as backends for pre-packaged applications (the database exists totally unbeknownst to the guys in finance/HR/payroll who bought the application).

     

    Looking at the "official" documentation, one would believe you can run 10TB SQL Server databases with ease.  My own experience on SQL Server doesn't really extend beyond about 100GB, and THAT was a pain on Windows.  800GB on Oracle/Sybase on Unix, no problem... on SQL Server on Windows, er... I don't know.

     

    Not looking to start any "war of the databases part XI", but just looking for words of wisdom based on ACTUAL experience.  No need to quote me the theoretical size of a database from the Microsoft manual.

     

    Thanks,

    John

  • I believe one of the reasons Oracle DBAs do not like SQL Server for VLDBs is the locking behavior. When a user is updating data, Oracle will give you an old image of the data if the same data is selected, but SQL Server will wait until the update completes. With a warehouse, I don't see this being much of an issue except during your data loads. Again, you can optimize your queries to work around this. I don't blame anybody; it is just the mindset an Oracle DBA has compared to a SQL DBA. Apart from that, I have seen very few bugs in SQL Server compared to Oracle. I think you will find a SQL Server database much simpler to manage than an Oracle database.
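
    For example, a minimal sketch of the blocking and the common workaround of the day (table and column names are hypothetical):

        -- Session 1: an open transaction holds locks on the updated row
        BEGIN TRANSACTION;
        UPDATE dbo.Inventory SET Qty = Qty - 1 WHERE ItemID = 42;
        -- (transaction intentionally left open)

        -- Session 2: a default READ COMMITTED read blocks until session 1 commits
        SELECT Qty FROM dbo.Inventory WHERE ItemID = 42;

        -- Era-typical workaround: a dirty read via the NOLOCK hint
        -- (reads uncommitted data, so use with care)
        SELECT Qty FROM dbo.Inventory WITH (NOLOCK) WHERE ItemID = 42;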

  • I have *actual* experience with several databases over 200 GB.  I'm currently working with several holding financial data which are over 400 GB and expected to grow to several terabytes.

    There are a lot of factors involved.  Look into 64-bit SQL Server (which I've never worked with, but would love to).  Look into partitioned tables.  A lot of it depends on database design, of course.  But if you're asking whether SQL Server will choke and die, it won't.  My users query tables with over 100 million rows on a daily basis without a problem.
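
    On SQL Server 2000 the usual partitioning approach was a partitioned view over member tables (native partitioned tables arrived later, with SQL Server 2005); a rough sketch with hypothetical names:

        -- Member tables, one per year, each with a CHECK constraint on the partitioning column
        CREATE TABLE dbo.Trades2003 (
            TradeDate DATETIME NOT NULL
                CHECK (TradeDate >= '20030101' AND TradeDate < '20040101'),
            TradeID   INT      NOT NULL,
            Amount    MONEY    NOT NULL,
            CONSTRAINT PK_Trades2003 PRIMARY KEY (TradeDate, TradeID)
        );
        CREATE TABLE dbo.Trades2004 (
            TradeDate DATETIME NOT NULL
                CHECK (TradeDate >= '20040101' AND TradeDate < '20050101'),
            TradeID   INT      NOT NULL,
            Amount    MONEY    NOT NULL,
            CONSTRAINT PK_Trades2004 PRIMARY KEY (TradeDate, TradeID)
        );
        GO
        -- The view unions the members; the optimizer prunes by the CHECK constraints
        CREATE VIEW dbo.Trades AS
            SELECT TradeDate, TradeID, Amount FROM dbo.Trades2003
            UNION ALL
            SELECT TradeDate, TradeID, Amount FROM dbo.Trades2004;
        GO
        -- This query touches only dbo.Trades2004
        SELECT SUM(Amount) FROM dbo.Trades
        WHERE TradeDate >= '20040301' AND TradeDate < '20040401';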

  • I think that may be the biggest point: design. Poor design can be hidden by the way an application deals with it; make something perform well even with a poor design and you still have a poor design. Although I have crossed and dropped 100+GB, I have no real hands-on time with the larger 500+GB DBs floating around the company, although I have touched them. But if you look at what they did with TerraServer when SQL 7 was first developed, you can see it can be done, and apparently quite well. It is all in your design.

    See if this helps:

    http://www.microsoft.com/sql/techinfo/administration/2000/rosetta.asp

    http://www.microsoft.com/technet/prodtechnol/sql/2000/plan/ssmsam.mspx

    http://www.microsoft.com/sql/techinfo/administration/2000/scalability.asp

    http://www.wintercorp.com/

    MS has a bunch of case studies of VLDB customers on their site. Unfortunately I can't find the link right now. A friend of mine worked with one of the largest OLAP installations on SQL Server, at some 4.5 TB. They had their problems, but not with the size itself; rather with design and implementation. When you know in advance that your DB will grow beyond that 1 TB barrier, the best thing you can do, IMHO, is to hire a consultant with proven experience with databases of that size. That might cost some money, but it should save a lot of frustration.

    Oh, and btw, nobody considers 800 GB a large database these days.

    --
    Frank Kalis
    Microsoft SQL Server MVP
    Webmaster: http://www.insidesql.org/blogs
    My blog: http://www.insidesql.org/blogs/frankkalis/

  • I have 5 years' experience dealing with SQL databases of 100 GB+ (I have one that is currently 600 GB). We are a data warehouse: a mostly read-only environment with many ad-hoc, non-sequential queries. Yes, there can be problems with performance if you don't design well: using filegroups, understanding how your customers will read the data, timing your uploads so customers are not blocked from reading, etc. So far, I have not found a size that I am not comfortable releasing to my customers because of performance issues.
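
    For instance, a minimal filegroup layout (drive letters, sizes, and all names are hypothetical):

        -- Spread data and indexes across filegroups on separate spindles
        CREATE DATABASE WarehouseDB
        ON PRIMARY
            (NAME = wh_sys,   FILENAME = 'D:\Data\wh_sys.mdf',   SIZE = 1GB),
        FILEGROUP FG_Data
            (NAME = wh_data1, FILENAME = 'E:\Data\wh_data1.ndf', SIZE = 100GB),
            (NAME = wh_data2, FILENAME = 'F:\Data\wh_data2.ndf', SIZE = 100GB),
        FILEGROUP FG_Index
            (NAME = wh_ix1,   FILENAME = 'G:\Data\wh_ix1.ndf',   SIZE = 50GB)
        LOG ON
            (NAME = wh_log,   FILENAME = 'H:\Log\wh_log.ldf',    SIZE = 20GB);
        GO
        USE WarehouseDB;
        GO
        -- Keep the big table on the data filegroup, its nonclustered indexes elsewhere
        CREATE TABLE dbo.FactSales
            (SaleID INT NOT NULL, SaleDate DATETIME NOT NULL, Amount MONEY NOT NULL)
            ON FG_Data;
        CREATE NONCLUSTERED INDEX IX_FactSales_SaleDate
            ON dbo.FactSales (SaleDate) ON FG_Index;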

    On the other hand, when you get to databases of this size, you have to consider your backup system. We currently use LTO, and with SQLLiteSpeed I am able to make my backups/restores to disk and then write to tape.
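
    LiteSpeed uses its own compressed-backup commands, but the disk-then-tape pattern looks roughly like this in plain native T-SQL (paths are hypothetical):

        -- Full backup to disk first; the file is then swept to tape by the LTO jobs
        BACKUP DATABASE WarehouseDB
            TO DISK = 'E:\Backups\WarehouseDB_full.bak'
            WITH INIT, STATS = 10;
        -- Verify the backup is readable before it leaves disk
        RESTORE VERIFYONLY FROM DISK = 'E:\Backups\WarehouseDB_full.bak';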

    Hope this helps,

    Michelle

  • Same here: we have datasets as large as 600GB, and one that is around 2.5TB, all on SQL Server. It's like anything else; when you get to those sizes you have to manage filegroups, files, index placement and the like. No system handles 800GB out of the box; they all have to be put on appropriate hardware and set up correctly.

  • Dare I say... You MIGHT want to contact your Microsoft rep. They MAY be willing to help you... I know that here, Oracle used to not exist and we were a solid DB2 and SQL Server shop, and now all of a sudden Oracle is here in massive quantities. Our Microsoft reps are very nervous about losing us to them and have said repeatedly that they are more than willing to assist us with a large implementation. They seem very confident in Win2003 and SQL Svr 2000 64-bit for large databases.

  • Of the 25 SQL Servers that I manage in the healthcare industry, 5 exceed 1 terabyte in size. The smallest of those contains ~8.5 billion rows of data across 4 databases and around 200 tables.

    Certainly the work to manage this volume of data increases with size and complexity as you then have to manage the file system and the hardware layer much more intensively, but this should be standard practice in any environment.

    In summary, just as with any other piece of enterprise software, the real question is do you and/or your staff have the expertise to make this happen, not whether or not the software will handle the load.

  • At JD Edwards, our warehouse ran on SQL Server with 1 app server and 2 web servers, and we supported a 600+GB database very well. It ran much better than the 12-server PeopleSoft one we merged into.

    I had a friend in Colorado who ran a 1.3TB SQL Server database on v7, later 2000. I have heard that there are 6+TB databases, but I don't have references.

  • We have 2 production databases at 600GB running on Datacenter. I would suggest configuring RAID properly (we use 1+0) and filegroups, as well as tempdb (you may need many more than 1 data file), and you might also want to design a reindexing strategy (for OLTP) in case indexes get fragmented.
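
    A rough sketch of both, using the SQL Server 2000-era commands (paths and object names are hypothetical):

        -- Add tempdb data files (one file per CPU was the rule of thumb of the day)
        ALTER DATABASE tempdb
            ADD FILE (NAME = tempdev2, FILENAME = 'T:\tempdb2.ndf', SIZE = 4GB);
        GO
        -- Check fragmentation, then rebuild or defragment as needed
        DBCC SHOWCONTIG ('dbo.FactSales') WITH FAST;   -- scan density / logical fragmentation
        DBCC DBREINDEX ('dbo.FactSales', '', 90);      -- offline rebuild of all indexes, fill factor 90
        DBCC INDEXDEFRAG (WarehouseDB, 'dbo.FactSales', 'IX_FactSales_SaleDate');  -- online defrag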

  • I manage several SQL Servers with 100GB - 600GB databases. Obviously the hardware has to be up to the task. You're not going to get Oracle to do it on a couple of 300GB 5400RPM IDE drives either, so don't compare a serious SQL Server machine with something you'd give a secretary to run Office.

    But I think part of the perception of Oracle being superior to SQL Server has to do with SQL Server's ease of use. An Oracle shop is going to have a very well-paid DBA or DBAs, maybe a team of data modelers and analysts, and they're going to keep the developers under tight control as far as design quality and standards go. A SQL Server shop might get by with having the developers occasionally babysit the server, but mostly create databases as easily as in Access (and about as well-designed). I'm the DBA for a company with 10 SQL Servers and constant development, and DBA duties only account for about 10-15% of my time. Several of the people I work with have come from Oracle shops and wouldn't be upset if we switched, but they agree that if we switched we would either need some high-priced full-time DBAs or the servers would die from neglect, and developers would not be creating and modifying databases at will. So we're a little on the sloppy side, but the developers like it that way and they've got me outnumbered.

    If you decide from the start that you're going to have a certain level of quality, some serious DBA and data modeling talent, and appropriate hardware, I don't think you'll find a big performance margin for either brand, certainly not at the <1TB level. I would recommend looking at the software costs for similar configurations (for SQL Server this size warehouse probably needs the 64-bit version), and the payroll costs. You'll probably have to pay more for good Oracle people, but you might be getting your money's worth. If you've already got a significant level of Oracle or SQL Server expertise, it will be expensive to go the other way.

    Either way, don't let those damn Visual Basic programming varmints take over.

  • Here is a link (from a year ago) that documents the largest databases sitting on SQL Server.  While this is a year old, and I personally know of many more databases in this range that are not listed, this list does prove that SQL Server can handle multiple terabytes of data.

    http://www.microsoft.com/sql/evaluation/compare/wintercorp.asp

    I work at a company specializing in backup and recovery (Imceda Software - makers of LiteSpeed for SQL Server) and I run into multi-terabyte databases every day.  I would have no problem using SQL Server to handle a database of any size.

     
