July 10, 2007 at 8:15 am
Greetings all.
I was recently asked to review a vendor-installed SQL Server instance for a colleague who was having trouble with backups and maintenance plans. While poking around on the server, I noticed that the vendor had created over 50 separate data files for the primary user database. The ratio of tables to data files is a little over 2 to 1 (about 120 tables to 55 filegroups). This database supports a document imaging system that is partially in production, and it currently has about 180 GB reserved for data. Only about 45% of that space is actually in use by the data and indexes.
So my question is: can anyone tell me why it would be an advantage to have so many separate files and filegroups, especially since all of the data files sit on the same logical drive? This database will probably grow into the terabyte range over the lifecycle of the application hardware, so my initial thought was that maybe they did this for scalability, since it would make it easier to spread the data across multiple drives if that were ever needed. Still, creating this many files seems ponderous and just makes maintenance and monitoring that much more difficult. I'm also wondering whether overusing data files hurts overall server performance, since this is the primary user database for the application.
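For anyone who wants to eyeball a similar layout, a query along these lines (a rough sketch, run in the context of the database in question) will list each file with its filegroup, size, and free space:

-- List every file with its filegroup, size, and free space
-- (sizes in sys.database_files are in 8 KB pages, so divide by 128 for MB)
SELECT fg.name AS filegroup_name,
       df.name AS logical_file_name,
       df.physical_name,
       df.size / 128 AS size_mb,
       (df.size - FILEPROPERTY(df.name, 'SpaceUsed')) / 128 AS free_mb
FROM sys.database_files AS df
LEFT JOIN sys.filegroups AS fg
       ON df.data_space_id = fg.data_space_id
ORDER BY fg.name, df.name;

(The LEFT JOIN is there because the log file has no filegroup.)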
Any thoughts or opinions are always appreciated.
My hovercraft is full of eels.
July 10, 2007 at 11:41 am
Sounds a little excessive, but if the aim was to minimize restore time, that could explain it.
Hopefully that was the purpose; if it wasn't, you are just carrying a huge overhead in terms of manageability.
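For reference, the payoff would be a piecemeal (filegroup-level) restore, which in SQL 2005 looks roughly like this; the database, filegroup, and backup file names here are invented for the example:

-- Bring the primary filegroup online first, then restore the rest as time allows
RESTORE DATABASE ImagingDB
    FILEGROUP = 'PRIMARY'
    FROM DISK = 'X:\Backups\ImagingDB_primary.bak'
    WITH PARTIAL, NORECOVERY;

RESTORE DATABASE ImagingDB
    FILEGROUP = 'FG_Documents'
    FROM DISK = 'X:\Backups\ImagingDB_fg_documents.bak'
    WITH RECOVERY;

Under the full recovery model you would also need log restores to bring the later filegroups consistent, but that's the general shape of it.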
Just my $0.02
* Noel
July 10, 2007 at 12:01 pm
Not to cast stones or aspersions, especially since I don't know any details about this situation, but:
At a keynote address back in February, at a Linux conference, the (paraphrased) question was posed: "Why are so many poorly designed systems rolled out by vendors?" The answer has two parts: 1. The buyer/consumer/user is unable to distinguish the internals of two competing systems. 2. Good systems cost more to build than bad systems do. The corollary to #2 is that the highest-priced system sets the curve. In essence, when the end user cannot tell the difference between a bad system (a machine hog) and a good one, and both can achieve the same objectives (the resulting differences don't matter when choosing system A or system B), the system with the higher profit margin can win on price.
I suspect that, since individual 'pieces' of the system can be quite sizable, allowing for dividing and conquering may have been a design principle. Also, perhaps the designer is a former LISP programmer? There are lots of possible reasons; it's impossible to tell without knowing the product or the authors (I'M NOT ASKING!!), but even good(-sounding) design principles can produce poor systems.
Steve
July 11, 2007 at 10:21 am
Thanks for the responses.
I suspect this may be a combination of the factors Steve alluded to in his post. Kind of surprising, since this is a large, well-known IT corporation.
Regardless, since this is attached to our new SAN, the DBA was able to get enough space allocated for full DB backups in pretty short order. The vendor didn't even consider this when the system was set up. I think it's kind of funny that they can install this thing, send us a bill, and then just walk away without setting up any kind of maintenance or backup procedures, or even making sure space was allocated for backups. It's going to be interesting to see how this beast performs when it's fully in production. I'm just glad I'm not the one supporting it.
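For anyone in the same spot, a stock full backup is all it takes once the space exists; something like this, with the database and path names as placeholders:

-- Basic full backup to the SAN-backed share, with page checksums verified
BACKUP DATABASE ImagingDB
    TO DISK = 'X:\Backups\ImagingDB_full.bak'
    WITH INIT, CHECKSUM, STATS = 10;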
My hovercraft is full of eels.
July 12, 2007 at 1:19 am
Similar issue here... our developers went to town on table and index partitioning. Their first solution was a one-to-one ratio of partitions to files. I had them strike a balance: keeping the more granular partitions, but storing those partitions in fewer files. I only have 8 files to deal with now. SQL 2005 does a great job of using partitioning for performance, so that was their aim, but from an admin's point of view it's a pain in the neck. I've yet to complete the configuration for individual filegroup backups and restores. Makes it fun trying to move the database back to dev or test! A sketch of the kind of mapping I mean is below.
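To illustrate the compromise (all object and filegroup names here are invented, and the filegroups must already exist), a partition function with many boundaries can be mapped onto just a few filegroups by repeating them in the scheme:

-- Eight monthly partitions spread round-robin across only four filegroups
CREATE PARTITION FUNCTION pf_Monthly (datetime)
AS RANGE RIGHT FOR VALUES
    ('2007-01-01', '2007-02-01', '2007-03-01', '2007-04-01',
     '2007-05-01', '2007-06-01', '2007-07-01');

CREATE PARTITION SCHEME ps_Monthly
AS PARTITION pf_Monthly
TO (FG1, FG2, FG3, FG4, FG1, FG2, FG3, FG4);

Seven boundary values give eight partitions, so the scheme names eight filegroup slots, but only four distinct filegroups (and files) to administer.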
Todd Carrier
MCITP - Database Administrator (SQL 2008)
MCSE: Data Platform (SQL 2012)