May 2, 2015 at 11:25 am
Comments posted to this topic are about the item Complete Definitions
May 2, 2015 at 11:58 am
Such reliance on defaults never ceases to amaze me. First thought is ignorance and incompetence: finding databases on the OS drive when the server has 15K spinners for the data and log is just bonkers, full recovery with no log backups, and so on. The funny thing about your example is that the other settings specified are most likely the default values anyway.
😎
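That last one is easy to check for, by the way. Something along these lines (a sketch; it assumes you can read msdb, and the 24-hour threshold is arbitrary) flags FULL-recovery databases with no recent log backup:

-- Find databases in FULL recovery whose last log backup is missing or stale.
SELECT d.name,
       d.recovery_model_desc,
       MAX(b.backup_finish_date) AS last_log_backup
FROM sys.databases AS d
LEFT JOIN msdb.dbo.backupset AS b
       ON b.database_name = d.name
      AND b.type = 'L'   -- 'L' = transaction log backup
WHERE d.recovery_model_desc = 'FULL'
GROUP BY d.name, d.recovery_model_desc
HAVING MAX(b.backup_finish_date) IS NULL
    OR MAX(b.backup_finish_date) < DATEADD(HOUR, -24, GETDATE());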
May 4, 2015 at 6:55 am
It's not just the defaults. Almost every database I come across has a single PRIMARY file. I always separate the system tables out like so:
USE master;
GO
-- Note: the file paths must already exist.
CREATE DATABASE x
ON PRIMARY
    (NAME = SysTables,
     FILENAME = '<FilePath>\SysTables.mdf',
     SIZE = 5MB,
     FILEGROWTH = 1MB),
FILEGROUP y
    (NAME = y,
     FILENAME = '<FilePath>\y.ndf',
     SIZE = 50MB,
     FILEGROWTH = 10MB);
GO
ALTER DATABASE x MODIFY FILEGROUP y DEFAULT;
GO
Changing the file group default from PRIMARY ensures that the user objects stay separate from the system objects.
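To confirm the change took, a quick look at sys.filegroups from inside the new database shows which filegroup is now the default:

USE x;
GO
-- is_default = 1 should now be on filegroup y rather than PRIMARY.
SELECT name, is_default
FROM sys.filegroups;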
May 4, 2015 at 8:59 am
RonKyle (5/4/2015)
It's not just the defaults. Almost every database I come across has a single PRIMARY file. I always separate the system tables out like so: [...] Changing the file group default from PRIMARY ensures that the user objects stay separate from the system objects.
Why or how does this matter? I understand what you are doing, but not how it benefits the DBA or the system.
On Steve's main point, I have seen a number of vendor-provided scripts that fail due to incorrect assumptions, but trying to get your vendors to improve is almost as difficult as putting a man on the moon.
Dave
May 4, 2015 at 9:11 am
Why or how does this matter?
It makes the database a little more robust because isolating the system tables from the data tables and indexes reduces the chance that the system tables will be in a corrupt file group. DBCC checks can be made at the file group level, so in a large database having multiple file groups makes the check easier (even possible for very large databases). Databases can be backed up and restored by file group. Having a system backup that can easily and quickly be restored helps you get large databases online more quickly, at least for querying, while the other files are being restored.
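To make those three points concrete, with the layout sketched earlier (names and paths are illustrative, not prescriptive), the per-filegroup check, backup, and piecemeal restore look roughly like this:

USE x;
GO
-- Check just the user filegroup instead of the whole database.
DBCC CHECKFILEGROUP (N'y') WITH NO_INFOMSGS;
GO
-- Back up a single filegroup.
BACKUP DATABASE x
    FILEGROUP = 'y'
TO DISK = '<BackupPath>\x_y.bak';
GO
-- Piecemeal restore: bring PRIMARY online first, restore the rest later.
RESTORE DATABASE x
    FILEGROUP = 'PRIMARY'
FROM DISK = '<BackupPath>\x_primary.bak'
WITH PARTIAL, NORECOVERY;
RESTORE LOG x FROM DISK = '<BackupPath>\x_log.trn' WITH RECOVERY;
-- Anything in PRIMARY is now queryable while filegroup y is still being restored.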
May 4, 2015 at 9:24 am
djackson 22568 (5/4/2015)
RonKyle (5/4/2015)
It's not just the defaults. Almost every database I come across has a single PRIMARY file. I always separate the system tables out like so: [...] Changing the file group default from PRIMARY ensures that the user objects stay separate from the system objects.
Why or how does this matter? I understand what you are doing, but not how it benefits the DBA or the system.
On Steve's main point I have seen a number of vendor provided scripts that fail due to incorrect assumptions, but trying to get your vendors to improve is almost as difficult as putting a man on the moon.
Actually, when given the opportunity I would also create a separate file group for the data and make it the default file group. I would go even further, if it could benefit the system, by creating a separate file group for nonclustered indexes and another for indexed views.
One benefit is that only system objects reside in the primary file group, including the definitions of views, stored procedures, functions, and other database objects. The file groups for the data, indexes, and indexed views can then be moved to separate spindles or SSDs if available.
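As a rough sketch of the index part (the filegroup, table, and index names here are invented for illustration):

-- Add a filegroup for nonclustered indexes and give it a file.
ALTER DATABASE x ADD FILEGROUP Indexes;
ALTER DATABASE x ADD FILE
    (NAME = Indexes1,
     FILENAME = '<FilePath>\Indexes1.ndf',
     SIZE = 50MB,
     FILEGROWTH = 10MB)
TO FILEGROUP Indexes;
GO
USE x;
GO
-- The trailing ON clause places the index on that filegroup.
CREATE NONCLUSTERED INDEX IX_Orders_CustomerID
    ON dbo.Orders (CustomerID)
    ON Indexes;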
May 4, 2015 at 9:24 am
RonKyle (5/4/2015)
Why or how does this matter?
It makes the database a little more robust because isolating the system tables from the data tables and indexes reduces the chance that the system tables will be in a corrupt file group. DBCC checks can be made at the file group level, so in a large database having multiple file groups makes the check easier (even possible for very large databases). Databases can be backed up and restored by file group. Having a system backup that can easily and quickly be restored helps you get large databases online more quickly, at least for querying, while the other files are being restored.
If you have EE, I tend to agree that moving stuff out of the PRIMARY filegroup makes some sense. However, for most databases and instances, I'm not sure it's worth the complexity it creates for others who might be administering the system.
May 4, 2015 at 9:31 am
I see it as creating flexibility. Some databases may always be small, but for those that become large, this lets you move files to new disks without having to move data between file groups. And if you have limited disks in development but know for sure that files can be moved to separate disks in production, it makes sense to me.
May 4, 2015 at 9:35 am
Actually, when given the opportunity I would also create a separate file group for the data and make it the default file group. I would go even further if it could benefit the system by creating a separate file group for nonclustered indexes, another for indexed views.
I've done this as well. It's part of my template, but I didn't want to overcomplicate the example. I haven't created databases that have become so large that the DBCC check doesn't run in an acceptable time, but I have worked with large databases I didn't create where the DBCC check is unacceptably long. In looking to break that out into filegroups, it's my understanding that file group backups require all of a table's objects to be on the same file group. Do I understand that correctly? If so, I'm looking at having to change how I allocate objects for VLDBs.
If you have EE, I tend to agree that moving stuff out of the PRIMARY filegroup makes some sense. However, for most databases and instances, I'm not sure it's worth the complexity it creates for others who might be administering the system.
I've been setting up databases this way for a long time and don't find it makes things any more complicated. For databases I didn't create and that weren't set up this way, some things are much more complicated than they would have been had they been designed this way from the start.
May 4, 2015 at 10:12 am
RonKyle (5/4/2015)
If you have EE, I tend to agree that moving stuff out of the PRIMARY filegroup makes some sense. However, for most databases and instances, I'm not sure it's worth the complexity it creates for others who might be administering the system.
I've been setting up databases this way for a long time and don't find it makes things any more complicated. For databases I didn't create and that weren't set up this way, some things are much more complicated than they would have been had they been designed this way from the start.
The key there is "I've been". My experience is that lots of times I run into databases, or leave them behind, and others struggle to deal with DR situations because they're confused by something that wasn't necessary.
I agree it's more flexible, but I also think this is an easy change if you see an application growing. Moving things from one filegroup to another isn't that hard later.
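For example, rebuilding the clustered index onto the new filegroup moves the table's data along with it; a sketch (table, index, and filegroup names invented):

-- Rebuild the clustered index in place, but on a different filegroup.
CREATE UNIQUE CLUSTERED INDEX CIX_Orders
    ON dbo.Orders (OrderID)
    WITH (DROP_EXISTING = ON)
    ON [UserData];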
May 4, 2015 at 10:26 am
It really comes down to "it depends." Can't say either is right or wrong.
May 4, 2015 at 11:17 am
My experience is that lots of times I run into databases, or leave them behind, and others struggle to deal with DR situations because they're confused by something that wasn't necessary.
Databases with this setup can always be backed up the same way as ones that aren't. I agree with Lynn that this isn't strictly a matter of right or wrong. But we are supposed to be upper tier, even top tier, database developers and administrators. If I were designing a database for a firm too small to have a DBA, I might not do this. But if a filegroup were to become corrupt, I'd have made it harder to recover.
May 5, 2015 at 7:24 am
We've been caught by the default problems too. We now spell out some of the settings at the top of each of our stored procedures.
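For example (the procedure name here is made up, and which options you pin down is your call), we do something like:

-- ANSI_NULLS and QUOTED_IDENTIFIER are captured when the procedure is created,
-- so they're set before CREATE; the run-time options go inside the body.
SET ANSI_NULLS ON;
SET QUOTED_IDENTIFIER ON;
GO
CREATE PROCEDURE dbo.usp_Example
AS
BEGIN
    SET NOCOUNT ON;       -- suppress rowcount messages
    SET XACT_ABORT ON;    -- roll back the transaction on most errors
    SELECT 1 AS placeholder;  -- real body goes here
END
GO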
May 5, 2015 at 7:36 am
RonKyle (5/4/2015)
But if a filegroup were to become corrupt, I'd have made it harder to recover.
Not necessarily. Depends on the type of corruption. If the system tables get hit, that's a full database restore. If it's a couple of pages, that's just a page restore. If it's in nonclustered indexes, that's a drop and recreate of the index.
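For reference, a single-page restore runs along these lines (the 1:57 page ID and paths are placeholders; real page IDs come from msdb.dbo.suspect_pages or CHECKDB output):

-- Restore just the damaged page from the last full backup...
RESTORE DATABASE x
    PAGE = '1:57'
FROM DISK = '<BackupPath>\x_full.bak'
WITH NORECOVERY;
-- ...then roll forward with log backups, finishing with the tail of the log.
RESTORE LOG x FROM DISK = '<BackupPath>\x_log.trn' WITH NORECOVERY;
RESTORE LOG x FROM DISK = '<BackupPath>\x_tail.trn' WITH RECOVERY;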
Unless I'm certain that a DB I'm designing is going to get to hundreds of GB within a year and I'm happy that the DBAs who will be supporting the DB understand the additional complexity, I'll leave the DB as a single filegroup. Splitting up isn't that hard later (unless there are LOBs, they're annoying) and I don't want to be the one who creates a 24 filegroup database that reached 15GB at its largest.
And when I do split a DB up, I have a reason for splitting it. Could be performance, could be recoverability. Those dictate very different ways of spreading objects out, so I want to know why I'm splitting a DB before I do it.
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
May 5, 2015 at 7:47 am
RonKyle (5/4/2015)
But if a filegroup were to become corrupt, I'd have made it harder to recover.
Not necessarily. Depends on the type of corruption. If the system tables get hit, that's a full database restore. If it's a couple of pages, that's just a page restore. If it's in nonclustered indexes, that's a drop and recreate of the index.
OK, it's possibly made it harder to restore. Better to reduce the risk and maintain future flexibility, in my view. The hesitation from the top tier DBAs is surprising to me. I assumed this was a fairly normal undertaking, and in line with the intent of the article advising more planning and thought when creating a new database. At a minimum it allows separating tables and indexes onto separate disks, and it doesn't prevent using standard backup and restore procedures.