December 10, 2010 at 7:58 am
Personally, I'd definitely go with keeping it but taking it out of the maintenance plan.
In terms of maintenance plans, I remember when I first started administering SQL Server and came across them. You run through the wizard and the whole feel is that these are a collection of recommended actions you SHOULD be taking, and since you don't know much about what they do, the temptation is to check them all. Fortunately, while a newbie DBA I was a fairly experienced sysadmin, so I had already gained a healthy level of paranoia and read up on what each of the options actually did - but I bet a lot of people don't!
If you take it out of the wizard but keep it as a T-SQL command, you stop many people from using it, since they'll either 1) be too scared to mess with T-SQL, or 2) know enough to only use it correctly. Equally, if you need to schedule it then fine, you can still do that without it being part of the wizard. It's hardly difficult to create a job that executes the T-SQL periodically, or even to edit the auto-generated maintenance plan to add an additional T-SQL step in the appropriate place.
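For example, here's a minimal sketch of wrapping that T-SQL in a SQL Agent job (all of the names and the target size are made up, and a schedule could be attached with sp_add_jobschedule if one is genuinely needed):

USE msdb;
GO
-- Create a job with a single T-SQL step that runs the shrink on demand
EXEC dbo.sp_add_job @job_name = N'Occasional log shrink';
EXEC dbo.sp_add_jobstep
    @job_name      = N'Occasional log shrink',
    @step_name     = N'Shrink log back to baseline',
    @subsystem     = N'TSQL',
    @database_name = N'MyBigDb',
    @command       = N'DBCC SHRINKFILE (N''MyBigDb_log'', 51200);';  -- target size in MB
-- Register the job against the local server
EXEC dbo.sp_add_jobserver @job_name = N'Occasional log shrink';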
There are definitely many valid reasons for using it though. I can think of two situations where I occasionally need to:

1) I manage a large database for a client who periodically asks us to backup and restore the content to a training / testing copy of the database, which they then use occasionally for new staff, trying out new things, etc. The live database's transaction log normally sits at about 50GB, but there's no justifiable reason for us to use up that much storage on a testing db.

2) Same client but with the live db. As I mentioned, the transaction log normally sits at around 50GB, and that is plenty to ensure that it almost never needs to grow regardless of how busy it is. However, now and again something will happen: a code issue, a massive data import, or worst of all the log backups failing to work for some reason. Since the transaction log file never changes in size, I have a monitor watching it which alerts me if it ever grows (in addition to monitoring the backups themselves), and since we generally have about 40GB available on that drive, there's plenty of time to resolve any issue that has occurred. Once the issue is resolved I'll generally shrink the file back to the 50GB mark and regain my 40GB buffer.
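For what it's worth, the cleanup after such an incident is essentially a one-liner; a sketch with made-up names and sizes:

-- Once the log backups have caught up and the incident is resolved,
-- shrink the log back to its normal working size (target is in MB)
USE MyBigDb;
DBCC SHRINKFILE (N'MyBigDb_log', 51200);   -- ~50GB baseline
-- Confirm it's back at the baseline (size is reported in 8KB pages)
SELECT name, size / 128.0 AS size_mb
FROM sys.database_files
WHERE type_desc = 'LOG';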
The SSD question is certainly an interesting one - no idea of the answer, but it goes back to the recent thread about absolute certainties in SQL!
Completely agree with CavyPrincess about SIMPLE being the default for similar reasons to what I said about maintenance plans (as well as those posted by others). Experienced DBAs will know to make the change, newbie DBAs without the benefit of guidance won't, but then they also wouldn't know the importance of transaction log backups!
December 10, 2010 at 8:04 am
GilaMonster (12/10/2010)
pjdiller (12/10/2010)
Also, what about truncate only?
It's only valid when shrinking the data file. If you shrink the log and specify that option, it's ignored. It's the same as doing just a shrink file on the log with no target_size set.
Again from Books Online:
target_size
Is the size for the file in megabytes, expressed as an integer. If not specified, DBCC SHRINKFILE reduces the size to the default file size. The default size is the size specified when the file was created.
TRUNCATEONLY
Releases all free space at the end of the file to the operating system but does not perform any page movement inside the file. The data file is shrunk only to the last allocated extent.
target_size is ignored if specified with TRUNCATEONLY.
TRUNCATEONLY is applicable only to data files.
I am not in a position to disagree. But I am confused by the behavior in SSMS. In the GUI, when I choose to shrink a file, choose log, and select "Release Unused Space", it scripts this:
DBCC SHRINKFILE (N'my DB logfile' , 0, TRUNCATEONLY)
This seems to work as well, since it always works better immediately after a log backup or checkpoint (depending on recovery model), and it's nearly immediate regardless of size.
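For context, the sequence I'm describing is roughly this (database name and backup path are made up):

-- In FULL recovery, take a log backup first, then run what the SSMS
-- shrink dialog scripts out for the log file
USE MyDb;
BACKUP LOG MyDb TO DISK = N'D:\Backups\MyDb_log.trn';
DBCC SHRINKFILE (N'MyDb_log', 0, TRUNCATEONLY);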
December 10, 2010 at 8:21 am
What would be left if we deprecated everything that seems buggy when misused?
I agree with the comments already made by Hugo and Gail. They are close to what I would have typed.
Make it harder to do, like truncating a transaction log on SQL 2008. This should leave the utility available for an experienced DBA to resolve real-world issues, but remove it from being easily selected by the untrained and unaware.
Unfortunately, this opens another can of worms:
What happens when you hide an application feature that a novice was using, so that they have to make a $250.00 M$ support call to rediscover it when they really need it?
Deprecating something instead of repairing a bug is, IMO, a serious cancer-like disease. I have seen way too many good utilities die just because nobody took the time to learn how to use them correctly, made incorrect assumptions without testing or reading the documentation, and then reported this "unexpected behavior" as a BUG. Just today I received a BUG report because the tester did not read the installation instructions he himself had written, and installed the software incorrectly.
December 10, 2010 at 8:40 am
CavyPrincess (12/10/2010)
My opinion - what needs to be "fixed" is the default option of FULL on model. There are a lot of non-DBAs supporting SQL Server. Databases are set to FULL, there are no logdumps, the log file grows, so the only way they know to fix it is to run a weekly shrink job. As the "new DBA" here, I'm seeing it all too frequently as I start looking into the departmental SQL Server instances.
I agree with the suggestion to change the default recovery model for the model database to SIMPLE.
The argument that "accidental DBAs" use a weekly shrink job to control the size of transaction log files is a bit strange - I can see them trying to do that, but not scheduling it each week, as they'll quickly find out that shrinking the transaction log won't work in their case.
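And of course, nobody has to wait for Microsoft to change the default; a one-line sketch of making that change yourself (new databases inherit their recovery model from model):

-- Change the default recovery model that new databases will inherit
ALTER DATABASE model SET RECOVERY SIMPLE;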
December 10, 2010 at 8:43 am
matthew.stone (12/10/2010)
I say keep the functionality and don't put it behind a trace flag, as I don't want to have to bounce a server to enable it.
Luckily, you can enable trace flags without restarting the server. DBCC TRACEON. It's even documented: http://msdn.microsoft.com/en-us/library/ms187329.aspx
That being said, I agree that this should not be behind a trace flag. I think that removing the shrink task from the maintenance wizard, the shrink option from the GUI, and the autoshrink option completely suffices. The T-SQL syntax can remain unchanged.
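For reference, a minimal sketch of enabling a trace flag at runtime (the flag number below is just a placeholder):

-- -1 makes the flag global (server-wide), no service restart required
DBCC TRACEON (1234, -1);   -- 1234 is a placeholder, not a real flag
DBCC TRACESTATUS (-1);     -- list the trace flags currently enabled globally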
December 10, 2010 at 8:46 am
Thanks Hugo. I remember that one now :). So much to know.
December 10, 2010 at 8:55 am
pjdiller (12/10/2010)
I am not in a position to disagree. But I am confused by the behavior in SSMS. In the GUI, when I choose to shrink a file, choose log, and select "Release Unused Space", it scripts this:
DBCC SHRINKFILE (N'my DB logfile' , 0, TRUNCATEONLY)
So Management Studio writes bad code. Tell me something I don't know. SSMS includes tape-specific options when scripting a backup to disk; that doesn't mean the tape-specific options validly apply to disk backups.
This seems to work as well, since it always works better immediately after a log backup or checkpoint (depending on recovery model), and it's nearly immediate regardless of size.
Yes? I didn't say it'll throw an error, just that it's ignored. It's the same as running
DBCC SHRINKFILE (N'my DB logfile' , 0)
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
December 10, 2010 at 9:10 am
SanDroid (12/10/2010)
Unfortunately, this opens another can of worms:
What happens when you hide an application feature that a novice was using, so that they have to make a $250.00 M$ support call to rediscover it when they really need it?
SQL comes with very good documentation. If someone's willing to make a $250 call instead of spending 5 minutes searching Google or Books Online, that's their problem.
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
December 10, 2010 at 9:21 am
GilaMonster (12/10/2010)
SQL comes with very good documentation. If someone's willing to make a $250 call instead of spending 5 minutes searching Google or Books Online, that's their problem.
Actually, it's a problem for the M$ lawyers and the M$ marketing team.
This exact issue was part of the discussion about deprecating, and then completely removing, the ability to truncate a full transaction log via the "Backup Log" utility for SQL 2005 and beyond - which IMO seems very similar to this.
Not really anything I would lose sleep over, but something I know M$ will think about before they completely drop something.
"It has all happened before, and it will all happen again."
December 10, 2010 at 9:36 am
Yes? I didn't say it'll throw an error, just that it's ignored. It's the same as running
DBCC SHRINKFILE (N'my DB logfile' , 0)
Ultimately, my question is whether that is evil.
December 10, 2010 at 9:39 am
pjdiller (12/10/2010)
Yes? I didn't say it'll throw an error, just that it's ignored. It's the same as running
DBCC SHRINKFILE (N'my DB logfile' , 0)
Ultimately, my question is whether that is evil.
I would say that 99.999% of the time shrinking a Transaction Log file is evil....
Is that the answer you seek?
December 10, 2010 at 9:55 am
SanDroid (12/10/2010)
pjdiller (12/10/2010)
Yes? I didn't say it'll throw an error, just that it's ignored. It's the same as running
DBCC SHRINKFILE (N'my DB logfile' , 0)
Ultimately, my question is whether that is evil.
I would say that 99.999% of the time shrinking a Transaction Log file is evil....
Is that the answer you seek?
I was expecting that chopping off the end of a logfile was not detrimental. No shuffling required, no index fragmentation. No problem.
December 10, 2010 at 10:55 am
GilaMonster (12/10/2010)
SanDroid (12/10/2010)
Unfortunately, this opens another can of worms:
What happens when you hide an application feature that a novice was using, so that they have to make a $250.00 M$ support call to rediscover it when they really need it?
SQL comes with very good documentation. If someone's willing to make a $250 call instead of spending 5 minutes searching Google or Books Online, that's their problem.
I agree. Besides, however much of a novice someone is, if they know something existed before, it doesn't take much for them to Google it - and then they'll invariably come across a host of discussions like this one saying how bad it is. The point of hiding it is to prevent those who didn't even know it existed from blindly selecting it as part of a maintenance plan, on the assumption that MS wouldn't include it in the wizard if it wasn't a good idea to use it.
Surely one of the main benefits of any wizard is that it helps guide the unaware down a relatively safe path, and minimises the chance of them screwing things up.
December 10, 2010 at 11:04 am
As a manual process for very specific reasons (such as Gail mentioned), shrink should remain. I think it needs some fixing done but it should stay in SQL Server. Go ahead and remove autoshrink though.
Jason...AKA CirqueDeSQLeil
_______________________________________________
I have given a name to my pain...MCM SQL Server, MVP
SQL RNNR
Posting Performance Based Questions - Gail Shaw
Learn Extended Events
December 10, 2010 at 11:04 am
I run shrink on our Dev server after running lots and lots of queries to reduce the transaction log size (on disk). It's easy to grow a log file to 50 GB when running lots of big queries, and I have filled a drive in the past.
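For anyone doing the same, it's roughly this (names and target size are made up, assuming the dev database is in SIMPLE recovery):

-- In SIMPLE recovery a checkpoint marks the inactive log as reusable,
-- after which the file can be shrunk back down
USE DevDb;
CHECKPOINT;
DBCC SHRINKFILE (N'DevDb_log', 1024);   -- target size in MB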
So long, and thanks for all the fish,
Russell Shilling, MCDBA, MCSA 2K3, MCSE 2K3