January 23, 2012 at 2:32 pm
I am in a new job.. and they use 'dbcc shrinkfile' in a loop to shrink all the log files in the installation, with a job that runs it every hour. Their recovery model is 'simple'... I have never seen this level of shrinking, but I cannot find any statements about this from Microsoft. Am I off base?
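The job is roughly along these lines (my own sketch of the pattern, not their actual code; file_id 2 and the 1 MB target are assumptions):

DECLARE @db sysname, @sql nvarchar(max);
DECLARE db_cur CURSOR FOR
    SELECT name FROM sys.databases WHERE database_id > 4; -- skip the system databases
OPEN db_cur;
FETCH NEXT FROM db_cur INTO @db;
WHILE @@FETCH_STATUS = 0
BEGIN
    -- shrink the log file (file_id 2 in a default layout) back down to 1 MB
    SET @sql = N'USE ' + QUOTENAME(@db) + N'; DBCC SHRINKFILE (2, 1);';
    EXEC sys.sp_executesql @sql;
    FETCH NEXT FROM db_cur INTO @db;
END
CLOSE db_cur;
DEALLOCATE db_cur;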
dwcp
January 23, 2012 at 2:43 pm
I would ask why they are shrinking the log when the databases are in simple recovery mode. Start there...
http://msdn.microsoft.com/en-us/library/ms189275.aspx
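You can see what each database is actually set to with a quick query:

SELECT name, recovery_model_desc
FROM sys.databases
ORDER BY name;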
Jared
CE - Microsoft
January 23, 2012 at 11:04 pm
Generally, the DBCC SHRINKFILE command is used only to shrink files when desired... say, for example, when the file length is > 1GB.
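If the goal is just to rein in an occasional runaway log, a conditional shrink makes more sense than a scheduled one. A rough sketch (the 1GB threshold, file_id 2 and the 1GB target are only examples):

-- size in sys.database_files is reported in 8 KB pages, so 131072 pages = 1 GB
IF EXISTS (SELECT 1 FROM sys.database_files
           WHERE type_desc = 'LOG' AND size > 131072)
    DBCC SHRINKFILE (2, 1024); -- shrink the log back to 1 GB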
January 23, 2012 at 11:10 pm
Please read the article below for more information:
http://www.sqlservercentral.com/articles/Administration/64582/
January 24, 2012 at 7:51 am
Thanks for your input.. they are trying to keep very small log files. I guess the forecasting software must create batches of large changes when someone re-forecasts or makes a copy of the current forecast, and they want the file shrunk back to normal.
dwcp
January 24, 2012 at 9:32 am
dwilliscp (1/24/2012)
Thanks for your input.. they are trying to keep very small log files. I guess the forecasting software must create batches of large changes when someone re-forecasts or makes a copy of the current forecast, and they want the file shrunk back to normal. dwcp
Why do you want to shrink the transaction logs if normal operation causes them to grow again?
January 24, 2012 at 10:53 am
Lian Pretorius (1/24/2012)
dwilliscp (1/24/2012)
Thanks for your input.. they are trying to keep very small log files. I guess the forecasting software must create batches of large changes when someone re-forecasts or makes a copy of the current forecast, and they want the file shrunk back to normal. dwcp
Why do you want to shrink the transaction logs if normal operation causes them to grow again?
Maybe because they have a lot of log files and they don't all grow at the same time? So regularly shrinking the log files enables them to work with much less space than they could have without it, provided they shrink often enough that only a small proportion of the log files will have grown since the last shrink.
It does seem rather strange though. I don't know any workloads that would produce such an effect. And I can't think of any other reason for frequent shrinking that would make sense.
Tom
January 24, 2012 at 11:43 am
L' Eomot Inversé (1/24/2012)
Lian Pretorius (1/24/2012)
dwilliscp (1/24/2012)
Thanks for your input.. they are trying to keep very small log files. I guess the forecasting software must create batches of large changes when someone re-forecasts or makes a copy of the current forecast, and they want the file shrunk back to normal. dwcp
Why do you want to shrink the transaction logs if normal operation causes them to grow again?
Maybe because they have a lot of log files and they don't all grow at the same time? So regularly shrinking the log files enables them to work with much less space than they could have without it, provided they shrink often enough that only a small proportion of the log files will have grown since the last shrink.
It does seem rather strange though. I don't know any workloads that would produce such an effect. And I can't think of any other reason for frequent shrinking that would make sense.
I can give a scenario from my previous company. Once a month we would import all of our expenses into Great Plains via a table import. This was 65k-70k rows at the time, but the company continues to grow on a monthly basis. These imports would cause the transaction log to grow each month. I do not believe that Great Plains was using a minimally logged operation to insert these rows. So, because extra disk space was not in the budget, the transaction log was "shrunk" whenever it became a problem. This was also before I really knew anything about t-logs.
Jared
CE - Microsoft
February 5, 2012 at 9:33 pm
I can see that if you want to make a copy of the database after a forecast is posted, you don't want a large transaction log. However, if this forecasting process occurs frequently, you should really consider leaving the log file at its appropriate size so you don't sacrifice extra IO when the log just needs to grow again.
Is it possible for the forecasting process to issue a CHECKPOINT so that the VLFs which make up the log file are marked for reuse, to cut down the growth?
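Something as simple as this at the end of the forecast batch would do it under simple recovery (a sketch only; the batch itself is whatever the forecasting software runs):

BEGIN TRANSACTION;
    -- ... the re-forecast / forecast-copy batch runs here ...
COMMIT TRANSACTION;
CHECKPOINT; -- under simple recovery this lets the inactive VLFs be marked for reuse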
February 6, 2012 at 6:33 am
I still have not been able to talk to my boss about this, except in passing. I still think that he is overdoing the shrinking of log files. The mounted drive has a size of 136GB with 47GB in use. 13 log files are stored on this drive, each about 1MB in size, but they seem to grow (and shrink) to 14MB. However, this morning we have two that are 12GB in size.
In addition, this drive also holds all the log files for the jobs running on the SQL Server (T-SQL code output at run time). Needless to say, those log files are using most of the space, but we are still not even using 50% of the drive.
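A quick way to see how much of each transaction log is actually in use, rather than just its size on disk:

DBCC SQLPERF (LOGSPACE);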
February 6, 2012 at 6:42 am
That is horrible mismanagement of the log.
Please read through this - Managing Transaction Logs
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability