October 22, 2007 at 10:55 pm
Hi
I am facing 2 issues, both related to each other I think.
I have an ETL procedure that is eating up my disk space. When the ETL begins I have 12 GB of space left, but within a few minutes it goes down to 10 MB. The same ETL never used to take more than 2 GB of hard drive space previously.
Also, I tried doing a shrink file and shrink database to see if I could free up some space, but the log file size remains at 10 GB even after the shrink file command. I also tried a DBCC CHECKDB. I am also doing a defrag of the system hard drive just to be sure.
Thanks in advance 🙂
October 23, 2007 at 12:59 am
Your DB is in full recovery mode. If you don't have enough disk space then I would recommend changing it to BULK_LOGGED and then finally to SIMPLE. I hope this helps.
October 23, 2007 at 1:01 am
Is the DB in full or bulk-logged recovery mode?
If so, do you have log backups running? If you're in full recovery mode and have no log backups, the log will never be truncated and will keep growing. You won't be able to shrink it until you take a log backup.
Do a select from sys.databases and check the column named log_reuse_wait_desc. That will tell you why the log can't be truncated.
If you still need help, tell me what that column reads for the DB in question.
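A minimal sketch of that check (DatabaseName is a placeholder for your own database name):

```sql
-- Show the recovery model and the reason log space cannot be reused.
-- A value of 'LOG_BACKUP' means the log is waiting for a log backup.
SELECT name, recovery_model_desc, log_reuse_wait_desc
FROM sys.databases
WHERE name = 'DatabaseName'; -- replace with your database
```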
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
October 23, 2007 at 4:26 am
October 23, 2007 at 4:33 am
Yup. That's correct.
If you don't need to be able to do point-in-time restores, then you can change to simple recovery mode. In simple recovery the log auto-truncates on checkpoint.
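A sketch of that change, assuming your database is called DatabaseName:

```sql
-- Switch to simple recovery; the log will now truncate on checkpoint,
-- but point-in-time restore is no longer possible.
ALTER DATABASE DatabaseName SET RECOVERY SIMPLE;
```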
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
October 23, 2007 at 7:03 am
Well, my first advice would be to have a look at the proc and see why it is making the log grow so much. Also, if the proc will require 10 GB of log space whenever it executes, then it would be good to leave the transaction log at that size and not shrink it, because if the log grows to that size every day and you shrink it, the repeated grow/shrink cycle will lead to fragmentation.
October 23, 2007 at 8:09 am
Better to try to find out the reason as Maverick said. If your sole aim is to shrink the log, the best way is:
change your database recovery model to SIMPLE
then run DBCC SHRINKFILE (log_file_name, target_size)
It is safer to take a backup before doing so.
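A minimal sketch of those two steps (DatabaseName and DatabaseName_log are placeholders for your database and its log file's logical name):

```sql
-- 1. Switch to simple recovery so the inactive log can be truncated.
ALTER DATABASE DatabaseName SET RECOVERY SIMPLE;

-- 2. Shrink the log file down to a 1024 MB target size.
USE DatabaseName;
DBCC SHRINKFILE (DatabaseName_log, 1024);
```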
October 23, 2007 at 12:08 pm
Also I recommend changing your ETL process so that it happens in batches - then you can avoid having the log grow so much whenever the process runs. Depending on your recovery requirements, moving to SIMPLE recovery mode may not be the best thing for you.
See Search Engine Q&A #1: Running out of transaction log space for more info.
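As an illustration of the batching idea, a hypothetical sketch that processes rows in chunks rather than one huge transaction (the table and column names are made up):

```sql
-- Delete in batches of 10,000 rows. Because each batch commits
-- separately, the log records can be cleared (by checkpoint in simple
-- recovery, or by log backups in full recovery) between batches,
-- instead of the log having to hold the entire operation at once.
WHILE 1 = 1
BEGIN
    DELETE TOP (10000) FROM dbo.StagingTable
    WHERE LoadDate < '2007-01-01';

    IF @@ROWCOUNT = 0 BREAK; -- stop when no rows remain
END
```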
Paul Randal
CEO, SQLskills.com: Check out SQLskills online training!
Blog:www.SQLskills.com/blogs/paul Twitter: @PaulRandal
SQL MVP, Microsoft RD, Contributing Editor of TechNet Magazine
Author of DBCC CHECKDB/repair (and other Storage Engine) code of SQL Server 2005
October 24, 2007 at 12:12 am
Ratheesh.K.Nair (10/23/2007)
Better to try to find out the reason as Maverick said. If your sole aim is to shrink the log, the best way is: change your database recovery model to SIMPLE
then run dbcc shrinkfile (log_file_name,size)
It is safer to take a backup before doing so.
However, if you have a need to restore without losing data in the case of a disaster, you should NOT change to simple, but rather do regular log backups.
It depends on the business requirements.
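If you stay in full recovery, regular log backups look something like this (the path and names are placeholders):

```sql
-- Back up the log on a schedule (e.g. every 15-30 minutes).
-- Each log backup lets the backed-up portion of the log be reused,
-- which keeps the log file from growing indefinitely.
BACKUP LOG DatabaseName
TO DISK = 'D:\Backups\DatabaseName_log.trn';
```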
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability