April 22, 2013 at 12:39 pm
Intro:
I have SQL Server installed on the C drive, but the log file (LDF) is stacking up and I am running out of space.
Specs:
My database name is Analytics and the log file is analytics.ldf.
Current settings are: recovery model - full, autogrowth enabled but restricted to 1 MB growth.
Analytics.mdf is used by me daily, so how can I resolve this issue?
Question:
1) What is the best practice to back up that log and data, and also reduce the space they are using?
2) If taking the database offline is a possible solution, I need instructions to get it safely back online.
3) Will my data be affected if I create new files with similar names and delete the previous ones after backing them up?
solution:
April 22, 2013 at 12:56 pm
Please read through this - Managing Transaction Logs and http://www.sqlservercentral.com/articles/Transaction+Log/72488/
3) Will my data be affected if I create new files with similar names and delete the previous ones after backing them up?
Other than the fact that it'll result in the database being marked recovery_pending and you needing to restore a full backup to get it accessible again, no.
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
April 22, 2013 at 12:59 pm
Hello,
As your database is in the full recovery model, the transaction log will not be truncated unless you perform a transaction log backup. When you back up the log, the log is truncated (bear in mind that this does not mean the file will be shrunk).
The correct backup strategy really does depend on the individual database: how often is it updated? How much data can you afford to lose in the event of corruption?
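As a minimal sketch of a log backup, using the Analytics database name from the original post (the backup path is a placeholder - adjust it for your environment):

```sql
-- Back up the transaction log. This truncates the log (marks the internal
-- log space as reusable) but does NOT shrink the physical .ldf file.
BACKUP LOG Analytics
TO DISK = N'D:\Backups\Analytics_log.trn'  -- placeholder path
WITH INIT, CHECKSUM;
```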
Andrew
April 22, 2013 at 1:08 pm
My data gets updated once daily.
I need more disk space, and keeping historical data changes is not necessary because I can use a backup to recreate them (correct me if I am wrong).
April 22, 2013 at 1:15 pm
If your database gets updated once a day, I would back up your database and log before the load, adjust the log to a suitable size (preferably avoiding autogrowth), and monitor while the data is running in.
If your transaction log is expanding during the load, you can back it up mid-load to manage it, OR add more disk space.
Andrew
April 22, 2013 at 1:17 pm
aayushmail007 (4/22/2013)
My data gets updated once daily. I need more disk space and historical data changes are not necessary because I can use a backup to recreate them (correct me if I am wrong)
If you do what you are proposing, you will leave your entire database unusable and will need to restore from backup (which will recreate the huge files) to access the database again.
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
April 22, 2013 at 1:20 pm
So in my case, I just need the data to be restorable by month, but the main emphasis is getting space back on my drive.
So which backup should I be using? Instructions on how to do it would be very helpful.
April 22, 2013 at 1:42 pm
My thoughts:
If you are updating this once a day, and it's your database - important, but not affecting others - here's what I'd do.
Change the database to simple recovery mode. With updates once a day, there's no real reason to recover to a point in time (outside of the backup). If that's not the case, don't do this; but if you only care about recovering, in a disaster, to the time of the backup, this works. However, before you do this, I'd run a log backup before my load/change, then load/change and run another log backup. This second one gives you an idea of how big your log should be.
Run a full backup before each load. You could do it after, but if the load fails or there's an issue, I'd prefer to have a recent backup and not assume that yesterday's backup is there. Copy this off the disk to another location immediately.
Now you should be able to shrink the log file to roughly what you found in step 1 above, with some pad. I'd only shrink the log here, not the data file. Use DBCC SHRINKFILE.
Ultimately you need space for the data, so you'll need that much disk space. I'd also recommend you read Gail's article so you understand what you're changing here.
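The sizing and backup steps above could be sketched like this (paths are placeholders, and the shrink target is illustrative):

```sql
-- Hypothetical sketch of the steps above; adjust names and paths.

-- Size the log: back up before the daily load...
BACKUP LOG Analytics TO DISK = N'D:\Backups\Analytics_preload.trn';
-- ... run the daily load/change here ...
-- ...and again after it. The size of this second log backup gives a
-- rough idea of how big the log file actually needs to be.
BACKUP LOG Analytics TO DISK = N'D:\Backups\Analytics_postload.trn';

-- Full backup before each load; copy it off the disk afterwards.
BACKUP DATABASE Analytics
TO DISK = N'D:\Backups\Analytics_full.bak'
WITH INIT, CHECKSUM;
```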
April 22, 2013 at 1:49 pm
GilaMonster (4/22/2013)
Please read through this - Managing Transaction Logs and http://www.sqlservercentral.com/articles/Transaction+Log/72488/
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
April 22, 2013 at 1:52 pm
Thanks Steve and Gila.
If this were organizational data, what would you prefer?
My first requirement is to have space on the drive;
second, to recover when needed.
April 22, 2013 at 2:18 pm
aayushmail007 (4/22/2013)
Thanks Steve and Gila. If this were organizational data, what would you prefer?
My first requirement is to have space on the drive;
second, to recover when needed.
Those are not the priorities I'd have. That may lead to you not recovering at all.
What's the drive size, the data size, and the log size? How much data import/change per day?
April 22, 2013 at 2:21 pm
C drive: 200 GB
SQL log: 140 GB
SQL data: 30 GB
etc.: 10-20 GB
Space left: less than 5 GB
Log grows 1.5-2 GB a day
April 22, 2013 at 2:23 pm
If you're loading 1-2 GB a day in the log, I'd shrink the log to 5GB. Space is relatively cheap, and this gives you pad.
You don't need a 140GB log, and likely it grew because you didn't run log backups. Switch the db to simple mode, shrink the log. Schedule (don't run, schedule) a full backup every day.
April 22, 2013 at 2:28 pm
First I have to run a log backup by taking the database offline, and then use the backup option in SQL Management Studio.
The challenge here is that the SQL Server is on a remote desktop and I will be taking the backup to another mapped drive,
but I cannot see that drive when I select Disk. The only two options are C and D.
I cannot locate the mapped drive.
April 22, 2013 at 2:48 pm
You don't need to take anything offline to run a backup. If you take the database offline, you can't run a backup in SSMS.
As for the mapped drives, the server doesn't see drives you've mapped in your session. The service account for the server has its own mapped drives.
What I would say here is: run the full backup, using a UNC path. Then switch to simple mode. This will issue a checkpoint and break the log chain. You should be able to shrink the log then.
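That sequence might look roughly like this (the UNC path is a placeholder, and the logical log file name is an assumption - check yours with sp_helpfile):

```sql
-- Full backup to a UNC share the SQL Server SERVICE ACCOUNT can reach
-- (not a drive mapped only in your remote desktop session).
BACKUP DATABASE Analytics
TO DISK = N'\\backupserver\share\Analytics_full.bak'  -- placeholder path
WITH INIT, CHECKSUM;

-- Switch to simple recovery; the log chain is broken from here on.
ALTER DATABASE Analytics SET RECOVERY SIMPLE;

-- Shrink only the log file, down to ~5 GB (5120 MB) as suggested earlier.
USE Analytics;
DBCC SHRINKFILE (N'Analytics_log', 5120);  -- logical name is an assumption
```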