February 26, 2011 at 3:44 am
Once again, sorry for my question, but I need to clarify something.
In every blog/article that I read, I see that to be able to recover my DBs to some point in time I need to perform a Full Backup while the DBs are in Full Recovery mode, plus Transaction Log Backups. I also read that I have to perform Transaction Log Backups so the log files will not keep growing. I'm a little bit confused: when I do a Transaction Log Backup, the log files stay the same size without any change. Only when I shrink the DB after a Transaction Log Backup does the log file size actually get reduced.
My questions are:
1. To reduce the log file size, do I first need to do a Transaction Log Backup and then shrink the DB?
2. Why, when I shrink a DB in Simple Recovery mode, is the log file reduced to a very small size (KB), while when I shrink a DB in Full Recovery mode the log file is only reduced to a not-so-small size (MB)? I guess it's because in Full Recovery mode the log still holds records that have not yet been backed up (or belong to open transactions), so the shrink cannot remove them, but I need to be sure whether I'm right or not.
Thanks!
February 26, 2011 at 4:08 am
You shouldn't really need to shrink your transaction log files; the database has worked hard to make them the size they are to accommodate the work that is being done in the database. If you shrink them and they start growing then they are not big enough to start with.
In Simple Recovery the log is cleared by SQL Server when certain background tasks have taken place. In Full Recovery the log is not cleared until these background tasks have taken place and a transaction log backup is taken. In both cases it is the internal space within the log file that is cleared, not the physical disk space and therefore the size of the file will not change.
If you do shrink a file it cannot shrink to a size that is smaller than the original size defined when the file/database was created. If there is log information in the file that has not yet been cleared, then the file cannot be shrunk smaller than the size required to hold that information.
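If you want to see that internal-versus-physical distinction for yourself, a quick check along these lines should show it (the database name is just a placeholder):

-- Physical size of each log file and the percentage of it actually in use
DBCC SQLPERF(LOGSPACE);

-- Why the log for a database cannot be cleared yet
-- (LOG_BACKUP means it is waiting for a transaction log backup)
SELECT name, recovery_model_desc, log_reuse_wait_desc
FROM sys.databases
WHERE name = 'YourDatabase';   -- placeholder name

After a log backup you should see the 'Log Space Used (%)' figure drop while 'Log Size (MB)' stays the same.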
February 26, 2011 at 10:26 am
DNA_DBA
Thanks for your reply, but I really didn't understand it.
I actually have two questions, as you can see, and would like to get answers to them.
Thanks.
February 26, 2011 at 11:59 am
1) You really shouldn't be shrinking the database.
2) In simple recovery the log space is marked reusable (free) on checkpoint. In full recovery the log space is marked reusable (free) after a log backup. Hence to reuse log space you need to take log backups.
Log backups don't reduce the size of the log. They just make the space within it reusable. You really shouldn't regularly shrink databases.
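If it helps, the bare-bones version of the two backup types looks something like this (database name and paths are placeholders; run the log backup on a schedule that matches how much data you can afford to lose):

-- Full backup: the starting point of the restore chain
BACKUP DATABASE YourDatabase
TO DISK = 'D:\Backups\YourDatabase_full.bak';

-- Log backup: enables point-in-time recovery and marks the backed-up
-- portion of the log as reusable; it does not change the file's size
BACKUP LOG YourDatabase
TO DISK = 'D:\Backups\YourDatabase_log.trn';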
Please read through this: http://www.sqlservercentral.com/articles/64582/
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
February 26, 2011 at 12:23 pm
GilaMonster
Thanks a lot!
If I shouldn't do a shrink, then how will the log file be reduced?
February 26, 2011 at 1:38 pm
What we are saying is to almost never shrink them. Until the tlog is backed up (which marks the entries as 'available to be reused/overwritten'), it will constantly need more space and possibly grow. Then, if you shrink the file each day, it'll likely need to grow again the following day.
This auto growth should never happen during production hours (which it will if the file was shrunk the night before), because it takes time and hurts performance for users.
On top of that, it adds to physical file fragmentation. In fact, there are even additional bad results from constant shrinking and growing. Please read Gail's article for more details.
It is best to leave the file 1% used (ldf being big), then during the day, transactions begin to fill it to 20% or 30% and then a tlog backup will run to reclaim the used space so it is back to 1% used again. Meanwhile, the actual ldf file should remain a constant, large size to ensure no growths occur during the day.
Now, having said that, if your tlog backups were failing and the ldf is enormous and filling the drive, then yes, do the backup, then shrink ONCE to a smaller size, one that still leaves the file plenty big.
But it doesn't sound like that is your problem. It sounds like you just want a small ldf file and we are here saying that you should instead learn to like a large ldf file. Just get used to it. It's ok.
If you are low on disk space, then consider capping the max size of the initial ldf and creating a 2nd ldf on another drive which can be used as preallocated overflow, as sketched below. This option has some recovery and performance impacts and probably shouldn't be done until you understand those impacts.
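Here is a rough sketch of that one-time fix (logical file names, sizes and paths are examples only; check yours in sys.database_files):

-- Back up the log first so the inactive portion can be cleared
BACKUP LOG YourDatabase TO DISK = 'D:\Backups\YourDatabase_log.trn';

-- One-time shrink to a size that is still generously large (target in MB)
USE YourDatabase;
DBCC SHRINKFILE (YourDatabase_log, 8192);   -- example: leave it at 8 GB

-- Optionally cap the primary ldf and pre-allocate an overflow ldf on
-- another drive (example sizes and paths)
ALTER DATABASE YourDatabase
MODIFY FILE (NAME = YourDatabase_log, MAXSIZE = 16384MB);

ALTER DATABASE YourDatabase
ADD LOG FILE (NAME = YourDatabase_log2,
              FILENAME = 'E:\SQLLogs\YourDatabase_log2.ldf',
              SIZE = 8192MB, FILEGROWTH = 0);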
Jim
Jim Murphy
http://www.sqlwatchmen.com
@SQLMurph
February 27, 2011 at 4:32 am
DNA_DBA (2/26/2011)
You shouldn't really need to shrink your transaction log files; the database has worked hard to make them the size they are to accommodate the work that is being done in the database. If you shrink them and they start growing then they are not big enough to start with.
Sometimes one bad query will grow the log to a size it has never been, so sometimes it is necessary to shrink the log. This is especially true in a data warehouse environment where some users do not know how to write optimized queries.
February 27, 2011 at 5:43 am
dixon_dwayne (2/27/2011)
DNA_DBA (2/26/2011)
You shouldn't really need to shrink your transaction log files; the database has worked hard to make them the size they are to accommodate the work that is being done in the database. If you shrink them and they start growing then they are not big enough to start with.
Sometimes one bad query will grow the log to a size it has never been, so sometimes it is necessary to shrink the log. This is especially true in a data warehouse environment where some users do not know how to write optimized queries.
But in a DW environment the users should only be writing SELECT queries, so you shouldn't have to sweat the log. In fact, as much as possible, I'd mark the databases as read only in order to eliminate locking from the process, let alone log entries.
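If the warehouse really is query-only between loads, switching it over is a one-liner; a minimal sketch, assuming a placeholder database name of YourDataWarehouse (remember to set it back before the next load):

-- Read only between loads: no locking to manage, no user-generated log records
ALTER DATABASE YourDataWarehouse SET READ_ONLY WITH NO_WAIT;

-- Back to read-write before the next ETL run
ALTER DATABASE YourDataWarehouse SET READ_WRITE WITH NO_WAIT;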
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
February 27, 2011 at 8:24 am
[Jim].[dba].[Murphy]
Thanks a lot! Really thanks.
I now understand what exactly the transaction log does and why not to perform a shrink.
The second thing: what did you mean by "it is best to leave the file 1% used"? Did you mean to actually set it that way, or was it only an example?
Thanks.
February 27, 2011 at 9:40 am
No problem. Glad to help.
The 1% comment was just an example. I was alluding to ensuring that your log file was mighty large - much larger than you think is needed, so that occasional huge amounts of data modification in a short period of time would not trigger a growth here and there.
Auto growth should only be used as a safety net. So try to never let the system need to auto grow. If my databases ever auto grow, I slap my own hand and figure out what I did wrong or overlooked that caused me to miss the reason for the growth. I take this with great seriousness - almost as much as not running the drives out of space.
Physically defragmenting the hard drives is a pain in the butt; it requires shutting down SQL services for many hours and copying the data files off and back onto the RAID set. It is usually very hard to find a hard-down window during the year.
When I do have a chance for this kind of maintenance, I grow the files out (data and log) quite a bit to avoid any auto growth for an entire year (if I have the disk space), then 'defrag the drives' (by copying the files off and back on). This removes all of the physical fragmentation, and my regular reindexing removes all of my logical fragmentation. Well, the fragmentation of an index within the data file.
If you have not done this in a while, then system performance can suffer all over because of double fragmentation: an index is fragmented within the data file and needs to 'skip around' in the data file to read the records, and those reads may cause additional skipping since the underlying rows may sit in separate, smaller physical pieces of the fragmented data file scattered all over the platters on the drives. This is bad.
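When I do get that window, the pre-growing itself is just a couple of statements, something like this (logical file names and target sizes are examples only):

-- Grow the data and log files in one planned step so they do not need to
-- auto grow (and physically fragment) during the year
ALTER DATABASE YourDatabase
MODIFY FILE (NAME = YourDatabase_data, SIZE = 200GB);

ALTER DATABASE YourDatabase
MODIFY FILE (NAME = YourDatabase_log, SIZE = 40GB);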
Jim
Jim Murphy
http://www.sqlwatchmen.com
@SQLMurph
March 2, 2011 at 9:49 am
[Jim].[dba].[Murphy]
First of all - thanks a lot for your patience!
Second of all, it was quite strange for me to hear that "auto grow" is not such a good thing. But after a lot of researching on the net, I've found that it can cause bad performance and heavy fragmentation of the database files.
Thanks again!
March 2, 2011 at 10:11 am
No problem. I know it does sound strange, but auto grow is great - only as a last resort. Auto shrink is terrible and should be removed from SQL Server entirely (my opinion).
Jim
Jim Murphy
http://www.sqlwatchmen.com
@SQLMurph
March 3, 2011 at 12:21 am
[Jim].[dba].[Murphy]
Thanks again.
I will most probably open posts here on a regular basis, so be ready :-) (kidding).
May 30, 2011 at 9:38 am
I require some clarification. I run a full backup every day at 12:01 AM, a diff backup at 6-hour intervals, and a tran log backup hourly.
My last tran backup was at, say, 11 AM. At 11:35 AM my database got corrupted, and I have restored up to the 11 AM tran backup.
Now what? Is there any chance of getting back any part of the data from those 35 minutes?
I tried to take a tran log backup at 11:35 after I sensed the DB corruption, but that backup didn't succeed.
May 30, 2011 at 9:41 am
Please post new questions in a new thread. Thank you.
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability