June 29, 2005 at 10:23 am
Stupid question here.
What kind of hardware can write 200 GB of data to the log (not to mention to the db) in only 2 minutes? I know hardware has gotten faster, but that just can't be.
Is it possible that the file is not truncated at all after the backup is taken, and that there are only a few MB of new log in there?
June 29, 2005 at 10:41 am
Again, you really should check this out:
"Dynamically shrink the transaction log with a stored procedure"
http://codetempest.com/node/10
It shows you how to create a stored proc in your master database that you can then schedule as a regular job to keep transaction log size under control.
G. Milner
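For context, a minimal sketch of the kind of job body such a scheduled proc might wrap — this is not the linked proc's actual code, and the database name, logical file name, and backup path are all placeholders:

```sql
-- Back up the log first so its space can be reclaimed,
-- then shrink the physical file back to a sane size.
BACKUP LOG MyDB TO DISK = 'D:\Backups\MyDB_log.trn';

USE MyDB;
DBCC SHRINKFILE (MyDB_Log, 500);  -- target size in MB
```

Scheduling something like this as a SQL Agent job keeps the physical file from growing without bound between full backups.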
June 29, 2005 at 12:32 pm
Sufian,
Don't know if you got a satisfactory answer to your question or not.
If I am reading this correctly, you have a 200 GB log file that you back up every two minutes. I believe you are saying that the physical size of the log is 200 GB but the content is quite small and able to back up in less than two minutes.
I was wondering whether the log file you want to delete is the primary log file or not. If it is not, you can easily add another log file and then delete the secondary log file. If it is the primary log file, you can add another log file, but you will not be able to delete the primary log. I believe the best you could do at this point is follow what everyone else is saying and shrink the log file.
Hope this helps.
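A sketch of the add-then-remove approach for a *secondary* log file (names and paths are placeholders; the primary log file can never be removed this way):

```sql
-- Add a secondary log file on another drive.
ALTER DATABASE MyDB
ADD LOG FILE (NAME = MyDB_Log2,
              FILENAME = 'E:\Logs\MyDB_Log2.ldf',
              SIZE = 100MB);

-- Later, once the secondary file no longer holds any active log
-- (regular log backups clear it), it can be dropped.
ALTER DATABASE MyDB REMOVE FILE MyDB_Log2;
```

REMOVE FILE will fail while the file still contains active log records, so a log backup is usually needed between the two steps.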
June 29, 2005 at 8:38 pm
Thanks,
I think I have to stop here, because everyone says shrink, shrink.
I can shrink the log file. I tried it on my test server, which has 20 implementers connected, and for one day I connected all 1200 of my users and executed the shrink command with truncate.
I had taken the backup at 3:00 am; after that, until 1 pm, all my users worked at full load. At 1:10 I executed the command and lost data for 2 minutes, because I take a transaction log backup every 2 minutes. In those two minutes, 500 users had inserted new records and 200 users had updated data. So what about shrinking on a real production server?
I have added one log file and am trying to switch log files forcefully. If you all get time, please study it and try it in practice.
As soon as I have done it, I will post it on the site.
Thanks to all of you.
sufian
June 29, 2005 at 10:24 pm
So what is the exact syntax you used for the shrink command?
--------------------
Colt 45 - the original point and click interface
June 30, 2005 at 1:45 am
Sufian,
As a general remark: this forum is a peaceful place where people come for help and the members help from their own experience.
So don't be aggressive with us!
Everybody here is trying to help you with the problem you have described, the best we can.
And yes, as you've described the problem (a space problem), the solution is shrink, shrink, shrink. But maybe there is something we do not see in your environment.
A lot of us run databases which are far bigger than yours; for example, I'm running some SAP databases of almost 1 TB each with 1300+ users, and we are able to manage the log file size without backing up the log every 2 minutes.
Log growth of that size could come from some mass insert/update/delete, or an index defrag, or things like that, because in real life users can't insert/update data faster than one operation every 2-3 seconds per user (and I think it is even closer to 10 seconds). Even at one operation per user per second, you would have 700 users x 60 sec x 2 min = 84,000 transactions between two log backups.
Without knowing your app, I would say that this activity should not generate 200 GB of log.
Anyhow, we are here to give you advice; the final decision is always yours.
Good luck
Bye
Gabor
June 30, 2005 at 4:52 am
Hi,
It seems to me there are some misunderstandings under way.
First: never use BACKUP LOG WITH TRUNCATE_ONLY on a system you want to be able to restore. That option is only for certain emergencies.
Do you back up the transaction logs with TRUNCATE_ONLY or with NO_TRUNCATE?
If you use TRUNCATE_ONLY at all, do it only immediately before a full backup (and possibly before the differential backups), so that the full backup re-establishes your recovery base right away.
That way you never lose any transactions!
regards karl
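To make the distinction Karl is drawing concrete, here is a sketch (database and path names are placeholders):

```sql
-- Normal log backup: preserves the log chain, so point-in-time
-- recovery remains possible.
BACKUP LOG MyDB TO DISK = 'D:\Backups\MyDB_log.trn';

-- Emergency only: clears the log WITHOUT backing it up,
-- breaking the recovery chain.
BACKUP LOG MyDB WITH TRUNCATE_ONLY;

-- If you ever do truncate, take a full backup immediately
-- afterwards to re-establish a recovery base.
BACKUP DATABASE MyDB TO DISK = 'D:\Backups\MyDB_full.bak';
```

Between a TRUNCATE_ONLY and the next full (or differential) backup, the log backups are useless for restore, which is why the option belongs only in emergencies.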
June 30, 2005 at 7:50 pm
That is how you do it, Remi — you've posted 3133 posts. Good, carry on.
June 30, 2005 at 7:57 pm
Hi Gabor,
Thanks for your suggestion. But I think this forum should help people's questions with real answers, and if the question is far out of the box then nobody should just suggest reading Books Online.
By the way, my question was not so difficult. If you read it, you will see that I asked how to switch from one log file to another; I didn't ask about shrinking.
In Oracle I do ALTER SYSTEM SWITCH LOGFILE, taking the first log file offline.
Is there anything like that in SQL Server?
Secondly, I do shrink on my production server, but for testing I didn't shrink for one week and my log grew up to 200 GB.
Thanks,
sufian
July 2, 2005 at 8:34 pm
sufian,
you should stop offending other people, especially those who were trying to help you, and especially Remi! Manners, dude. Learn them. Might be helpful one day. (Yes, I got an extra point for this post!)
July 5, 2005 at 1:03 am
Sufian,
There is NO direct equivalent of the Oracle feature to switch logs. By default each db has a single log file that is used in a circular fashion. When you back up the log, the space in the log file is made available for re-use (with some restrictions caused by open transactions). Thus regular log backups will both keep your recovery OK and prevent uncontrolled log file growth. So if you want to make the physical file smaller, then shrink is the way to go (DBCC SHRINKFILE). If you want to move the log somewhere else, you can add an extra log file in the new location, and after a while you can shrink the original. However, you CANNOT remove the original completely (at least not while the db is in use), although you can shrink it right down.
DBCC SHRINKFILE does not in my experience cause data loss and, as several have said, is precisely what you need to reduce the log file size. The TRUNCATE_ONLY option of BACKUP LOG should be avoided on a production system, as it makes the logs useless for point-in-time recovery until the next full db backup. The best place for a detailed description of how all this works is "Inside SQL Server 2000" by Kalen Delaney. Also, just to check: we are talking about the TRANSACTION log here, and not the ERROR log? Although if the error log got that big, I'd be very concerned.
Mike
July 5, 2005 at 2:19 am
Very nice summary, Mike
Bye
Gabor
July 5, 2005 at 3:46 am
But Mike what about his other post where he claims to have found the solution and wants to know if he should post it?
http://www.sqlservercentral.com/forums/shwmessage.aspx?forumid=120&messageid=196219
--------------------
Colt 45 - the original point and click interface
July 5, 2005 at 6:31 am
I would certainly like to see it, but as far as I know the only way to move a log file to a different physical file is backup and restore (with the MOVE option) or detach and re-attach, both of which take the db offline for a while. If there is another way I have missed, I would very much like to see it - always willing to be wrong!
Mike
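For completeness, a sketch of the backup-and-restore route Mike mentions — this takes the database offline for the duration, and the names, logical file name, and paths are placeholders:

```sql
-- Take a full backup, then restore it with the log file
-- relocated to a new path via the MOVE option.
BACKUP DATABASE MyDB TO DISK = 'D:\Backups\MyDB_full.bak';

RESTORE DATABASE MyDB FROM DISK = 'D:\Backups\MyDB_full.bak'
WITH MOVE 'MyDB_Log' TO 'E:\Logs\MyDB_Log.ldf',
     REPLACE;  -- overwrite the existing database
```

The detach/re-attach alternative works similarly: detach the db, move the .ldf file at the OS level, then re-attach specifying the new path.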