March 10, 2011 at 11:52 am
GilaMonster (3/10/2011)
[Jim].[dba].[Murphy] (3/10/2011)
A log truncation could have bad side effects if any part of that database is used by others; rows being modified in some way.

A log truncation runs each time the log is backed up in full/bulk-logged and at each checkpoint in simple. It doesn't break the DB, and it has no consistency effects. It just breaks the log chain (in full/bulk-logged).
Gail - right; exactly the side effects I was talking about. I didn't mean to allude to data getting damaged; I was alluding to the fact that if another user modified data at the same time, then the tlog chain is broken in a way where not only your own changes but other people's changes are no longer recoverable (no point-in-time recovery). This requires a full backup after the loading process to start a new chain, which has its own impact. I was trying to focus on the point that performing a WITH TRUNCATE_ONLY should only be done in production at a time, and in a mode, when other users are not on the system. OR, understand the implications of said decision.
I should have been more clear.
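For anyone following along, here's roughly what I mean, as a sketch only (the database name and backup path are made up, and TRUNCATE_ONLY only exists through SQL Server 2005; it was removed in 2008):

-- Truncate the log without backing it up (pre-2008 syntax).
-- This breaks the log backup chain.
BACKUP LOG MyWarehouse WITH TRUNCATE_ONLY;

-- Take a full backup right afterwards to start a new chain;
-- until then, point-in-time recovery is gone.
BACKUP DATABASE MyWarehouse TO DISK = N'D:\Backups\MyWarehouse_full.bak';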
Jim
Jim Murphy
http://www.sqlwatchmen.com
@SQLMurph
March 10, 2011 at 12:28 pm
[Jim].[dba].[Murphy] (3/10/2011)
mmmm. Not sure about that. A log truncation could have bad side effects if any part of that database is used by others; rows being modified in some way. You can possibly lose recoverability. And shrinking the log is not a great idea in most cases; lots of side effects, from physical file fragmentation on disk to VLFs ever growing in number.
Jim
I'm suggesting the log truncation / shrink method for extreme cases, when someone is perhaps reloading a data warehouse and the transaction log growth threatens to fill available disk space. I said millions of rows, but I really meant more like billions. The log wouldn't necessarily need to be truncated down to 1 MB; it can be shrunk down to a few GB or whatever is appropriate to keep it under the threshold. When you're doing something like a planned one-off or once-yearly reload of a data warehouse, issues like recoverability and contention with other users aren't a concern.
I don't do this often; the last time was a few years ago. But from what I recall, there was some technical reason why checkpointing with simple recovery wasn't working, and I had to fall back to truncating the manual way.
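For reference, the two approaches look roughly like this (the database and log file names are hypothetical, and the TRUNCATE_ONLY syntax is pre-2008 only):

-- Normal route: in SIMPLE recovery the inactive log is cleared
-- at each checkpoint, so between batches you can just issue:
CHECKPOINT;

-- Manual fallback: break the chain, then shrink the file down to a
-- few GB rather than to nothing (target size is in MB):
BACKUP LOG BigWarehouse WITH TRUNCATE_ONLY;
DBCC SHRINKFILE (BigWarehouse_log, 4096);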
"Do not seek to follow in the footsteps of the wise. Instead, seek what they sought." - Matsuo Basho
March 10, 2011 at 12:45 pm
Eric,
Sweet. That's why I wasn't condemning your actions, just warning about some of the consequences. There is always an exception to a rule, and I'm glad you know the benefits and risks on both sides of the spectrum.
Jim
Jim Murphy
http://www.sqlwatchmen.com
@SQLMurph
March 10, 2011 at 12:53 pm
[Jim].[dba].[Murphy] (3/10/2011)
Gail - right; exactly the side effects I was talking about. I didn't mean to allude to data getting damaged
Cool. There are just so many myths and so much misinformation about the log that I don't want any more starting because someone read something and drew an incorrect conclusion.
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
March 10, 2011 at 1:21 pm
GilaMonster (3/10/2011)
There's just ... so much misinformation on the log, I don't want any more starting because someone read something and drew an incorrect conclusion.
I couldn't agree more. That's one valuable role you play here. You can spot an incorrect statement or an unclear point like a hawk. GilaHawk. It is because of folks like you that this is a valuable resource for correct information. KEEP IT UP!
I suffer from simplifying my words to the degree I guess my audience needs, which naturally adds ambiguity. In smaller groups of a few hundred, in person or in a webinar, I get visual feedback to see how the info is received, which I think is a nice teaching quality. But with a ton of people, there seems to be too much wiggle room for misunderstanding; especially when I'm flying blind.
So you raise a great point. The audience is not just the folks speaking; it is many times that number of people 'listening in', yet providing no feedback. I'll keep working at it to try to strike an effective balance.
Jim
Jim Murphy
http://www.sqlwatchmen.com
@SQLMurph
March 10, 2011 at 7:58 pm
Heh... every time I see someone ask "How do I avoid logging", I want to answer "Stay out of the woods." 😛
--Jeff Moden
Change is inevitable... Change for the better is not.
March 11, 2011 at 9:25 am
Generally speaking, transaction logging is trivial during the normal day-to-day operations of an OLTP database, especially if the log files are on separate storage from the data files.
If an application has something like very large reference tables that need to be periodically mass updated or bulk inserted, then one option is to let a separate staging server do all the heavy lifting. Contain the reference tables in a separate database. The staging server can work on upgrading its own copy of the reference tables while the previous tables remain attached to the production server. Once the upgrade process is complete, simply take the reference database on the production server offline, detach the previous data files, and then attach the upgraded version of the data files. The staging process may run all night long and into the next day, but operations on the production server won't be affected. The final step of attaching the new data files to the production database should take only a few minutes.
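That final swap might look something like this; a minimal sketch, assuming the new files have already been copied over from the staging server (all database names and paths here are made up):

-- Kick everyone out, then detach the old copy of the reference database.
ALTER DATABASE RefData SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
EXEC sp_detach_db @dbname = N'RefData';

-- Attach the freshly upgraded files from the staging server.
CREATE DATABASE RefData
ON (FILENAME = N'D:\Data\RefData_new.mdf'),
   (FILENAME = N'L:\Log\RefData_new_log.ldf')
FOR ATTACH;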
"Do not seek to follow in the footsteps of the wise. Instead, seek what they sought." - Matsuo Basho
March 11, 2011 at 5:06 pm
Eric M Russell (3/11/2011)
Generally speaking, transaction logging is trivial during the normal day-to-day operations of an OLTP database, especially if the log files are on separate storage from the data files. If an application has something like very large reference tables that need to be periodically mass updated or bulk inserted, then one option is to let a separate staging server do all the heavy lifting. Contain the reference tables in a separate database. The staging server can work on upgrading its own copy of the reference tables while the previous tables remain attached to the production server. Once the upgrade process is complete, simply take the reference database on the production server offline, detach the previous data files, and then attach the upgraded version of the data files. The staging process may run all night long and into the next day, but operations on the production server won't be affected. The final step of attaching the new data files to the production database should take only a few minutes.
I just swap synonym targets to do the same thing.
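Something along these lines; a minimal sketch with hypothetical names, where loads alternate between databases and production code only ever references the synonym:

-- Production queries reference dbo.RefTable (a synonym), never a base table.
-- After the staging copy finishes loading, repoint the synonym:
DROP SYNONYM dbo.RefTable;
CREATE SYNONYM dbo.RefTable FOR RefDataStaging.dbo.RefTable;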
--Jeff Moden
Change is inevitable... Change for the better is not.
March 11, 2011 at 5:58 pm
So after the staging server has finished updating its copy of the database, you do what with the synonyms on the production server? Point your queries to a different database?
"Do not seek to follow in the footsteps of the wise. Instead, seek what they sought." - Matsuo Basho