March 17, 2010 at 11:20 am
One of our clients, whose SQL Server we administer, runs some processing which makes tempdb grow very large.
They're not bothered about point-in-time recovery and are quite happy with only having the nightly full backups to work with, if necessary.
They seem to be under the impression that changing the recovery models of their databases to Simple would have a positive effect on tempdb's growth.
I can find no information to corroborate such a statement connecting tempdb to recovery models like this. Surely all that will be affected is the growth of the transaction log?
Any pointers would be very much appreciated.
March 17, 2010 at 11:24 am
The reason you're having trouble finding data on how recovery models affect tempdb growth is that there's no relationship at all between the two.
If all you're doing is nightly full backups, then the databases should definitely be in Simple mode. Otherwise the log files will just keep growing and growing, or you'll have to truncate them manually. Simple mode is designed for that kind of scenario and makes it much easier to manage.
The only advantage to manual truncation is that you could recover to a point in time if there was a situation that crashed the data files but left the log files intact. Since that's not needed for this business, and loss of a day's data is acceptable, there's no reason to do the extra work for that. Just put them in Simple mode and leave it at that.
But that won't affect tempdb growth. No connection between the two.
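For what it's worth, making the switch is a one-liner (the database name below is illustrative):

```sql
-- Switch a database to the SIMPLE recovery model
ALTER DATABASE [ClientDB] SET RECOVERY SIMPLE;

-- Confirm the change took effect
SELECT name, recovery_model_desc
FROM sys.databases
WHERE name = 'ClientDB';
```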
- Gus "GSquared", RSVP, OODA, MAP, NMVP, FAQ, SAT, SQL, DNA, RNA, UOI, IOU, AM, PM, AD, BC, BCE, USA, UN, CF, ROFL, LOL, ETC
Property of The Thread
"Nobody knows the age of the human race, but everyone agrees it's old enough to know better." - Anon
March 17, 2010 at 11:33 am
Thanks GSquared, exactly the answer I was expecting!
Heh, how often does that happen...?!
We currently do nightly full backups and tran log backups as our standard set-up for all the SQL Servers we look after, unless there's a particularly compelling reason to follow a different approach. So we ideally didn't want to change this client's set-up from the standard without good reason, and I see none now.
Thanks again
March 17, 2010 at 11:39 am
Makes sense. I'd just go with your standard in this case, unless something happens like log file growth causes storage or performance problems.
- Gus "GSquared", RSVP, OODA, MAP, NMVP, FAQ, SAT, SQL, DNA, RNA, UOI, IOU, AM, PM, AD, BC, BCE, USA, UN, CF, ROFL, LOL, ETC
Property of The Thread
"Nobody knows the age of the human race, but everyone agrees it's old enough to know better." - Anon
March 17, 2010 at 11:49 am
March 17, 2010 at 12:05 pm
You're welcome.
- Gus "GSquared", RSVP, OODA, MAP, NMVP, FAQ, SAT, SQL, DNA, RNA, UOI, IOU, AM, PM, AD, BC, BCE, USA, UN, CF, ROFL, LOL, ETC
Property of The Thread
"Nobody knows the age of the human race, but everyone agrees it's old enough to know better." - Anon
March 18, 2010 at 4:30 am
mazzz (3/17/2010)
They seem to be under the impression that changing the recovery models of their databases to Simple would have a positive effect on tempdb's growth.
Do they know what tempdb is for and how SQL Server uses it?
It seems this is a perfect moment to educate your client about how bad coding/administrative practices can cause problems with tempdb growth. Also, to tell them about other things that cause tempdb growth.
Does your client do any coding (even adhoc) against these databases? What are their concerns with tempdb growing?
If they don't touch the db at all and are only seeing growth, it's worth discovering the source of their concerns and running Profiler and other diagnostics against the server just to make sure everything really is fine before you reassure them.
If they're noticing this growth because they can write queries against SQL Server (the more likely situation), then look at their code! Chances are they've engaged in poor coding practices that are causing tempdb to grow exponentially (using SELECT * against a 10 million row table joined to a 7 million row table, for example), writing huge queries that could be broken up into smaller processing chunks, or even using temp tables poorly.
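A quick way to see who is eating tempdb right now is the space-usage DMVs (this is just a sketch; page counts are multiplied by 8 to get KB):

```sql
-- Per-session tempdb allocations: user objects are temp tables/variables,
-- internal objects are sorts, hashes, and spools
SELECT su.session_id,
       su.user_objects_alloc_page_count     * 8 AS user_objects_kb,
       su.internal_objects_alloc_page_count * 8 AS internal_objects_kb
FROM sys.dm_db_session_space_usage AS su
ORDER BY su.internal_objects_alloc_page_count DESC;
```

Run it in tempdb's context while the suspect process is active and the heavy sessions float to the top.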
Just because you have your answer from the other respondents doesn't mean you should go straight back to your client with it. This issue will continue to bother them unless you find the source of the concern and address it. They just won't tell you it's bothering them until it's too late to recover their server or solve their problems.
March 18, 2010 at 4:37 am
Thanks Brandie
I've been running a trace since 10am; they are running a process that they think might be what caused the rapid expansion last time.
I am also in conversation with them, trying to ascertain what this process does, and whether it is one of their own or provided by a third party.
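Alongside the trace, a quick DMV query can show what the suspect process is actually executing (just a sketch, assuming it runs under a user session):

```sql
-- What are the active user sessions doing right now?
SELECT r.session_id, r.status, r.command, r.wait_type, t.text
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.session_id > 50;  -- filter out system sessions
```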
March 19, 2010 at 8:34 am
I have to take exception to one of the comments above. The choice of Simple vs Full is not made based on when you do a full backup; it is made based on recovery of the data. If I want to be able to recover my data to the point of failure, then I need to be in Full recovery mode. Otherwise the transactions after the backup completed will not be recoverable. Also, the transaction log, even in Simple mode, can grow very large depending on the type of transactions being applied. For example, batch updates and table updates all write to the TLOG, and only once they are committed can that space be overwritten. A 1 GB table with an update to one column can use 2 GB of TLOG, depending on how the updates are performed.
Just had a developer do that to me. He was reading the values, making a minor change, but updating all the columns in his UPDATE statement. Except the table was 100 GB with over 400 million rows. Luckily, I caught it in QA before we went to production. BTW, this was a great way to demonstrate SQL Server lock escalation.
So, when deciding on Simple vs Full, ask yourself: how much data can I afford to lose? The answer will lead to the Simple or Full requirement.
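The developer's mistake above looks roughly like this (table and column names are made up for illustration):

```sql
-- Wasteful: rewrites every column, logging far more than necessary,
-- even though only Status actually changed
UPDATE dbo.BigTable
SET Col1 = @Col1, Col2 = @Col2, Col3 = @Col3, Status = @NewStatus
WHERE Id = @Id;

-- Leaner: touch only the column that changed, and skip no-op updates
UPDATE dbo.BigTable
SET Status = @NewStatus
WHERE Id = @Id
  AND Status <> @NewStatus;
```

On a 100 GB table, the difference in log volume (and lock footprint) between the two is enormous.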
Raymond Laubert
Exceptional DBA of 2009 Finalist
MCT, MCDBA, MCITP:SQL 2005 Admin,
MCSE, OCP:10g
March 19, 2010 at 9:40 am
Ray Laubert (3/19/2010)
I have to take exception to one of the comments above. The choice of Simple vs Full is not made based on when you do a full backup; it is made based on recovery of the data. If I want to be able to recover my data to the point of failure, then I need to be in Full recovery mode. Otherwise the transactions after the backup completed will not be recoverable. Also, the transaction log, even in Simple mode, can grow very large depending on the type of transactions being applied. For example, batch updates and table updates all write to the TLOG, and only once they are committed can that space be overwritten. A 1 GB table with an update to one column can use 2 GB of TLOG, depending on how the updates are performed. Just had a developer do that to me. He was reading the values, making a minor change, but updating all the columns in his UPDATE statement. Except the table was 100 GB with over 400 million rows. Luckily, I caught it in QA before we went to production. BTW, this was a great way to demonstrate SQL Server lock escalation.
So, when deciding on Simple vs Full, ask yourself: how much data can I afford to lose? The answer will lead to the Simple or Full requirement.
I think there's a small disconnect. The post isn't advocating that you make the choice between FULL and SIMPLE based purely on your backup rotation (the OP actually pointed out that the client didn't care about point-in-time recovery). Rather, GSquared is pointing out the problem you will end up with if you only run full database backups while in FULL recovery mode, i.e. that your log will keep growing until you run out of disk space.
----------------------------------------------------------------------------------
Your lack of planning does not constitute an emergency on my part...unless you're my manager...or a director and above...or a really loud-spoken end-user..All right - what was my emergency again?
March 19, 2010 at 9:44 am
Ray Laubert (3/19/2010)
I have to take exception to one of the comments above. The choice of Simple vs Full is not made based on when you do a full backup; it is made based on recovery of the data. ....
I'm not sure whom you're disagreeing with.
If it's me, what I said is that truncating the logs has one minor advantage over setting the database in Simple recovery, in that it does give you a slight chance for point-in-time recovery, if you happen to have a situation where you haven't truncated yet, and you have intact log files. If so, you can backup the logs at that point, and then use them for PIT recovery. In Simple mode, you almost certainly don't have that capability.
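That tail-of-log scenario is essentially this (the database name and backup path here are illustrative, and it assumes the database is in FULL recovery with an intact log):

```sql
-- Data files are lost but the log file survived: back up the tail of the log
-- so transactions since the last log backup aren't lost
BACKUP LOG [ClientDB]
TO DISK = N'D:\Backups\ClientDB_tail.trn'
WITH NO_TRUNCATE, NORECOVERY;
```

In SIMPLE recovery the log is truncated at each checkpoint, so there is normally no tail to capture.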
Is that what you were responding to?
- Gus "GSquared", RSVP, OODA, MAP, NMVP, FAQ, SAT, SQL, DNA, RNA, UOI, IOU, AM, PM, AD, BC, BCE, USA, UN, CF, ROFL, LOL, ETC
Property of The Thread
"Nobody knows the age of the human race, but everyone agrees it's old enough to know better." - Anon
March 19, 2010 at 12:11 pm
The log will truncate after each backup. Granted, the space will not be returned unless you shrink it, but given that there does not appear to be a lot of transaction activity, the log space should not change from backup to backup unless the number of transactions changes.
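A quick way to watch this is the built-in log space report:

```sql
-- Log file size and percentage in use for every database on the instance
DBCC SQLPERF (LOGSPACE);
```

If the "Log Space Used (%)" figure drops after a backup while the file size stays constant, the log was truncated internally without shrinking the file.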
Raymond Laubert
Exceptional DBA of 2009 Finalist
MCT, MCDBA, MCITP:SQL 2005 Admin,
MCSE, OCP:10g
March 20, 2010 at 5:35 am
Granted the space will not be returned unless you shrink it
Returned to what? Will that space not be used further?
March 22, 2010 at 4:42 am
Joy Smith San (3/20/2010)
Granted the space will not be returned unless you shrink it
Returned to what? Will that space not be used further?
Returned to the OS. Freed up for any program to use, including SQL Server to re-use.
March 22, 2010 at 10:30 pm
Just one more doubt:
Suppose we truncate the log but we don't shrink it.
What will happen in that case? Will SQL Server reuse the space in the log, or will it just keep on growing?
Thanks.
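One way to check for yourself what, if anything, is holding log space back from reuse (database name illustrative):

```sql
-- NOTHING means the log space is free to be reused;
-- LOG_BACKUP means a log backup is needed first, etc.
SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = 'ClientDB';
```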