May 26, 2009 at 11:59 am
I have a database with two types of access - some tables contain data I really care about, because it is only stored in this system. I have another set of tables (with FK constraints between them) that nightly have all their data deleted, and reinserted via import from a distinct system, so I don't care if I lose data in that set.
The problem is, the nightly deletes and inserts into the second set are causing the transaction logs to balloon and fill up the disk. Right now, I am manually periodically taking the applications offline, backing up the db, truncating the logs, dbcc shrink file, and another backup. This is definitely sub-optimal.
Is there a way to turn off transaction logs for *only* the second set of tables?
May 26, 2009 at 12:04 pm
Nope. Regardless of recovery model, all inserts, updates, and deletes are first written to the transaction log. This is done to ensure the ACID principles are adhered to when database updates are completed. You may not be concerned about the updates to certain tables, but SQL Server is, as it must ensure a consistent database.
May 26, 2009 at 12:27 pm
What you could try is to put the tables that you don't care about into a different database and set it to simple recovery. Doesn't stop logging, but tran log will truncate itself.
Or, before the nightly load, set this DB to bulk-logged recovery and back to full after the load's finished. If any of the operations can be minimally logged, that will help. Or schedule log backups more frequently during the delete-insert period, which will stop the log from growing so large.
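As a rough sketch of that second approach, the recovery-model switch and the follow-up log backup might look like this (the database name and backup path are placeholders, not from the thread):

```sql
-- Illustrative sketch only: MyDB and the backup path are assumed names.
-- Switch to bulk-logged so eligible operations are minimally logged.
ALTER DATABASE MyDB SET RECOVERY BULK_LOGGED;

-- ... run the nightly delete / bulk-insert load here ...

-- Switch back to full recovery once the load is finished.
ALTER DATABASE MyDB SET RECOVERY FULL;

-- Back up the log immediately so point-in-time recovery
-- resumes from this point forward.
BACKUP LOG MyDB TO DISK = N'D:\Backups\MyDB_log.trn';
```

Note that only bulk operations (such as BULK INSERT, SELECT INTO, or index rebuilds) are minimally logged under bulk-logged recovery; ordinary DELETE statements are fully logged either way.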
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
May 26, 2009 at 1:30 pm
GilaMonster (5/26/2009)
What you could try is to put the tables that you don't care about into a different database and set it to simple recovery. Doesn't stop logging, but tran log will truncate itself.
So along these lines, is it possible to set a separate recovery mode on just the tables? Switching to a separate database is going to require a lot of code re-write, so probably isn't feasible in my situation.
May 26, 2009 at 1:37 pm
Nathan Davis (5/26/2009)
GilaMonster (5/26/2009)
What you could try is to put the tables that you don't care about into a different database and set it to simple recovery. Doesn't stop logging, but tran log will truncate itself.
So along these lines, is it possible to set a separate recovery mode on just the tables? Switching to a separate database is going to require a lot of code re-write, so probably isn't feasible in my situation.
No. The recovery model is set at the database level, not the table level.
May 26, 2009 at 2:00 pm
Nathan Davis (5/26/2009)
GilaMonster (5/26/2009)
What you could try is to put the tables that you don't care about into a different database and set it to simple recovery. Doesn't stop logging, but tran log will truncate itself.
So along these lines, is it possible to set a separate recovery mode on just the tables? Switching to a separate database is going to require a lot of code re-write, so probably isn't feasible in my situation.
In SQL 2005, moving tables might be easier than you think. If they don't have FKs pointing at them, you can move them to another database and create synonyms in the former database with the same name, pointing at the new location. I've used that trick a few times and it works quite well.
- Gus "GSquared", RSVP, OODA, MAP, NMVP, FAQ, SAT, SQL, DNA, RNA, UOI, IOU, AM, PM, AD, BC, BCE, USA, UN, CF, ROFL, LOL, ETC
Property of The Thread
"Nobody knows the age of the human race, but everyone agrees it's old enough to know better." - Anon
May 26, 2009 at 2:11 pm
How are you referencing those tables in your code? There might be a way to move those tables without re-coding. You could try creating a synonym for the object in the other database, and your code would use the synonym.
For example:
CREATE SYNONYM dbo.MyLocalTable FOR simpledb.dbo.MyRemoteTable;
If your code references dbo.MyLocalTable, the synonym above would be used. So, the following:
INSERT INTO dbo.MyLocalTable ({column list}) VALUES ({values});
will still work. If your code references the table without a schema prefix, it may still work as well, as long as the synonym is created in the dbo schema and the connecting user has dbo set up as the default schema. You will have to test that more fully.
Jeffrey Williams
“We are all faced with a series of great opportunities brilliantly disguised as impossible situations.”
― Charles R. Swindoll
How to post questions to get better answers faster
Managing Transaction Logs
May 26, 2009 at 2:14 pm
One more thing - moving these tables to another database will require that the foreign keys be removed and managed through triggers instead of constraints. Make sure you understand how that will affect the application and your processes before making this kind of change.
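A minimal sketch of what such a trigger-based check might look like, assuming a child table `dbo.Child(ParentID)` moved to the side database and a parent table left behind in `maindb.dbo.Parent` (all names here are hypothetical, since FK constraints cannot span databases):

```sql
-- Illustrative only: emulates a foreign key from dbo.Child(ParentID)
-- to maindb.dbo.Parent(ParentID) across databases.
CREATE TRIGGER trg_Child_CheckParent
ON dbo.Child
AFTER INSERT, UPDATE
AS
BEGIN
    IF EXISTS (SELECT 1
               FROM inserted i
               WHERE NOT EXISTS (SELECT 1
                                 FROM maindb.dbo.Parent p
                                 WHERE p.ParentID = i.ParentID))
    BEGIN
        RAISERROR('ParentID does not exist in maindb.dbo.Parent.', 16, 1);
        ROLLBACK TRANSACTION;
    END
END;
```

Unlike a real constraint, a trigger like this does nothing about deletes on the parent side; you would need a matching trigger on `maindb.dbo.Parent` to catch orphaned rows from that direction.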
Jeffrey Williams
“We are all faced with a series of great opportunities brilliantly disguised as impossible situations.”
― Charles R. Swindoll
How to post questions to get better answers faster
Managing Transaction Logs
May 26, 2009 at 4:14 pm
Jeffrey Williams (5/26/2009)
How are you referencing those tables in your code? There might be a way to move those tables without re-coding. You could try creating a synonym for the object in the other database, and your code would use the synonym.
Moving the tables using synonyms sounds like it might possibly work. The set of tables I would move are mainly used in joins from the first set of tables to the second.
Does using synonyms have any performance implications since I would then be frequently joining across databases?
May 26, 2009 at 4:26 pm
Nathan Davis (5/26/2009)
Moving the tables using synonyms sounds like it might possibly work. The set of tables I would move are mainly used in joins from the first set of tables to the second.
Does using synonyms have any performance implications, since I would then be frequently joining across databases?
There shouldn't be any performance issues with cross-database queries. However, remember that you can no longer have foreign keys set up between those tables. I think you stated previously that there were foreign keys defined.
But, again - there should not be any performance issues.
Jeffrey Williams
“We are all faced with a series of great opportunities brilliantly disguised as impossible situations.”
― Charles R. Swindoll
How to post questions to get better answers faster
Managing Transaction Logs
May 26, 2009 at 4:38 pm
Do you really need to reload the data completely for the tables, or could you work out a differential load that only brings in the changes? That would minimize the logging as well.
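A differential load along these lines is often done by importing into a staging table and then touching only the rows that changed. As a sketch (table, key, and column names are assumptions, not from the thread), and using separate statements so it works on SQL 2005 as well:

```sql
-- Illustrative sketch: dbo.Staging holds the freshly imported data,
-- dbo.Target is the live table. Only changed rows hit the log.

-- Delete rows that no longer exist in the source.
DELETE t
FROM dbo.Target t
WHERE NOT EXISTS (SELECT 1 FROM dbo.Staging s WHERE s.ID = t.ID);

-- Update rows whose values changed.
UPDATE t
SET t.SomeValue = s.SomeValue
FROM dbo.Target t
JOIN dbo.Staging s ON s.ID = t.ID
WHERE t.SomeValue <> s.SomeValue;

-- Insert rows that are new in the source.
INSERT INTO dbo.Target (ID, SomeValue)
SELECT s.ID, s.SomeValue
FROM dbo.Staging s
WHERE NOT EXISTS (SELECT 1 FROM dbo.Target t WHERE t.ID = s.ID);
```

If most of the nightly feed is unchanged, this writes far less to the transaction log than deleting and reinserting everything.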
Jonathan Kehayias | Principal Consultant | MCM: SQL Server 2008
My Blog | Twitter | MVP Profile
Training | Consulting | Become a SQLskills Insider
Troubleshooting SQL Server: A Guide for Accidental DBAs
May 26, 2009 at 5:14 pm
Thanks for all your help.
To summarize, there were 4 main suggestions:
1. Put the tables I don't care about in a separate database.
2. Before the nightly load, set the db to bulk-logged, and back to full recovery after the nightly load. Edit: Then (per next post) you must back up the transaction log after switching back to full.
3. Back up the transaction logs more frequently during the delete-insert period, to another disk.
4. Rework the import scheme to do differential loading.
For option 1, I need to:
1. Ensure that there are no foreign keys between the "don't care" tables and the "do-care" tables. If there are, I will need to convert these constraints into triggers.
2. Move the "don't care" tables to a separate "don't care" database.
3. To avoid recoding, create synonyms in the "do-care" database pointing at the "don't care" database.
4. Set the recovery model on the "don't care" database to simple and keep the "do-care" database in full recovery.
5. Test to make sure everything works! There could potentially be problems with some inserts using the synonyms, possibly due to permissions.
Furthermore, option 1 should not cause any major performance-related issues from the cross-database queries.
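The core of option 1 can be sketched in a few statements; all names here are placeholders, and note that SELECT INTO copies only the data, not indexes, constraints, or triggers, so those would need to be recreated by hand:

```sql
-- Illustrative sketch only: StagingDB and ImportTable are assumed names.
-- 1. Create the side database and set it to simple recovery.
CREATE DATABASE StagingDB;
ALTER DATABASE StagingDB SET RECOVERY SIMPLE;

-- 2. Copy a "don't care" table into it, then drop the original.
--    (SELECT INTO does not bring indexes or constraints along.)
SELECT * INTO StagingDB.dbo.ImportTable FROM dbo.ImportTable;
DROP TABLE dbo.ImportTable;

-- 3. Leave a synonym behind so existing code keeps working unchanged.
CREATE SYNONYM dbo.ImportTable FOR StagingDB.dbo.ImportTable;
```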
May 27, 2009 at 1:11 am
If you go with option 2, you must back up the transaction log after switching back to full. This is because you cannot take a tail-log backup while the DB is in bulk-logged recovery if there are any bulk operations in the log, and you cannot restore to a point in time if the log backup containing that time includes any bulk operations.
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability