August 27, 2008 at 2:22 pm
I have a SQL 2005 database that pulls data from an AS400 on a nightly schedule. I have the database's recovery model set to SIMPLE. Everything runs great until I hit one of the history files, and then my log file grows to double the size of my data file. Since I do not need to be able to recover the data, I do not need a transaction log.
My understanding from what I have read is that no matter what setting I apply to the server, I still have to have space for that transaction log until after the transaction is complete and then SQL will truncate the file. The information I have read was from a few years ago.
Does this still apply to SQL 2005? If not, can someone enlighten me on how to keep the T-Log small? If it is still true, does anyone have any suggestions on how to manage an ever-growing monster of a T-Log? Or should I hit the boss up for a larger hard drive?
Thanks
August 27, 2008 at 2:37 pm
No matter what settings you use, SQL will log transactions. It's not an optional part of the system, it's how data integrity is guaranteed. Data changes are first written to the log and later to the data file.
Couple options.
If you can split the large imports into batches (perhaps with DTS/SSIS), the transactions won't get so large, and when the checkpoints run (which is what forces a log truncation in Simple) the part of the log that isn't necessary will be freed up for reuse.
Perhaps see if your import can be made to meet the conditions for a bulk operation (http://msdn.microsoft.com/en-us/library/ms177445.aspx). Bulk operations are minimally logged, meaning there's very little written to the tran log. Lots of limitations on what is a bulk operation though.
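For example, if the nightly pull can be staged to a flat file, a BULK INSERT along these lines can qualify as minimally logged under Simple recovery. The table name, file path and delimiters below are only placeholders for illustration; adjust them to your own extract:
-- minimal-logging sketch only: the target should be empty or a heap,
-- not replicated, and TABLOCK must be specified
BULK INSERT dbo.History_Staging
FROM '\\etlserver\extracts\history.txt'
WITH (
    FIELDTERMINATOR = '|',
    ROWTERMINATOR = '\n',
    TABLOCK,              -- one of the conditions for minimal logging
    BATCHSIZE = 100000    -- commits every 100,000 rows so checkpoints can reuse the log
);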
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
August 27, 2008 at 2:39 pm
The transaction log can't be eliminated. The transaction is (essentially) processed in there, then in the actual tables.
If that transaction expands the file, then there's no real way around that, except to cut the transaction into pieces and do each one separately.
If, for example, you need to import 1-million rows of data, but that makes the transaction log grow too much, you might try running it on 100-thousand rows at a time, or 10-thousand at a time.
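Something like this is the general idea (the table, column and key names are made up; substitute your own):
-- batching sketch: move rows in 100,000-row chunks, one transaction per chunk
DECLARE @BatchSize INT
DECLARE @LastID INT
DECLARE @MaxID INT

SET @BatchSize = 100000
SET @LastID = 0
SELECT @MaxID = MAX(RowID) FROM dbo.ImportStaging

WHILE @LastID < @MaxID
BEGIN
    INSERT INTO dbo.History (RowID, Col1, Col2)
    SELECT RowID, Col1, Col2
    FROM dbo.ImportStaging
    WHERE RowID > @LastID AND RowID <= @LastID + @BatchSize

    SET @LastID = @LastID + @BatchSize

    -- in Simple recovery, the log space used by the previous batch
    -- becomes reusable once a checkpoint runs
    CHECKPOINT
END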
- Gus "GSquared", RSVP, OODA, MAP, NMVP, FAQ, SAT, SQL, DNA, RNA, UOI, IOU, AM, PM, AD, BC, BCE, USA, UN, CF, ROFL, LOL, ETC
Property of The Thread
"Nobody knows the age of the human race, but everyone agrees it's old enough to know better." - Anon
August 27, 2008 at 2:42 pm
Thanks for the input. I'll check it out.
ASquared
August 27, 2008 at 5:56 pm
you can also back up the transaction log with the truncate_only or no_log option:
backup log your_db with truncate_only
Minh V.
August 27, 2008 at 8:40 pm
you can also backup the transaction with truncate_only or no_log option:
That's not going to help, as the entire load is performed in a single transaction. The transaction log can only be truncated after the transaction is committed or rolled back.
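If you want to see what is actually holding the log up, SQL 2005 exposes it in sys.databases (your_db is the same placeholder name used above):
-- shows the reason the log cannot currently be truncated;
-- a big single-transaction load will typically report ACTIVE_TRANSACTION
SELECT name, recovery_model_desc, log_reuse_wait_desc
FROM sys.databases
WHERE name = 'your_db'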
SQL BAK Explorer - read SQL Server backup file details without SQL Server.
Supports backup files created with SQL Server 2005 up to SQL Server 2017.
August 27, 2008 at 8:57 pm
This sounds like a job for SSIS to me. As someone said earlier, use SSIS to load the data in smaller batches.
That's what I would do.
For best practices on asking questions, please read the following article: Forum Etiquette: How to post data/code on a forum to get the best help
August 28, 2008 at 1:46 am
joemai (8/27/2008)
you can also back up the transaction log with the truncate_only or no_log option:
backup log your_db with truncate_only
Since the DB is in Simple recovery mode, a checkpoint will truncate the log and there's no need for an explicit truncate log statement. Also note that Truncate_only is deprecated and does not work in SQL 2008.
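To illustrate, in Simple recovery you can simply force a checkpoint and watch the log space usage drop, rather than using the deprecated truncate_only (your_db is the placeholder name from the earlier post):
USE your_db
CHECKPOINT                 -- marks the inactive part of the log as reusable
DBCC SQLPERF(LOGSPACE)     -- reports log size and percent used for each database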
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
August 28, 2008 at 5:59 pm
In my case, even with simple recovery turned on, the log files sometimes still grow faster than the transactions commit. I do have to shrink the log files frequently.
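When I do shrink it, it is just a DBCC SHRINKFILE against the log's logical file name (the file name and target size below are only examples; check sys.database_files for yours):
USE your_db
-- logical log file name and 500 MB target are just examples
DBCC SHRINKFILE (your_db_log, 500)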
Minh
August 28, 2008 at 8:30 pm
If your transaction log always grows back to a size larger than you want it to be, perhaps it's best to leave it alone at that size. See here for reasons why constantly shrinking your database files may not be a good practice.
SQL BAK Explorer - read SQL Server backup file details without SQL Server.
Supports backup files created with SQL Server 2005 up to SQL Server 2017.