May 23, 2017 at 6:00 pm
It may sound extremely simple, and maybe it is... but I am trying to fill the transaction log of a database in SQL Server 2014. This is what I've done so far -
1. Created a database with a 20 MB log file with no autogrowth.
2. Changed recovery model to Full.
3. Turned off implicit transactions
4. Created a table with a couple of columns and started inserting the data within an explicit transaction.
The transaction fails with the error below, but the rollback releases the log space.
Msg 9002, Level 17, State 4, Line 28
The transaction log for database 'LogTest' is full due to 'ACTIVE_TRANSACTION'.
What else do I need to do to make sure the log gets 100% full?
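For reference, a minimal sketch of the setup described above; the database name, file paths, and table are placeholders, not the poster's actual script:

```sql
-- Sketch only: LogTest, the file paths, and dbo.Filler are assumed names.
CREATE DATABASE LogTest
ON PRIMARY (NAME = LogTest_data, FILENAME = 'C:\Data\LogTest.mdf', SIZE = 100MB)
LOG ON  (NAME = LogTest_log,  FILENAME = 'C:\Data\LogTest.ldf',
         SIZE = 20MB, MAXSIZE = 20MB, FILEGROWTH = 0);  -- 20 MB log, no autogrowth
GO
ALTER DATABASE LogTest SET RECOVERY FULL;
GO
USE LogTest;
CREATE TABLE dbo.Filler (id INT IDENTITY PRIMARY KEY,
                         pad CHAR(4000) NOT NULL DEFAULT 'x');
GO
-- One big explicit transaction: once the 20 MB log fills, the insert fails
-- with Msg 9002 and the rollback then releases the reserved log space.
BEGIN TRANSACTION;
GO
INSERT INTO dbo.Filler DEFAULT VALUES;
GO 100000
```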
May 23, 2017 at 6:23 pm
abhij33t - Tuesday, May 23, 2017 6:00 PM
It did get 100% full. That's why it failed and rolled back. 😉
If you want it to stay full, you're going to need to sneak up on it with a loop that checks how full the log file is before it does another insert.
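A rough sketch of that "sneak up" loop, assuming a table of wide fixed-width rows (`dbo.Filler` here is a hypothetical name) and using the `sys.dm_db_log_space_usage` DMV available in SQL Server 2012 and later:

```sql
-- Sketch: insert in small committed transactions and check log usage each
-- time, stopping just below full so nothing rolls back and releases space.
USE LogTest;  -- assumed test database
DECLARE @pct FLOAT = 0;
WHILE @pct < 95.0   -- stop margin; leaves the log nearly full but intact
BEGIN
    INSERT INTO dbo.Filler DEFAULT VALUES;   -- one small autocommitted insert
    SELECT @pct = used_log_space_in_percent
    FROM sys.dm_db_log_space_usage;          -- usage for the current database
END;
SELECT used_log_space_in_percent FROM sys.dm_db_log_space_usage;
```

Shrinking the stop margin (and the per-transaction row count) gets you closer to 100% without tripping Msg 9002.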
I'm curious, if you don't mind. What are you trying to do here? Why do you want to fill the log and leave it filled?
--Jeff Moden
Change is inevitable... Change for the better is not.
May 23, 2017 at 7:33 pm
If you must know my secret... I am running a smoke test to verify that we receive an alert from our customized monitoring system when the log gets full for any database. It's part of a company process. I will try your suggestion.
May 24, 2017 at 2:28 am
abhij33t - Tuesday, May 23, 2017 6:00 PM
It may sound extremely simple and maybe it is... but I am trying to fill the transaction log of a database in SQL Server 2014. This is what I did till now -
1. Created a database with a 20 MB log file with no autogrowth.
2. Changed recovery model to Full.
2.5 Take a full backup to initialise the log chain
3. Turned off implicit transactions
4. Created a table with a couple of columns and started inserting the data within lots of small explicit transactions.
See changes^
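The added step 2.5 matters because under the FULL recovery model the log keeps truncating on checkpoint until the first full backup starts the log chain. A one-line sketch for a throwaway test database:

```sql
-- Start the log backup chain so the log stops truncating on checkpoint.
-- Backing up to the NUL device discards the backup: for test systems only.
BACKUP DATABASE LogTest TO DISK = 'NUL';
```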
And your alerting system should probably check for % used > some threshold, not for = 100%. The latter won't usually hold for long: whatever gets the log to 100% gets rolled back, and the reserved space released by the rollback leaves the log very close to full, but not exactly 100%.
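That threshold check can be a simple query against `sys.dm_db_log_space_usage` (the 90 here is an arbitrary example threshold):

```sql
-- Report the current database if its log is more than 90% used.
SELECT DB_NAME() AS database_name,
       used_log_space_in_percent
FROM sys.dm_db_log_space_usage
WHERE used_log_space_in_percent > 90;
```

To cover every database in one pass, the older `DBCC SQLPERF(LOGSPACE)` returns log size and percent used for all databases on the instance.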
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
May 24, 2017 at 3:11 am
Try the transaction in batches
May 24, 2017 at 9:01 am
abhij33t - Tuesday, May 23, 2017 7:33 PM
Sounds interesting. And, yes... see the step Gail added. It will be necessary to start the log backup chain so that the "sneak up" method works.
--Jeff Moden
Change is inevitable... Change for the better is not.
May 24, 2017 at 9:16 am
As a suggestion: to build trends in behavior over time, and to see why your log space is used and can't be reused, you could also check the log reuse reason alongside it:
SELECT name, log_reuse_wait_desc FROM sys.databases;
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
This thing is addressing problems that dont exist. Its solution-ism at its worst. We are dumbing down machines that are inherently superior. - Gilfoyle
June 7, 2017 at 12:45 am
abhij33t - Tuesday, May 23, 2017 7:33 PM
When a log file gets full, it's already too late.
It's better to define a threshold (it may be different for each database) and raise an alarm when the log file fills beyond that threshold.
I actually prefer not to raise an alarm but to automatically initiate an extra log backup.
That helps prevent disasters during reindexing exercises.
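A sketch of that "backup instead of alarm" idea, as a step you might schedule in an Agent job; the threshold, database name, and backup path are all assumptions to tune per environment:

```sql
-- If log usage for this database crosses the example 70% threshold,
-- take an extra log backup instead of (or in addition to) alerting.
USE LogTest;  -- assumed database
DECLARE @pct FLOAT;
SELECT @pct = used_log_space_in_percent
FROM sys.dm_db_log_space_usage;

IF @pct > 70.0   -- example threshold; set per database
    BACKUP LOG LogTest TO DISK = 'C:\Backups\LogTest_extra.trn';
```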
_____________
Code for TallyGenerator
June 7, 2017 at 9:30 am
Sergiy - Wednesday, June 7, 2017 12:45 AM
Heh... the way I prevent disasters during reindexing exercises is to not do any reindexing exercises. It might not work for everyone but I've not done any index maintenance since 17 Jan 2016... almost a year and a half ago. Performance actually improved over the first six months and I've not had the "wasted space" explosion that I originally expected. I DO keep up with stats rebuilds, though.
--Jeff Moden
Change is inevitable... Change for the better is not.