July 25, 2005 at 12:07 am
Hi,
I have a job that reindexes a database. Whenever this job runs, it fails with the following message:
Executed as user: @@@@. The log file for database 'xxxx' is full. Back up the transaction log for the database to free up some log space. [SQLSTATE 42000] (Error 9002) The statement has been terminated. [SQLSTATE 01000] (Error 3621). The step failed.
No matter how much I increase the log size, it still runs out of space.
I have now set the log file to unrestricted growth. When I run
DBCC SQLPERF(LOGSPACE) I see this output:
Database Name Log Size (MB) Log Space Used (%) Status
XXXXXX 4449.5547 99.998291 0
I don't want to truncate the log (this is production). We have a scheduled transaction log backup, but when this job runs it fills up the transaction log.
What is the best thing I can do?
Any help is appreciated.
Helen
--------------------------------
Are you a born again. He is Jehova Jirah unto me
July 25, 2005 at 1:30 am
Hi Helen,
I have encountered this before, and the quickest and easiest way was to set the database to simple recovery mode, run the job, and then set it back to full recovery. This stopped the log file from growing.
ALTER DATABASE [DBNAME] SET RECOVERY SIMPLE
-- run the reindex job here
ALTER DATABASE [DBNAME] SET RECOVERY FULL
Hope this helps...
Ford Fairlane
Rock and Roll Detective
July 25, 2005 at 1:36 am
Hi,
This job runs for nearly 30 minutes, and we have a scheduled transaction log backup. How will I be able to do this? Is it advisable?
Thanks for your quick reply
Helen
--------------------------------
Are you a born again. He is Jehova Jirah unto me
July 25, 2005 at 1:44 am
Helen,
In our situation, the job ran at night around 1 am and would take anywhere from 1 to 2 hours. As long as we had a backup beforehand, I was happy to have nothing else run during that window and to restart the jobs once it was complete.
My schedule was: last log backup at 12:55 am, resume at 3:30 am, and then
every 15 minutes after that.
Hope this helps...
Ford Fairlane
Rock and Roll Detective
July 25, 2005 at 1:59 am
Thank you. Let me try this solution the next time.
Cheers
Helen
--------------------------------
Are you a born again. He is Jehova Jirah unto me
July 25, 2005 at 11:04 am
I do the same as Ford Fairlane mentions; it works great.
You can either temporarily disable your transaction log backup job (and remember to enable it later) or just let the transaction log backups fail, ignore the fact that they failed, and they'll work again once you switch back to FULL recovery.
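If you go the disable-and-re-enable route, one way is to flip the SQL Agent job off and back on around the reindex. A minimal sketch, assuming the log backup runs as an Agent job named 'LogBackup - MyDatabase' (a hypothetical name):
USE msdb
GO
-- Disable the scheduled transaction log backup job before the reindex runs
EXEC dbo.sp_update_job @job_name = N'LogBackup - MyDatabase', @enabled = 0
GO
-- ... reindex job runs here, then switch recovery back to FULL ...
-- Re-enable the log backup job afterwards
EXEC dbo.sp_update_job @job_name = N'LogBackup - MyDatabase', @enabled = 1
GO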
July 25, 2005 at 11:57 am
I didn't want to risk putting the database in simple mode or single-user mode at night. I was concerned that it might not automatically make it back to multi-user, and customers would attempt to access the data while I was still asleep.
I found a stored procedure that lets me reindex only the indexes that have more than a certain percentage of fragmentation (10% in our case). Otherwise, DBCC DBREINDEX completely rebuilds all indexes whether they need it or not. This eliminated the huge transaction log growth for us.
I don't have the link for this procedure right now, but search for "reindex fragmentation" either here or on Google and you'll find it. The general idea is sketched below.
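For what it's worth, a rough outline of that approach (not the exact procedure), using the standard DBCC SHOWCONTIG ... WITH TABLERESULTS pattern; the table and index names are placeholders:
-- Capture fragmentation figures for one table's indexes into a temp table
CREATE TABLE #frag (
    ObjectName VARCHAR(255), ObjectId INT, IndexName VARCHAR(255), IndexId INT,
    Lvl INT, CountPages INT, CountRows INT, MinRecSize INT, MaxRecSize INT,
    AvgRecSize INT, ForRecCount INT, Extents INT, ExtentSwitches INT,
    AvgFreeBytes INT, AvgPageDensity INT, ScanDensity DECIMAL, BestCount INT,
    ActualCount INT, LogicalFrag DECIMAL, ExtentFrag DECIMAL)

INSERT INTO #frag
EXEC ('DBCC SHOWCONTIG (''dbo.MyTable'') WITH TABLERESULTS, ALL_INDEXES, NO_INFOMSGS')

-- List only the indexes that are more than 10% logically fragmented ...
SELECT IndexName, LogicalFrag FROM #frag WHERE LogicalFrag > 10.0

-- ... and rebuild just those, for example:
-- DBCC DBREINDEX ('dbo.MyTable', 'IX_MyIndex', 0)

DROP TABLE #frag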
Randy
July 25, 2005 at 11:32 pm
I agree. If you switch to simple recovery and then back to full recovery, you will not be able to use your next set of transaction log backups for a point-in-time restore (at least not until you take another full or differential backup). Too risky for my blood; I'd much rather sleep well at night. What I do is increase the frequency of transaction log backups while I'm reindexing my databases. It works out well, since my tran log backups get zipped and moved off the server almost right away, which leaves more room for the extra index data.
Another good option is to switch to BULK_LOGGED recovery mode while you are running the index rebuilds, or even permanently if you can afford to. Switching to this recovery mode lets you keep your transaction log backup schedule without performing another full backup, and bulk operations like index rebuilds are only minimally logged, so they don't blow up your transaction log file.
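A minimal sketch of the bulk-logged switch, assuming a database called MyDatabase and a table dbo.MyTable (both placeholders):
ALTER DATABASE [MyDatabase] SET RECOVERY BULK_LOGGED

-- Index rebuilds are minimally logged under bulk-logged recovery,
-- so the live log file does not balloon the way it does under FULL
DBCC DBREINDEX ('dbo.MyTable')

ALTER DATABASE [MyDatabase] SET RECOVERY FULL

-- Take a log backup straight away; under bulk-logged it still contains the
-- bulk-changed extents (so the backup file can be large), but the log chain stays intact
BACKUP LOG [MyDatabase] TO DISK = N'D:\LogBackups\MyDatabase_log.trn'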
Enjoy,
Dan
July 25, 2005 at 11:38 pm
Hi Randy and Dan,
Thanks for substantiating that. Quite helpful for me; I was looking for such a solution.
Helen
--------------------------------
Are you a born again. He is Jehova Jirah unto me
July 26, 2005 at 12:39 am
Check the pros and cons: breaking your recovery model can have its consequences!
If I break it, I go from FULL to BULK_LOGGED, but rarely to SIMPLE.
I have some databases with data files of about 20 GB which have log files of 18 GB.
After a reindex / index defrag I create an extra transaction log backup, xcopy it to the safe zone, and then create an extra log backup WITH INIT to take away the volume pressure.
This way my database is point-in-time recoverable, even to a time during the reindex,
and users can safely use the database during the reindex/defrag.
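A sketch of that routine (the database name and paths are only placeholders):
-- Extra log backup straight after the reindex / defrag
BACKUP LOG [MyDatabase] TO DISK = N'E:\LogBackups\MyDatabase_extra.trn'

-- Copy it to the safe zone, e.g. from a CmdExec job step:
--   xcopy E:\LogBackups\MyDatabase_extra.trn \\SafeServer\LogBackups\ /Y

-- Then one more log backup WITH INIT to overwrite the local file and
-- relieve the pressure on the backup volume
BACKUP LOG [MyDatabase] TO DISK = N'E:\LogBackups\MyDatabase_extra.trn' WITH INIT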
Johan
Learn to play, play to learn!
Don't drive faster than your guardian angel can fly ...
but keeping both feet on the ground won't get you anywhere
July 26, 2005 at 11:28 am
You might want to schedule reindexing on the weekend, putting your db in simple recovery. You might also create a script using sp_denylogin for certain accounts, so your maintenance would run faster. By running it on the weekend you could also check the consistency of your DB afterwards with DBCC CHECKDB, which you may want to run regularly anyway.
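A minimal sketch of that weekend routine; the login and database names are placeholders, and sp_denylogin only applies to Windows-authenticated logins:
-- Keep the application account out while maintenance runs
EXEC sp_denylogin 'MYDOMAIN\AppServiceAccount'

ALTER DATABASE [MyDatabase] SET RECOVERY SIMPLE

-- ... reindex job runs here ...

ALTER DATABASE [MyDatabase] SET RECOVERY FULL

-- Consistency check once the reindex is done
DBCC CHECKDB ('MyDatabase')

-- Let the application account back in
EXEC sp_grantlogin 'MYDOMAIN\AppServiceAccount'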
July 26, 2005 at 1:17 pm
The main problem with log growth during a reindex comes when you reindex a clustered index: the index and data pages are one and the same, so moving pages in the index means moving pages of data, all of which has to be logged. For these you can probably get most of the gain using the lighter-weight DBCC INDEXDEFRAG, which should be a bit less heavy on the logging.
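A quick sketch (names are placeholders); because INDEXDEFRAG works in many small transactions, regular log backups can keep reclaiming log space while it runs:
-- Defragment the leaf level of the clustered index online
DBCC INDEXDEFRAG ('MyDatabase', 'dbo.MyTable', 'PK_MyTable')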
James Horsley
Workflow Consulting Limited
July 27, 2005 at 12:18 am
INDEXDEFRAG is also less blocking than DBREINDEX, but it doesn't do the same physical thing.
We use INDEXDEFRAG when our production mill is active (24/7) and we use DBREINDEX during planned production maintenance (24 hours roughly every 6 weeks), so the interference is less.
Johan
Learn to play, play to learn!
Don't drive faster than your guardian angel can fly ...
but keeping both feet on the ground won't get you anywhere