February 22, 2011 at 2:44 am
Everything we monitor, we do through Reporting Services. We build datasets for each piece of information we want (failed backups, SQL errors, CPU usage, PLE, disk space, growth, etc.) and build the reports up from there.
For database growth we use a stored procedure developed by Richard Ding, adjusted to our needs.
February 22, 2011 at 2:47 am
Thanks everyone for all the information, it's very useful. I will look into all of these.
February 28, 2011 at 2:40 am
Grant - I am utterly confused. I did as you suggested and created tlog backups every 10 minutes on one of our systems that is in full recovery mode and struggles with a large tlog. So the database backs up fully at 10pm, and the tlog backs up every 10 minutes round the clock. I have found that some of the tlog backup jobs fail, saying there is no current full backup, despite the fact that it is still there on the drive (about two thirds seem to succeed). And I still come in on a Monday morning to find a log of 20 GB, way over where it should be.
So where am I going wrong?
I am still left with needing to shrink the log!
My colleague says that he has looked into this many times on Google and found that it is a known problem; there was even a shrink script he found because of it. Since then he has just used the shrink script, as he has never found a solution. It would seem I'm encountering the same now.
I'm not sure why other people don't have this problem.
Can you help at all?
Many thanks.
February 28, 2011 at 6:37 am
It's not a "known problem"; it's a particular behavior. I'm not sure why some of the log backups thought that there wasn't a full backup, but it's a switch that gets thrown on the database, so if it says there isn't one... there probably isn't.
As to the log size, log backups truncate the data within the log; they don't shrink it. If your log is constantly growing to a particular size and you're shrinking it over and over, you're fragmenting the heck out of your disks. Let the log get to the size it needs to be. Unless it's outrageous, and 20 GB may not be, depending on the size of the database being serviced, leave it there.
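For anyone wanting to check this themselves, a quick way to see how much of the log file is actually in use (as opposed to its size on disk), and why the log isn't being truncated, is a sketch like the following (the database name is a placeholder):

```sql
-- Log file size vs. percentage actually in use, for every database
DBCC SQLPERF (LOGSPACE);

-- If the log won't truncate even after a log backup, this column
-- reports the reason (e.g. LOG_BACKUP, ACTIVE_TRANSACTION, REPLICATION)
SELECT name, recovery_model_desc, log_reuse_wait_desc
FROM sys.databases
WHERE name = N'YourDatabase';  -- placeholder name
```

If `log_reuse_wait_desc` says `LOG_BACKUP`, more frequent log backups are the answer; anything else points at a different culprit holding the log open.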
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
February 28, 2011 at 6:45 am
As for the "no current full backup" problem, check the SQL Server log before the transaction log backup that ended in that error and see if the database switched recovery models from FULL to SIMPLE. You might also look for a "Backup Log with Truncate Only" statement being run somewhere as well. One of those two are the most likely culprits for that problem.
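One way to search the error log for those events without scrolling through it by hand is the (undocumented but widely used) `xp_readerrorlog` procedure; a sketch, assuming a reasonably recent SQL Server version where the search-string parameters are supported:

```sql
-- Search the current SQL Server error log (log 0, type 1 = SQL Server)
-- for recovery model changes, which appear as messages like
-- "Setting database option RECOVERY to SIMPLE for database ..."
EXEC master.dbo.xp_readerrorlog 0, 1, N'RECOVERY', N'database';
```

Since the procedure is undocumented, the parameter list can vary between versions; reading the log through Management Studio's Log File Viewer gets you the same information.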
David
@SQLTentmaker
"He is no fool who gives what he cannot keep to gain that which he cannot lose" - Jim Elliot
February 28, 2011 at 7:01 am
How do these databases get loaded with data?
If it complains about a missing full backup, then the database might have been switched to simple recovery and then back to full (you can see this in the errorlog).
If the load procedure switches your db to simple recovery and then back to full once everything is loaded, you have to plan an extra step to take a full backup afterwards.
If your log still grows too fast, even though you take a log backup every 10 minutes, you can add an alert that triggers a log backup once the log reaches 70% full.
That way you still have a maximum data loss of 10 minutes, and you have a way to cope with big swings in log size.
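A sketch of such an alert using SQL Agent's performance-condition alerts (the database name and job name are placeholders, and the counter object prefix may differ on a named instance):

```sql
-- Fire when MyDb's log is more than 70% full, and respond by running
-- an existing log-backup job (job name is a placeholder)
EXEC msdb.dbo.sp_add_alert
    @name = N'MyDb log 70 percent full',
    @performance_condition = N'SQLServer:Databases|Percent Log Used|MyDb|>|70',
    @delay_between_responses = 600,  -- don't re-fire more than once per 10 min
    @job_name = N'Backup Log - MyDb';
```

The `@delay_between_responses` setting keeps the alert from firing repeatedly while the log backup is still running.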
March 1, 2011 at 6:33 am
Paula-196779 (2/15/2011)
Hi, I wondered if anyone could recommend a monitoring tool for SQL Server? At the moment I have around 30 servers to monitor, and although I manage this fine through Enterprise Manager for most things, there are a couple of areas that would be a lot quicker if I could have an 'at a glance' kind of view. These things are filegroup sizes, particularly log files, failed jobs, and the last few hours of the SQL Server error log.
I was thinking of a dashboard kind of thing. It is mainly because out of the 200-ish databases we have, there are around 40 whose log files grow inordinately, and we have to shrink them from time to time. As far as I'm aware this is a bit of a known error in SQL Server and can only be fixed by going into Simple recovery mode, which isn't an option for us. So, I would love to see on a daily basis which files are big and need to be dealt with. Along with the other things I've mentioned....
Any ideas would be great.
Thanks,
Paula.
Hi Paula, though it won't cover all of your needs, you might want to consider SQL Agent Insight (www.brentec.ca) that will do your Agent monitoring with configured thresholds and email alerting. Over 40 metrics for Agent are monitored with the ability to disable monitoring for groups of metrics that might not be as important to you.
Regards.