October 21, 2011 at 11:01 am
I have a script that dynamically loads all default trace files into a temp table so I can find auto-grow and shrink events, and I am wondering if this is a good way to monitor them. I am also considering setting up a trace that runs at all times (starting when the server starts) and captures only grow and shrink events, so I can read those trace files instead of the default trace. A few internet searches for canned solutions turned up nothing that fit just right. Has anyone solved this problem? What are your thoughts on the two methods, or on other ways to monitor?
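For context, here is a minimal sketch of the kind of query I mean, reading grow/shrink events out of the default trace (event classes 92/93 are data/log file auto-grow and 94/95 are data/log file auto-shrink; exact column arithmetic is my assumption):

```sql
-- Sketch: pull auto-grow/shrink events from the default trace.
DECLARE @path nvarchar(260);

-- Locate the current default trace file.
SELECT @path = [path]
FROM sys.traces
WHERE is_default = 1;

SELECT  te.name                        AS event_name,
        t.DatabaseName,
        t.FileName,
        t.StartTime,
        t.Duration / 1000              AS duration_ms,  -- Duration is reported in microseconds
        t.IntegerData * 8 / 1024       AS growth_mb     -- IntegerData is the change in 8 KB pages
FROM    sys.fn_trace_gettable(@path, DEFAULT) AS t      -- DEFAULT reads rollover files as well
JOIN    sys.trace_events AS te
        ON te.trace_event_id = t.EventClass
WHERE   t.EventClass IN (92, 93, 94, 95)                -- auto-grow and auto-shrink events
ORDER BY t.StartTime;
```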
There are no special teachers of virtue, because virtue is taught by the whole community.
--Plato
October 21, 2011 at 11:45 am
I believe you could certainly run a server-side trace for this, but I am wondering why you are so concerned about these events.
How are you using the data and what value does it hold?
Most people only watch the available space on disk. I will often take periodic snapshots of database sizes to predict space needs, but I have never monitored the growth events themselves.
CEWII
October 21, 2011 at 11:59 am
They're in the default trace, so you can use that with no extra work required.
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
October 21, 2011 at 12:15 pm
Elliott Whitlow (10/21/2011)
I believe you could certainly run a server-side trace for this, but I am wondering why you are so concerned about these events.
I have adjusted my auto-grow settings to sensible sizes, but I am concerned because I prefer to grow my files manually during a period of low usage. If a log file grows, I want to know within 24 hours. As far as I know, log files cannot be instant-file-initialized in SQL Server 2008, so a large log growth can be quite disruptive. I have also read that too many auto-grow events across various files on the same physical disk can lead to high levels of external file fragmentation. I have no practical experience with external file defragmentation when it comes to SQL Server database files, but from what I have read it is difficult to fix.
Most people only watch the available space on disk. I will often take periodic snapshots of database sizes to predict space needs, but I have never monitored the growth events themselves.
I do watch overall disk space, but I also monitor free space within the data files themselves so I can plan when to grow my files. A spike in space used will also show up in my reports.
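The in-file free space check is just something along these lines (a sketch for the current database; the 8 KB page arithmetic is my assumption):

```sql
-- Sketch: free space inside each file of the current database.
-- FILEPROPERTY(name, 'SpaceUsed') returns allocated 8 KB pages.
SELECT  name                                                AS logical_name,
        size * 8 / 1024                                     AS file_size_mb,
        FILEPROPERTY(name, 'SpaceUsed') * 8 / 1024          AS used_mb,
        (size - FILEPROPERTY(name, 'SpaceUsed')) * 8 / 1024 AS free_mb
FROM    sys.database_files;
```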
There are no special teachers of virtue, because virtue is taught by the whole community.
--Plato
October 21, 2011 at 12:18 pm
GilaMonster (10/21/2011)
They're in the default trace, so you can use that with no extra work required.
That sounds good. I already have working code that reads the default trace into a temp table, so I'll just wrap it in a proc and add some mail alerting. Thanks Gail 🙂
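Roughly what I have in mind, in case anyone finds it useful later (a sketch only: the procedure name, mail profile, and recipient are made up, and the column arithmetic is my assumption):

```sql
-- Sketch: mail a summary of any auto-grow events from the last 24 hours.
CREATE PROCEDURE dbo.usp_AlertOnAutoGrow
AS
BEGIN
    SET NOCOUNT ON;

    DECLARE @path nvarchar(260), @body nvarchar(max);

    SELECT @path = [path] FROM sys.traces WHERE is_default = 1;

    -- Build one line per growth event.
    SELECT @body = ISNULL(@body + CHAR(13) + CHAR(10), '')
                 + CONVERT(varchar(23), t.StartTime, 121) + '  '
                 + t.DatabaseName + '  ' + t.FileName
                 + '  grew ' + CONVERT(varchar(10), t.IntegerData * 8 / 1024) + ' MB'
    FROM    sys.fn_trace_gettable(@path, DEFAULT) AS t
    WHERE   t.EventClass IN (92, 93)                    -- data/log file auto-grow
      AND   t.StartTime > DATEADD(HOUR, -24, GETDATE())
    ORDER BY t.StartTime;

    IF @body IS NOT NULL
        EXEC msdb.dbo.sp_send_dbmail
             @profile_name = 'DBA Mail',                -- assumed mail profile
             @recipients   = 'dba@example.com',         -- assumed recipient
             @subject      = 'Auto-grow events in the last 24 hours',
             @body         = @body;
END;
```

Scheduled as a daily Agent job, that would meet the "know within 24 hours" goal above.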
There are no special teachers of virtue, because virtue is taught by the whole community.
--Plato