April 4, 2005 at 3:15 am
What should be the daily routine of an ideal DBA, apart from all these:
Watching SQL Server logs.
Watching Event Viewer.
Monitoring counter logs.
Viewing trace (.trc) files/reports.
Backups.
April 4, 2005 at 3:35 am
Make a large cup of non-decaf coffee.
I would put checking backups first, closely followed by checking disk space and current server activity.
I have my SQL Servers email me with the results of the nightly maintenance plans and various counters.
Within Outlook I have folders and rules that redirect the various server messages, so I can see my SQL Server stuff versus the spam.
April 4, 2005 at 3:38 am
I guess this all depends on the actual role. I know that a lot of DBAs do a lot aside from just administration.
Things I check on a daily basis...
* Backups
* Scheduled Jobs
* Disk usage
* Event Viewer
* SQL Error Logs
* SQL Agent Logs
The company I work for has no internal application systems, so there is no need for development as such. As I am still new to the company, I spend the rest of my time trawling through the vendor database looking for (and finding) potential bottlenecks and making recommendations for change.
That's just a start. I'm sure you'll get a lot more good advice on this post.
April 4, 2005 at 5:14 am
Check Security bulletin boards.
Test new patches (if any) in test environment.
Planning for when things go wrong (as they will).
Writing scripts to make your job easier.
Testing/developing optimisations for frequently run queries.
Maintaining written server logs (to make sure that anyone can flick through to see what issues have come about and how they were fixed, or if they're liable to arise again).
Then there's the talking amiably with database users as and when possible. Having a friendly working relationship with them helps when things fall apart. People that like you are far more likely to be patient with you than ones you've leaned on hard.
April 4, 2005 at 5:40 am
Believe it or not this happened to me the other day and this looks as good a place as any to post:
I went to a DBA at the place where I work to validate my memory. He said, "Let's consult the trash can"... then literally pulled a magazine out of it, promptly opened it to the proper page, and said, "Here... you can keep this."
Now how many of us would like to have that as part of our job????
Good Hunting!
AJ Ahrens
webmaster@kritter.net
April 4, 2005 at 8:28 am
What counter logs do you regularly keep track of? And what are you looking for in the trc file/report? I'm pretty new to performance tracking (as opposed to optimization) and would like more info on the day-to-day monitoring of performance. Does running a performance/counter log make a big impact on the performance of your servers?
Question for everyone as a whole: I'm a new DBA who's taken over for a previous one. He had profiler running on the two web production boxes to watch for security issues. Do you guys/gals suggest having profiler running on a machine, or running a trace through T-SQL and monitoring a file/table? I would think the profiler option would be problematic because of the performance impact and it would have to be started and running (obviously). I suppose the T-SQL method could be automatically started on start of the DB.
Thanks!
April 4, 2005 at 11:05 am
Running a profiler trace has a significant performance impact on the server so I would not recommend using it as an auditing solution. As for daily tasks, I recommend running through entire DR scenarios with your latest backups at least once a month. The only way you can ensure disaster recovery is to test a real world scenario. This should include not only restoring your user and application databases but also the system databases. In a disaster scenario you don't want to waste time looking through online message boards for help in how to restore the master db ;}
April 4, 2005 at 11:16 am
Running the GUI profiler will absolutely cane your server. The command line utility has a much lower overhead.
April 4, 2005 at 1:51 pm
Khalid
April 4, 2005 at 2:00 pm
I think that what a DBA should be watching is what the developers are up to, in between watching the Red Sox games.
All other time should be dedicated to writing scripts to make your life easier, as Rich James said (totally agree!). Use WMI scripting and VB scripting to check on everything, including drive space and backups, and have it send you an email if something goes wrong. Do not forget to run an independent script from another computer that confirms your original script is running fine.
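A minimal sketch of the drive-space half of such a check, written in modern Python purely for illustration (the post suggests WMI/VB scripting; the drive list and threshold below are placeholders, and the email step is left as a comment):

```python
import shutil

def low_disk_alerts(paths, min_free_ratio=0.10):
    """Return (path, free_ratio) pairs for volumes below the free-space threshold."""
    alerts = []
    for path in paths:
        usage = shutil.disk_usage(path)        # total, used, free in bytes
        free_ratio = usage.free / usage.total
        if free_ratio < min_free_ratio:
            alerts.append((path, free_ratio))
    return alerts

if __name__ == "__main__":
    # Placeholder volume list; on a real server this would be the data/log drives.
    for path, ratio in low_disk_alerts(["/"], min_free_ratio=0.10):
        print(f"LOW DISK: {path} has only {ratio:.0%} free")
        # Here you would send yourself an email (e.g. via smtplib), as the post suggests.
```

Running a copy of this from a second machine, pointed at the same volumes, gives the independent confirmation the post recommends.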
Yelena
Regards,Yelena Varsha
April 4, 2005 at 5:16 pm
One thing I've found handy as part of the checking routine: I run SQL Server locally on my workstation and all the other SQL Servers forward their events to it. It makes checking the event logs so much easier.
--------------------
Colt 45 - the original point and click interface
April 4, 2005 at 11:21 pm
For many of those things mentioned here I use Embarcadero Performance Center.
I check my jobs on a website. The makewebtask functionality in SQL 2000 makes life very easy for me. I created a job/sp that writes all the wanted information of my server to an html file. I check those pages several times a day. They are refreshed every hour.
What do I see on the html file?
I am building something right now that will display information from all the servers on one page, showing only the errors. At the moment I have only 6 SQL instances in a production environment to manage, but later this year there will probably be many more, so checking all the pages individually would take a lot of time.
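Purely as an illustration of that idea (the poster used makewebtask and a stored procedure; this is a Python sketch with made-up server names and check results), an errors-only status page could be generated like this:

```python
def status_page(results):
    """Render a minimal HTML status page listing only the servers with errors.

    `results` maps server name -> list of error strings (empty list = healthy).
    """
    rows = []
    for server, errors in sorted(results.items()):
        for err in errors:
            rows.append(f"<tr><td>{server}</td><td>{err}</td></tr>")
    if not rows:
        return "<html><body><p>All servers OK</p></body></html>"
    table = ("<table><tr><th>Server</th><th>Error</th></tr>"
             + "".join(rows) + "</table>")
    return f"<html><body>{table}</body></html>"

# Hypothetical check results for three instances; only the failure appears on the page.
results = {
    "SQL01": [],
    "SQL02": ["Nightly backup job failed"],
    "SQL03": [],
}
html = status_page(results)
```

Healthy servers drop off the page entirely, so with dozens of instances you only ever read about the ones that need attention.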
Also, I just started looking at MOM 2005 yesterday, which will probably make life much easier.
So, these two things make my life easy for the day-to-day checking.
The Performance Center tool especially is very good; you can set your thresholds yourself.
April 4, 2005 at 11:28 pm
How about using the SQL Health and History Tool to track server and database info?
--------------------
Colt 45 - the original point and click interface
April 5, 2005 at 4:35 am
All of this is good, but many of these suggestions would struggle with, say, 60 servers to manage.
In general, the DBA has to define what is right for their environment. Defining the processes to manage and alert has to come first; then it's a case of managing the information you gather from them.
Typically I would know if backups had failed, disks were short of space, jobs had failed, or servers were not running, probably before I even got to work! I'd say that if you have to manually check items such as backups, then your processes are wrong.
[font="Comic Sans MS"]The GrumpyOldDBA[/font]
www.grumpyolddba.co.uk
http://sqlblogcasts.com/blogs/grumpyolddba/
April 5, 2005 at 5:13 am
For a large number of servers, you really need proactive monitoring of your hardware, disk space, and SQL Server jobs.
Once everything is in check and running fine get a coffee and check your tee time