July 6, 2009 at 11:51 am
I have a SQL 2000 server that I am migrating over to SQL 2005, one DB at a time. We have accumulated several databases over the years, and I am certain that many of them aren't being used any more, so I'd like to take them offline rather than migrate them. Rather than just take them offline and wait for someone to send me a panicked email, I'd like to determine which databases have been used in the last 90 days. If a database hasn't been used in that time, I'd feel more confident taking it offline.
Searching through Perfmon, Profiler, and the sys.* tables, I couldn't find anything that jumped out at me as a way of determining when a database was last used.
Any suggestions? Could I also track who has used it and from what IP? I suppose I could run sp_who2 every 10 seconds, but then I'd risk missing quick transactions that occur between polls, and meanwhile put unnecessary load on the SQL box.
July 6, 2009 at 12:55 pm
Set up and run a trace. Books Online has the details on how to do that. (MSDN too, of course.)
That's what I've done in similar situations.
Won't tell you the last 90 days, but there really isn't a way to do that so far as I know. What it will do is tell you all activity during the time you run the trace. I've always found that more than adequate to this need.
- Gus "GSquared", RSVP, OODA, MAP, NMVP, FAQ, SAT, SQL, DNA, RNA, UOI, IOU, AM, PM, AD, BC, BCE, USA, UN, CF, ROFL, LOL, ETC
Property of The Thread
"Nobody knows the age of the human race, but everyone agrees it's old enough to know better." - Anon
July 6, 2009 at 4:44 pm
Thanks for the reply, but I really do need to look back 90 days, since some of our applications only run quarterly. I would run a trace with SQL Profiler as you suggested, but that seems like a lot of overhead, and a lot of data to sift through just to determine who logged on last and from where.
July 7, 2009 at 6:55 am
Profiler creates a lot more overhead than a server-side trace that dumps to text files. That creates very, very little processing overhead. Might matter if you're looking at thousands of transactions per second, but on servers that deal with a hundred or so transactions per second, I've run server traces without anyone noticing the performance hit.
I don't know of a way to look back 90 days unless you've had a trace running. If you find one, please let me know, since it would come in very handy.
You could get some of that data by parsing the transaction log, if your databases are all in the Full recovery model, but that wouldn't capture SELECTs; it would only show insert/update/delete activity. So that won't do for what you need.
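For what it's worth, the usual way to poke at the log from T-SQL is the undocumented fn_dblog function. A rough sketch (SQL 2005 syntax; the database name is just a placeholder) - it only shows whatever is still in the active portion of the log, so it's no substitute for 90 days of history:

-- fn_dblog is undocumented and only exposes the active portion of the log.
USE SomeCandidateDB;                        -- placeholder database name
GO
SELECT TOP (50)
       [Current LSN],
       Operation,
       [Transaction ID],
       [Begin Time]                         -- populated on LOP_BEGIN_XACT rows
FROM   fn_dblog(NULL, NULL)                 -- NULL, NULL = all available log records
WHERE  Operation IN ('LOP_BEGIN_XACT', 'LOP_INSERT_ROWS',
                     'LOP_MODIFY_ROW', 'LOP_DELETE_ROWS')
ORDER BY [Current LSN] DESC;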
- Gus "GSquared", RSVP, OODA, MAP, NMVP, FAQ, SAT, SQL, DNA, RNA, UOI, IOU, AM, PM, AD, BC, BCE, USA, UN, CF, ROFL, LOL, ETC
Property of The Thread
"Nobody knows the age of the human race, but everyone agrees it's old enough to know better." - Anon
July 7, 2009 at 1:09 pm
There's no way to do this other than logging the data. It doesn't magically exist anywhere in the server.
A server-side trace, which you can script out from a Profiler template, will work. Minimize the events and columns you capture, store the output in files, and set the trace to run at server startup (in case of a reboot). You can build a proc that sets up the trace and then mark it as a startup procedure.
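A rough sketch of what that proc might look like - the proc name and file path are placeholders, and the event/column choice (Audit Login with login, host, database, and start time) is just one reasonable option:

USE master;
GO
CREATE PROCEDURE dbo.usp_StartDbUsageTrace      -- placeholder name
AS
BEGIN
    DECLARE @traceid INT, @maxfilesize BIGINT, @on BIT;
    SET @maxfilesize = 50;                      -- MB per rollover file
    SET @on = 1;

    -- 2 = TRACE_FILE_ROLLOVER; the folder must exist and SQL appends .trc.
    -- NB: fails if the file already exists, so in practice tack a timestamp onto the name.
    EXEC sp_trace_create @traceid OUTPUT, 2, N'C:\Traces\DbUsage', @maxfilesize, NULL;

    -- Event 14 = Audit Login; columns 11 = LoginName, 8 = HostName,
    -- 35 = DatabaseName, 14 = StartTime
    EXEC sp_trace_setevent @traceid, 14, 11, @on;
    EXEC sp_trace_setevent @traceid, 14, 8,  @on;
    EXEC sp_trace_setevent @traceid, 14, 35, @on;
    EXEC sp_trace_setevent @traceid, 14, 14, @on;

    EXEC sp_trace_setstatus @traceid, 1;        -- 1 = start the trace
END;
GO
-- Start it now, and have it restart automatically after a reboot
EXEC dbo.usp_StartDbUsageTrace;
EXEC sp_procoption N'usp_StartDbUsageTrace', N'startup', N'on';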
If you need to go back through it later, you can load the trace data into Profiler or a table and then query it or set filters.
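Reading it back into a table is just fn_trace_gettable - for example, against the placeholder file name from the sketch above, something like:

SELECT   DatabaseName,
         LoginName,
         HostName,
         MIN(StartTime) AS FirstSeen,
         MAX(StartTime) AS LastSeen,
         COUNT(*)       AS Logins
FROM     fn_trace_gettable(N'C:\Traces\DbUsage.trc', DEFAULT)   -- DEFAULT = include rollover files
GROUP BY DatabaseName, LoginName, HostName
ORDER BY DatabaseName, LastSeen DESC;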
July 7, 2009 at 2:34 pm
I was afraid that would be the case. I was hoping that either:
A) There would be a simple "Last Accessed Date" in some system table somewhere
or
B) I could write some sort of trigger at the database level that would update a "LastAccessed" table for me whenever a SPID attached to a database.
Oh well, at least now I know for sure. Thanks for the help.
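For the record, something close to option B does become possible once the databases are on SQL Server 2005 SP2 or later, via a server-level logon trigger. A rough sketch only - the table and trigger names are made up, and ORIGINAL_DB_NAME() is assumed to be available on your build; it reports the database named in the connection string, so later USE statements and cross-database queries are missed:

USE master;
GO
CREATE TABLE dbo.LastAccessed                   -- placeholder audit table
(
    DatabaseName sysname       NULL,
    LoginName    sysname       NOT NULL,
    HostName     nvarchar(128) NULL,
    LoginTime    datetime      NOT NULL
);
GO
-- Caution: if a logon trigger throws an error, logins start failing, so keep it trivial.
CREATE TRIGGER trg_LogDbAccess
ON ALL SERVER
WITH EXECUTE AS 'sa'                            -- so every login can write the audit row
FOR LOGON
AS
BEGIN
    INSERT INTO master.dbo.LastAccessed (DatabaseName, LoginName, HostName, LoginTime)
    VALUES (ORIGINAL_DB_NAME(), ORIGINAL_LOGIN(), HOST_NAME(), GETDATE());
END;
GO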