January 12, 2005 at 9:44 am
Here's obscurity for you... I need to know how to load a trace file generated by SQL Server 7.0 into SQL Server 2000, without using SQL Profiler.
The file is created with what I'd call the
xp_trace_addNewQueue
xp_trace_setQueueDestination
xp_trace_startConsumer
framework. This produces a file (forced to have the .trc extension); all of my testing has been with the file after the trace was stopped (xp_trace_destroyQueue) and copied to the box mentioned below.
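For anyone following along, here is a rough sketch of the setup, pieced together from the 7.0 Books Online. I'm writing this from memory, so the parameter order, column bitmask, event class numbers, and destination code should all be double-checked against BOL; the path is made up:

    DECLARE @queue_handle int
    -- max_items, timeout, cpu/io thresholds, column bitmask, handle OUT
    EXEC master..xp_trace_addnewqueue 1000, 5, 95, 90, 24, @queue_handle OUTPUT
    -- register the event classes to capture (numbers per the 7.0 BOL)
    EXEC master..xp_trace_seteventclassrequired @queue_handle, 12, 1
    -- destination 2 = file; SQL forces the .trc extension onto the name
    EXEC master..xp_trace_setqueuedestination @queue_handle, 2, 1, NULL, N'C:\Traces\audit'
    EXEC master..xp_trace_startconsumer @queue_handle
    -- ...and when done: EXEC master..xp_trace_destroyqueue @queue_handle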
I'm hoping to load the files produced by numerous boxes (both 7.0 and 2000) into a central SQL 2000 box, using fn_trace_gettable. The problem is, this doesn't seem to be able to read 7.0 trace files. Any ideas on what's going on or how to work around it would be greatly appreciated. (It has to be automated--manual intervention via Profiler is not an option.)
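The loading step on the central box is nothing fancier than fn_trace_gettable into a table, along these lines (path and table name are placeholders):

    -- works fine against 2000-generated .trc files; chokes on the 7.0 ones
    SELECT *
    INTO dbo.trace_archive
    FROM ::fn_trace_gettable(N'\\CentralBox\Traces\server1.trc', DEFAULT)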
Why, yes, this is Sarbanes-Oxley related. How'd ya guess?
Philip
January 12, 2005 at 4:13 pm
Is there any reason why you have to use a .trc file to do this? If you set the trace to dump into a table instead, then you could pull the data to the MSX server via a DTS query, along the lines of the sketch below. Much easier to maintain and keep up.
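If the servers are linked, the pull could be as simple as this as the source query (all server, database, and table names here are hypothetical):

    -- copy the monitored server's trace table into the central archive
    INSERT INTO dbo.central_trace_archive
    SELECT *
    FROM OPENQUERY(PRODSERVER1, 'SELECT * FROM audit_db.dbo.trace_queue_table')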
January 13, 2005 at 8:12 am
I was hoping to write to a file for two reasons. First, as near as I can tell, SQL 2000 server-side traces can only be configured to write to files (which raises the question: how does the data get into Profiler?). If the trace data gets dumped into trace files from both the 7.0 and 2000 servers, then I only need to write one routine to read and use the data (in SQL 2000, of course).
Second is the inevitable performance question: what overhead is required to support this implementation? The trace would have to run on sensitive production boxes, and there is much interest in not bogging down the servers with superfluous activity. I haven't found anything documented about the impact of tracing on machine performance, and we don't have the resources to set up a machine to benchmark it, so I am reduced to vague rationalizations and hand-waving. My thinking is that a trace configured with minimal columns and infrequent events, streaming minimal data to an open file handle (as opposed to a persistent database connection, let alone a "forwarded server"), *should* have minimal impact on the system.
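To make that concrete, here is the sort of minimal 2000-side trace I have in mind. sp_trace_create only writes to a file; the event and column numbers shown (20 = Audit Login Failed, 11 = LoginName, 14 = StartTime) are illustrative, and the path is a placeholder:

    DECLARE @TraceID int, @maxfilesize bigint, @on bit
    SET @maxfilesize = 50  -- MB per rollover file
    SET @on = 1            -- sp_trace_setevent insists on a bit parameter
    -- option 2 = TRACE_FILE_ROLLOVER; the .trc extension is appended for you
    EXEC sp_trace_create @TraceID OUTPUT, 2, N'C:\Traces\audit', @maxfilesize, NULL
    EXEC sp_trace_setevent @TraceID, 20, 11, @on
    EXEC sp_trace_setevent @TraceID, 20, 14, @on
    EXEC sp_trace_setstatus @TraceID, 1  -- start the trace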
In any case, I ultimately stumbled over perhaps the only two entries on the entire web that address the subject. Apparently there was a utility, a "tracefile" COM object, in the SQL Server 7.0 Resource Kit (and thus also in MS BackOffice 4.5) that provided programmatic access to these trace files. Apparently it was not really supported by Microsoft then, and is less so now.
I'll be proposing (a) reverse-engineering the 7.0 trace file format and writing our own interpreter, (b) having someone do a lot of manual pointing and clicking in Profiler, or (c) not doing this project. I will mention the table solution, but that won't sit well with some folks, at which point I get to ask whether they are willing to trade some performance for SOX compliance and overall security... at which point the higher-ups will debate the matter for a few weeks and I can get on with other things.
No matter what, if there's any information or wondrously intuitive ideas out there on this subject, I'd really like to hear them. (And if anyone has benchmarks or links to articles on the impact of running trace OR profiler on a system, please post!)
Philip
January 13, 2005 at 11:08 am
We're being hammered by HIPAA on our side, and Profiler is the worst-performing utility that comes with SQL Server that I've had experience with. We are looking at third-party solutions. I believe the one we like the best at this time is SQL Guard, which doesn't run on the box at all. It's a device that piggybacks onto the network connection from wherever to the SQL Server, runs Linux, and collects every bit of Structured English Query Language that is issued to the SQL Server, along with who sent it and from where.

The method you're looking at smacks a bit of the C2 security that 7.0 touted compliance with. I worked at an Air Force base at that time; I used it for about a day and a half and had to turn it off due to performance and lack of hard drive space. I believe that piece uses a form of Profiler. I don't believe there are any easy answers to this, but at least a third-party solution wouldn't be the DBAs' headache as much as it would be the Network Admins'.
January 13, 2005 at 8:43 pm
Thank you for the "in the field" report. The few reports I have heard are negative, and I haven't heard anything factual on the positive side, so consider me convinced.
Sadly, we don't have the wherewithal (i.e. many of the usual excuses) to look into third-party solutions. The current plan being reviewed is to turn on "standard" logging (all successful and failed logins) and work something from there--a plan which bounces out of the database group and into the Network group. One can only wonder where the hot potato will land next.
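For what it's worth, that "standard" login auditing can be switched on without Enterprise Manager by writing the AuditLevel registry value (0 = none, 1 = success, 2 = failure, 3 = both); xp_instance_regwrite adjusts the path for named instances, and the change takes effect after a service restart:

    EXEC master..xp_instance_regwrite
        N'HKEY_LOCAL_MACHINE',
        N'SOFTWARE\Microsoft\MSSQLServer\MSSQLServer',
        N'AuditLevel', N'REG_DWORD', 3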