April 7, 2010 at 6:46 am
Paul White NZ (4/7/2010)
Chris Howarth-536003 (4/7/2010)
If this is the only reason that you are looking to rework the code, then I would consider turning off the default trace, scripting it, amending the script to suit your requirements (for instance, excluding queries issued by your app, increasing the file size/count, etc.), and then creating a startup stored procedure to define and start your new trace.
I see your point, Chris.
Mind you, even if that is the main reason, I would still go further and look at using permanent tables (by whatever method, maybe SSIS) because tempdb tends to be quite busy enough as it is.
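For anyone following along, Chris's suggestion above (script a replacement trace and start it from a startup procedure) might look roughly like this. This is only a sketch: the file path, application name, and file sizes are illustrative assumptions, the procedure must live in master, and you need sysadmin rights to mark it as a startup proc.

```sql
CREATE PROCEDURE dbo.usp_StartCustomTrace
AS
BEGIN
    DECLARE @TraceID int;
    DECLARE @MaxFileSizeMB bigint;
    DECLARE @On bit;
    SET @MaxFileSizeMB = 50;
    SET @On = 1;

    -- Create the trace; option 2 = TRACE_FILE_ROLLOVER, keeping 5 files
    EXEC sp_trace_create
        @TraceID OUTPUT, 2, N'X:\Traces\CustomDefault',
        @MaxFileSizeMB, NULL, 5;

    -- Capture Object:Created (event 46): TextData (column 1), SPID (column 12)
    EXEC sp_trace_setevent @TraceID, 46, 1, @On;
    EXEC sp_trace_setevent @TraceID, 46, 12, @On;

    -- Exclude queries issued by your own app:
    -- column 10 = ApplicationName, logical op 0 = AND, comparison 7 = NOT LIKE
    EXEC sp_trace_setfilter @TraceID, 10, 0, 7, N'MyApp%';

    -- Start the trace
    EXEC sp_trace_setstatus @TraceID, 1;
END;
GO

-- Run the procedure automatically at SQL Server startup
EXEC sp_procoption N'dbo.usp_StartCustomTrace', 'startup', 'on';
```

Scripting the existing default trace from Profiler first gives you the full event/column list to start from; the two events shown here are just for illustration.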
Agreed, it's not ideal but it could be a workable option for the OP. Personally I'd prefer to leave the default trace alone and go with a permanent table solution, or just not worry about the amount of information being logged by the default trace.
If transaction log growth in the core database is the concern preventing the OP from using permanent staging tables, then another option would be to create a database specifically to contain the permanent tables, with the recovery model of the new database set to SIMPLE. This suggestion assumes that point-in-time recovery of the staging database is not required.
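A minimal sketch of that setup, assuming point-in-time recovery of the staging data really isn't needed (database and table names here are made up for illustration):

```sql
-- Dedicated staging database so bulk loads don't bloat the
-- core database's transaction log
CREATE DATABASE StagingDB;

-- SIMPLE recovery: the log truncates on checkpoint, and bulk
-- operations can be minimally logged
ALTER DATABASE StagingDB SET RECOVERY SIMPLE;

USE StagingDB;
CREATE TABLE dbo.Staging
(
    id      int          NOT NULL,
    payload varchar(200) NOT NULL
);
```

Queries in the core database can then reference the staging data with three-part names (StagingDB.dbo.Staging).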
Chris
April 7, 2010 at 3:26 pm
These are all great suggestions. Thanks for all the input guys. I did consider modifying my default trace, but that just didn't sit well with me. I could end up missing other important issues that could pop up later.
I'm liking the idea of populating permanent tables using bcp and truncate to minimize logging. Think I'll give that a shot.
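For reference, the truncate-and-reload pattern could be sketched like this. Table, database, and file names are illustrative assumptions; TABLOCK plus the SIMPLE (or BULK_LOGGED) recovery model is what allows the load to be minimally logged.

```sql
-- Minimally logged: deallocates pages rather than logging row deletes
TRUNCATE TABLE dbo.Staging;

-- Bulk load; TABLOCK enables minimal logging under SIMPLE/BULK_LOGGED
BULK INSERT dbo.Staging
FROM 'X:\Feeds\staging.dat'
WITH (TABLOCK, DATAFILETYPE = 'native');
```

From the command line, the roughly equivalent bcp call would be something like `bcp StagingDB.dbo.Staging in X:\Feeds\staging.dat -n -T -h "TABLOCK"`.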
April 7, 2010 at 6:36 pm
bobmclaren (4/7/2010)
I'm liking the idea of populating permanent tables using bcp and truncate to minimize logging.
Me too :w00t:
Think I'll give that a shot.
If you get time, post back to let us know how you got on - any issues you encountered and so on...?
Paul White
SQLPerformance.com
SQLkiwi blog
@SQL_Kiwi
April 8, 2010 at 8:40 am
Will do. I'm on vacation this week, and will of course be spending quite a bit of time catching up when I get in next week, but I'll get to it and report back.
April 14, 2010 at 1:04 pm
I'm back with good news!
I was finally able to get around to changing that code and implemented the permanent table solution. As a result, the default trace logs are no longer being pummeled with tens of thousands of "Object Created" events every hour. I could almost see my database server's shoulders relax as it exhaled a sigh of relief. As an added bonus, I also see that the recompile events have quieted down quite a bit as well and I dare say the application appears to be running faster.
I'm stoked. Thanks one and all.
April 14, 2010 at 4:43 pm
bobmclaren (4/14/2010)
I'm back with good news! I was finally able to get around to changing that code and implemented the permanent table solution. [...] I'm stoked. Thanks one and all.
Awesome! Thanks for taking the time to let everyone following this thread know 🙂
Paul White
SQLPerformance.com
SQLkiwi blog
@SQL_Kiwi
April 19, 2010 at 7:34 pm
I agree. Thanks, Bob!
--Jeff Moden
Change is inevitable... Change for the better is not.