March 25, 2008 at 9:13 am
I have a database that gets detached, archived and then reattached every day. I know this isn't a good way to work but just go with the scenario for the moment. What are the performance implications on the database from doing this? Will this screw up indexing on tables?
March 25, 2008 at 9:51 am
As far as I'm aware the only thing that could be affected is the statistics, so when you're detaching make sure that the @skipchecks parameter is set to 'false' (or just don't supply the parameter, since the default behaviour is to update the statistics).
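For reference, a minimal sketch of a detach that keeps the statistics refresh, per sp_detach_db's documented @skipchecks parameter ('MyDb' is a placeholder database name):

```sql
-- @skipchecks = 'false' (or omitting the parameter) tells SQL Server
-- to run UPDATE STATISTICS on the database before detaching it.
EXEC sp_detach_db @dbname = N'MyDb', @skipchecks = N'false';
```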
March 25, 2008 at 12:44 pm
Thanks for the reply. I'm just trying to figure out how two different databases on separate machines with similar hardware could be performing so differently (machine A takes 6 seconds to run a query that machine B runs in less than a second). I thought that perhaps it had to do with the indexing and the fact that the databases were being attached/detached every day. But after comparing indexes on the table in question, I see that the indexes are the same on both machines' tables and are not fragmented. I'm at a loss for what the differences are because, aside from the number of pages (machine A's table has about 200 more pages than machine B's table), I see no difference. I'm not an expert, so I feel like there are some big things that I'm overlooking. Do you know of anything I should be checking?
March 25, 2008 at 12:51 pm
Have you tried enabling the "update statistics" option during the detach? (I'm paraphrasing the actual name.) It would at least make sure that the stats are current upon attaching. Or you might care to update the statistics manually before the detach.
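A manual statistics refresh just before the detach might look like this (sp_updatestats covers every table in the current database; 'MyDb' and dbo.MyTable are placeholder names):

```sql
USE MyDb;                       -- placeholder database name
EXEC sp_updatestats;            -- refresh stats on all tables in the DB
-- or, for a single problem table, a full-scan update:
-- UPDATE STATISTICS dbo.MyTable WITH FULLSCAN;
GO
EXEC sp_detach_db @dbname = N'MyDb';
```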
----------------------------------------------------------------------------------
Your lack of planning does not constitute an emergency on my part... unless you're my manager... or a director and above... or a really loud-spoken end-user... All right - what was my emergency again?
March 25, 2008 at 10:20 pm
Is the timing difference ongoing? Detaching any database clears the plan cache on the server - there would be some overhead in compiling the queries. This might produce a different plan, potentially slower if the stats aren't accurate. Machine A would also need to load the buffer cache at least once.
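One way to see the compile overhead is to look at what's cached for the database; a rough sketch (the database name is a placeholder), assuming SQL Server 2005 or later for the DMVs:

```sql
-- Plans cached for this database; after a detach/attach this list
-- starts empty, so the first runs of each query pay a compile cost.
SELECT cp.usecounts, cp.objtype, st.text
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
WHERE st.dbid = DB_ID(N'MyDb');   -- placeholder database name
```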
March 25, 2008 at 10:39 pm
After updating the statistics, the queries are incredibly fast. The timing issue was ongoing but didn't follow any pattern. I am going to update the statistics prior to each detach and track the performance over time. Thanks for all of your replies!
March 26, 2008 at 12:07 am
Paul Koanui (3/25/2008)
Thanks for the reply. I'm just trying to figure out how two different databases on separate machines with similar hardware would be performing so differently
Are the execution plans the same on the two machines? Is the number of physical and logical IOs the same? (Use SET STATISTICS IO ON to see the reads)
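For the IO comparison, a minimal sketch to run on both machines (dbo.MyTable stands in for the real query's table):

```sql
SET STATISTICS IO ON;
SELECT COUNT(*) FROM dbo.MyTable;   -- placeholder for the slow query
SET STATISTICS IO OFF;
-- The Messages tab then reports logical reads, physical reads and
-- read-ahead reads per table; compare these numbers across machines.
```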
Is the load on the two machines the same? Is there blocking?
200 pages difference in size is not an insignificant number. It could well be enough to get the optimiser to pick a different plan. How many rows and pages total for the table?
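The row and page totals can be pulled like this (dbo.MyTable is a placeholder; sys.dm_db_partition_stats assumes SQL Server 2005 or later):

```sql
-- Total rows and used pages for one table, summed across partitions.
SELECT SUM(row_count)       AS total_rows,
       SUM(used_page_count) AS used_pages
FROM sys.dm_db_partition_stats
WHERE object_id = OBJECT_ID(N'dbo.MyTable');
```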
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability