What are the performance implications for this scenario?

  • I have a database that gets detached, archived and then reattached every day. I know this isn't a good way to work but just go with the scenario for the moment. What are the performance implications on the database from doing this? Will this screw up indexing on tables?

  • As far as I'm aware the only thing that could be affected is the statistics, so when you're detaching make sure that the @skipchecks parameter is set to 'false' (or just don't supply the parameter).
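
    A minimal sketch of what that looks like (the database name here is just a placeholder):

        -- Run from master; @skipchecks = 'false' tells SQL Server to run
        -- UPDATE STATISTICS on the database before detaching it
        USE master;
        EXEC sp_detach_db @dbname = N'ArchiveDb', @skipchecks = 'false';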

  • Thanks for the reply. I'm just trying to figure out how two different databases on separate machines with similar hardware would be performing so differently (machine A takes 6 seconds to run a query that machine B runs in less than a second). I thought that perhaps it had to do with the indexing and the fact that the databases were being attached/detached every day. But after comparing indexes on the table in question, I see that the indexes are the same on both machines' tables and are not fragmented. I'm at a loss for what the differences are because, aside from the number of pages (machine A's table has about 200 more pages than machine B's table), I see no difference. I'm not an expert, so I feel like there are some big things I'm overlooking. Do you know of anything I should be checking?
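
    For context, the fragmentation check I have in mind is something along these lines (dbo.MyTable is a stand-in for the real table name):

        -- Logical fragmentation and page count per index on one table
        SELECT i.name AS index_name,
               ips.avg_fragmentation_in_percent,
               ips.page_count
        FROM sys.dm_db_index_physical_stats(
                 DB_ID(), OBJECT_ID(N'dbo.MyTable'), NULL, NULL, 'LIMITED') AS ips
        JOIN sys.indexes AS i
            ON i.object_id = ips.object_id
           AND i.index_id = ips.index_id;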

  • Have you tried setting the "update statistics" option during the detach? (I'm paraphrasing the actual name - it's the @skipchecks parameter mentioned above.) That would at least make sure that the stats are current upon attaching. Or you might care to update them manually before the detach, something like the sketch below.
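
    This is just one way to do it; dbo.MyTable is a stand-in for a real table name:

        -- Refresh every statistic in the current database...
        EXEC sp_updatestats;

        -- ...or target one table with a full scan for the most accurate histogram
        UPDATE STATISTICS dbo.MyTable WITH FULLSCAN;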

    ----------------------------------------------------------------------------------
    Your lack of planning does not constitute an emergency on my part... unless you're my manager... or a director and above... or a really loud-spoken end-user. All right - what was my emergency again?

  • Is the timing difference ongoing? Detaching a database clears its plans from the plan cache - there's some overhead in recompiling the queries on the next run. Recompiling might also produce a different plan, potentially a slower one if the stats aren't accurate. Machine A would also need to repopulate the buffer cache after each attach.
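
    If you want to see that effect, a rough sketch like this lists when each cached plan was compiled - after a fresh attach you'd expect recent creation times across the board:

        SELECT TOP (20)
               qs.creation_time,
               qs.execution_count,
               st.text
        FROM sys.dm_exec_query_stats AS qs
        CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
        ORDER BY qs.creation_time DESC;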

  • After updating the statistics, the queries are incredibly fast. The timing issue was ongoing but didn't follow any pattern. I am going to update the statistics prior to each detach and track the performance over time. Thanks for all of your replies!

  • Paul Koanui (3/25/2008)

    Thanks for the reply. I'm just trying to figure out how two different databases on separate machines with similar hardware would be performing so differently

    Are the execution plans the same on the two machines? Is the number of physical and logical IOs the same? (Use SET STATISTICS IO ON to see the reads)
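
    For example (the SELECT here is just a placeholder for the actual slow query):

        SET STATISTICS IO ON;

        SELECT *                        -- stand-in for the slow query
        FROM dbo.MyTable
        WHERE SomeColumn = 42;

        SET STATISTICS IO OFF;
        -- The Messages tab then reports logical and physical reads per table;
        -- compare those numbers between the two machines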

    Is the load on the two machines the same? Is there blocking?
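
    One quick way to check for blocking while the query runs (just a sketch):

        -- Any non-zero blocking_session_id means that request is being blocked
        SELECT session_id, blocking_session_id, wait_type, wait_time
        FROM sys.dm_exec_requests
        WHERE blocking_session_id <> 0;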

    200 pages difference in size is not an insignificant number. It could well be enough to get the optimiser to pick a different plan. How many rows and pages in total for the table?
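
    Something like this will pull both numbers (dbo.MyTable again standing in for the real table):

        -- Base table only: index_id 0 is a heap, 1 is the clustered index
        SELECT SUM(ps.row_count)       AS total_rows,
               SUM(ps.used_page_count) AS total_pages
        FROM sys.dm_db_partition_stats AS ps
        WHERE ps.object_id = OBJECT_ID(N'dbo.MyTable')
          AND ps.index_id IN (0, 1);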

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
