Reserved Pages Details in DBCC UPDATEUSAGE

  • Hi All:

    I am trying to get some details on the RSVD pages in SQL Server 2000 SP3.  I have a few tables which are exceeding 3 GB in our data warehouse database.  I noticed that when I execute DBCC UPDATEUSAGE, the RSVD pages value changes dramatically (as shown below).  I currently have this set up as a weekly job to update usage.

    I have a few questions:

    1. Can you please provide some details on what is happening in the background?

    2. What are the consequences of a large RSVD pages value (other than space consumption)?  Will it affect performance?

    3. Is there a table parameter I should be setting to minimize this change?

     

    DBCC UPDATEUSAGE: sysindexes row updated for table 'TABLENAME' (index ID 1):

            USED pages: Changed from (650358) to (650254) pages.

            RSVD pages: Changed from (1268066) to (650333) pages.

  • (Cut and paste most of this from a post I made two minutes ago...)

    Various internal statistics will get out of date over time, particularly if you're doing a lot of index-intensive work (such as reindexing). Running DBCC UPDATEUSAGE causes SQL to recalculate and reset the values to accurately reflect what's going on in the database.
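
    In case it's useful, this is roughly how I'd run it by hand--the database name and table name here are just placeholders for your own:

        -- Correct the used/reserved page counts (and the row count) for one table
        -- in the current database; 'TABLENAME' is a placeholder
        DBCC UPDATEUSAGE (0, 'TABLENAME') WITH COUNT_ROWS

        -- Or sweep the whole database in one go (heavier, but corrects everything)
        DBCC UPDATEUSAGE ('YourWarehouseDB') WITH NO_INFOMSGS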

    Internally, I suspect this doesn't have much effect. I'd guess it's just an artifact of calculated/duplicate data stored in sysIndexes, where the real data (IAM pages, guff like that) is stored elsewhere. In other words, sysIndexes may think the space is used, but the system isn't asking it when allocation work needs to be done. (Remember, this is a guess, not fact!)
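
    If you want to see the raw numbers being corrected, you can look at the sysindexes rows for the table directly (again, 'TABLENAME' is just a placeholder):

        -- reserved, used and dpages are counts of 8 KB pages; rowcnt is the cached row count
        SELECT indid, rowcnt, dpages, used, reserved
        FROM sysindexes
        WHERE id = OBJECT_ID('TABLENAME')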

    I am not aware of any table parameter that might impact this. In a typical warehouse environment, I'd guess you're doing a lot of index work. (Do you load a lot of data that might require a lot of index page work? Do you drop and recreate indexes?)

    Philip

  • Thank you Philip.  Yes - we do a daily ETL - and after that is done, we drop and recreate indexes.

    Now the problem is within the ETL process.  We are using PeopleSoft App Engine program which does the following:

    1.  Deletes data from Table A (and commits)

    2.  Inserts data into Table A (and commits)

    3.  Updates data on Table A via an inner join with Table B (a simplified sketch of this statement is below).
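
    Roughly, the step 3 statement has this shape--the column and key names are made up here for illustration, since the real SQL is generated by the App Engine program:

        -- Step 3 (simplified): set columns on Table A from the matching rows in Table B
        UPDATE A
        SET A.SOME_COL = B.SOME_COL
        FROM TABLE_A A
             INNER JOIN TABLE_B B
                 ON A.JOIN_KEY = B.JOIN_KEY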

    We have a 40 GB tempdb data file and a 2 GB tempdb log file, which fills up during the update statement.  Size of Table A = 1.5 GB and size of Table B = 500 MB.

    After reindexing the two tables and updating usage and statistics, the update transaction runs successfully.

    One way of fixing this would be to add an index to Table B.  Another option is to reindex, update usage, and update statistics just before the update statement within the program so that it runs successfully (as sketched below).
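
    For the second option, the pre-update maintenance step would be something like this (the table names are placeholders for the real ones):

        -- Rebuild all indexes on both tables (this also refreshes the statistics on those indexes)
        DBCC DBREINDEX ('TABLE_A')
        DBCC DBREINDEX ('TABLE_B')

        -- Correct the sysindexes usage and row counts
        DBCC UPDATEUSAGE (0, 'TABLE_A') WITH COUNT_ROWS
        DBCC UPDATEUSAGE (0, 'TABLE_B') WITH COUNT_ROWS

        -- Refresh all statistics (FULLSCAN for accuracy, at the cost of a longer run)
        UPDATE STATISTICS TABLE_A WITH FULLSCAN
        UPDATE STATISTICS TABLE_B WITH FULLSCAN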

    What I was really interested in is whether there is a global parameter we can set so that, after the delete and insert, SQL Server automatically detects the changes and updates the sysindexes table with the correct info for the update statement (#3) to run successfully.

    Appreciate any other insight/options that you can provide.

     

    Thanks

    Krish

    I'm not too clear on the issue/problem you're having. You drop and insert data in table A, and then update A based on B (the inner join). So is the problem that the update works poorly without reindexing/updating usage on A first? If so, what is the poor performance? (You imply the update is unsuccessful unless those steps are done.)

    A vague thing that suggests itself is statistics auto-updating. Is this set on in the database? Does the poor performance indicate that SQL is not using an index that would optimize the work? UPDATE STATISTICS is the manual command, but if you've got the option of slipping your own code in, reindexing or update usage work is probably the way to go. (Massive table modifications should trigger statistics auto-updates--but I don't know when the work is actually performed, whether immediately or upon the next quiescent period. I think SQL Profiler's got a counter or two that might help determine that.) The above is all guesstimation; there's really not enough info here to tell if this is an issue.
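
    If it helps, the setting can be checked and toggled like this (the database name is a placeholder, and the last statement is the manual per-table equivalent):

        -- Returns 1 if auto update statistics is on, 0 if off
        SELECT DATABASEPROPERTYEX('YourWarehouseDB', 'IsAutoUpdateStatistics')

        -- Turn it on if it isn't
        EXEC sp_dboption 'YourWarehouseDB', 'auto update statistics', 'true'

        -- Manual statistics refresh for a single table ('TABLE_A' is a placeholder)
        UPDATE STATISTICS TABLE_A WITH FULLSCAN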

    Any idea how you get a 40 GB tempdb from work on a 1.5 GB table? Does tempdb always bloat up to 40 GB whenever you run this, or is that 40 GB "leftover" from some bygone query? (Don't know if this is pertinent, but it's suggestive of something odd going on behind the scenes.)

    Philip

    Thank you for your response, Philip - I appreciate your time.  Please see my responses inline below.

    I'm not too clear on the issue/problem you're having. You drop and insert data in table A, and then update A based on B (the inner join). So is the problem that doing the update works poorly without reindexing/usage updating A first? YES

    If so, what is the poor performance? (You imply the update is unsuccessful unless the updates are done.)  I get a message saying the log file on tempdb is full after the statement has been processing for around 30 minutes or so: "The log file for database 'tempdb' is full. Back up the transaction log for the database to free up some log space." (SQLSTATE 37000, error 9002)

    A vague thing that suggests itself is statistics auto updating. Is this set on in the database? YES

    Does the poor performance indicate that SQL is not using an index that would optimize the work? YES and NO.  The reason being, when I run sp_spaceused on Table A before updating the usage, I get a negative value in the unused column (-32 KB in this case), and after I run the update usage, I get a positive value (56 KB in this case):

    sp_spaceused FDPT_DIM

    name            rows      reserved    data        index_size  unused
    PS_IN_FDPT_DIM  1022990   1355208 KB  1345200 KB  10040 KB    -32 KB   (Before)
    FDPT_DIM        1022990   1374064 KB  1363992 KB  10016 KB    56 KB    (After)
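
    Incidentally, I believe sp_spaceused can do this correction on its own for a single table if it's called with @updateusage, which should be equivalent to running DBCC UPDATEUSAGE scoped to that table first:

        -- Corrects the sysindexes counts for just this table, then reports the corrected figures
        EXEC sp_spaceused 'FDPT_DIM', @updateusage = 'true'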

    Also, on Table B (the one that is not indexed), I noticed the RSVD pages came down by 50%, from 132K to 66K pages:

    DBCC UPDATEUSAGE: sysindexes row updated for table 'FIN_HIER' (index ID 1):

            USED pages: Changed from (66141) to (66107) pages.

            RSVD pages: Changed from (132269) to (66134) pages.

    I'm trying to understand the internals of this process, but it looks like the metadata gets out of sync with these kinds of DMLs.

  • More info leads, as ever, to more questions...

    Are you able to tell: does just running update usage cause the query to perform properly?

    Can you replicate things by running the processes "by hand"? There'd be value in seeing the execution plan (as graphically displayed in Query Analyzer) generated for the "UPDATE A based on B" statement, comparing the "bad run" vs. the "good run" configurations.
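
    If you can run the UPDATE yourself in Query Analyzer, something like this will also give you the estimated plan as text without actually executing the statement (the UPDATE below is just a stand-in for the real one):

        SET SHOWPLAN_TEXT ON
        GO

        -- The real App Engine UPDATE statement would go here
        UPDATE A
        SET A.SOME_COL = B.SOME_COL
        FROM TABLE_A A
             INNER JOIN TABLE_B B
                 ON A.JOIN_KEY = B.JOIN_KEY
        GO

        SET SHOWPLAN_TEXT OFF
        GO

    Capture that once in the "bad" configuration and once after the reindex/update usage work, and compare the operators--a big hash join or sort would line up with the tempdb pressure you're seeing.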

    The fact that the error is "out of space in tempdb log" is probably key. The first obvious question is: are you out of space on the hard drive where the file is, or have you arbitrarily limited the file size? Obviously it's logging the update transaction... but why it would overflow in the "before" configuration and not the "after" one is confounding--you'd think the same volume of data was being updated, regardless of how it was being read. (This is why analyzing the query plan seems called for.)
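
    Two quick ways to check the tempdb side of it (standard commands, nothing specific to your setup):

        -- Lists tempdb's data and log files with their current sizes, growth settings and any maxsize cap
        EXEC sp_helpdb 'tempdb'

        -- Reports log file size and percent used for every database, tempdb included
        DBCC SQLPERF (LOGSPACE)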

    Philip
