Excessive tempdb space required for DBCC CHECKDB?!

  • Hi,

    We have a stubborn 1TB database that requires a lot more tempdb space than the other databases in our environment to run DBCC CHECKDB. We have now expanded tempdb to 800GB, far beyond what any of our other instances need, and it is still failing with the error below:

    DBCC CHECKDB ([db]) WITH NO_INFOMSGS, ALL_ERRORMSGS, DATA_PURITY

    Msg 1105, Level 17, State 2, Server [server], Line 1
    Could not allocate space for object 'dbo.SORT temporary run storage: 141217325383680' in database 'tempdb' because the 'PRIMARY' filegroup is full. Create disk space by deleting unneeded files, dropping objects in the filegroup, adding additional files to the filegroup, or setting autogrowth on for existing files in the filegroup.

    I find this odd because we have many multi-terabyte databases that can run DBCC CHECKDB with 200GB or less of tempdb space. So why would this one database continuously fail with 800GB?! Isn't that an excessive amount of tempdb space? What could it be about this database that makes its space requirements so much higher than our other databases? Is there anything that can be done to make it require less space?

    This one operation on this one database is the only reason to allocate so much space to tempdb, and it seems very wasteful. Please help! Thanks!

    PS. Running DBCC CHECKDB WITH ESTIMATEONLY says it would require 1.3TB of tempdb space! Yet we have a 4TB database on another instance that would only require 23MB! What's the deal?!
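    For reference, the estimate came from a command along these lines (the database name here is just a placeholder):

    -- Estimate the tempdb space DBCC CHECKDB would need, without running the check
    DBCC CHECKDB ([db]) WITH ESTIMATEONLY;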

    Do you have a small number of large tables in this database, whereas in your other databases the data is more evenly distributed between tables? That's one possible explanation. Take a look at this, and see whether there's anything that's relevant to you (in fact, if you have time, set aside an afternoon to read everything that Paul's ever written on DBCC CHECKDB).

    John

    Yeah, this database has about 25 tables; a few at the top are very large, and it quickly drops off from there. Paul has written quite a bit about CHECKDB, and I actually looked for the answer in his blog posts before coming here and didn't find anything... so I'd have to do a bit of further digging on this to find the answer.

    Just curious: off the top of your head, do you know why a small number of large tables would increase the space requirements? Do you have a link to a post specific to this that would be helpful? Also, what can be done to address this?

    Currently the space requirements for tempdb are as large as, or possibly larger than, the whole database itself. I can't see why that would be the case.

    MORE INFO:

    Table sizes from largest to smallest:

    400GB
    160GB
    130GB
    70GB
    20GB
    10GB
    5GB
    1GB
    0.5GB
    0.3GB
    etc...
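
    (In case it's useful to anyone, the sizes above came from a query roughly like this one, based on sys.dm_db_partition_stats:)

    -- Reserved space per table, largest first; run inside the database being checked
    SELECT s.name + '.' + t.name AS table_name,
           SUM(ps.reserved_page_count) * 8.0 / 1048576 AS reserved_gb
    FROM sys.dm_db_partition_stats AS ps
    JOIN sys.tables AS t ON t.object_id = ps.object_id
    JOIN sys.schemas AS s ON s.schema_id = t.schema_id
    GROUP BY s.name, t.name
    ORDER BY reserved_gb DESC;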

    This is how I understand the page I linked to: the CHECKDB operation is broken down into batches (unless you have TF 2562 set - do you?) in order to avoid blowing out tempdb in one go. However, a large table is going to break the limit for a single batch, and a very large table is going to break the limit by a long way, and that's where you see it being expensive in terms of tempdb space. If you have LOB data in any of those tables, that will increase the space requirement still further.
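
    You can check whether the flag is on with DBCC TRACESTATUS, something like this:

    -- Show whether trace flag 2562 is enabled globally (-1 = all sessions)
    DBCC TRACESTATUS (2562, -1);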

    John

  • No, we don't have TF 2562 enabled. Did you link to a page? Not seeing it.

    However, looking at the largest table (400GB), I see it has 10 text columns in it. :crazy:

    I suppose that may have something to do with it. I looked up TF 2562, as I was not familiar with it, and it says it may increase tempdb utilization to as much as 10% of the database size... lol... I wish! That would be a large reduction from what we are currently experiencing. I'm wondering if we should turn it on just to see if it actually REDUCES the space required. :laugh:

    I still find it pretty insane that we need so much tempdb space. But at least the table definition seems to support the idea that we would need more space than normal.
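
    (For anyone wanting to check their own tables: a rough query that lists the LOB columns in a single table; the table name below is made up:)

    -- List legacy LOB (text/ntext/image) and MAX-typed columns in one table
    SELECT c.name AS column_name, ty.name AS type_name
    FROM sys.columns AS c
    JOIN sys.types AS ty ON ty.user_type_id = c.user_type_id
    WHERE c.object_id = OBJECT_ID(N'dbo.BigArchiveTable')  -- placeholder name
      AND (ty.name IN ('text', 'ntext', 'image', 'xml') OR c.max_length = -1);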

    Ok, so here's the deal. This database is actually archive data only, so I'm thinking of switching the DBCC check on it to PHYSICAL_ONLY.

    This looks like it will significantly reduce tempdb space and prevent us from having to expand further. I know PHYSICAL_ONLY doesn't check EVERYTHING, but it seems like it may be "good enough" given the nature of the database. Otherwise, I'm not sure how to fix this other than expanding tempdb more and more. Not sure how else to work around this issue, unless anyone has any other ideas?
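
    Something like this is what I have in mind (the database name is a placeholder):

    -- Physical-structure check only: skips the deep logical checks, so it needs far less tempdb
    DBCC CHECKDB ([db]) WITH PHYSICAL_ONLY, NO_INFOMSGS;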

    Thanks

    Another option, if you do regular random test restores of your databases, is to run DBCC CHECKDB on the restored copy. Yes, you'll still need the same amount of tempdb, but at least you'll only need to provide it on that one server, rather than on every server that has a tempdb-hogging database on it.
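
    Something along these lines on the test server (a sketch; the backup path and logical file names are placeholders):

    -- Restore a copy on the test server, then run the full check there
    RESTORE DATABASE [db_check]
    FROM DISK = N'\\backupshare\db.bak'
    WITH MOVE N'db_data' TO N'D:\Data\db_check.mdf',
         MOVE N'db_log' TO N'L:\Logs\db_check.ldf',
         STATS = 10;

    DBCC CHECKDB ([db_check]) WITH NO_INFOMSGS, ALL_ERRORMSGS, DATA_PURITY;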

    John

  • Numerous potential causes here. You REALLY need to review the stuff you can find with a web search on these words:

    argenis fernandez dbcc checkdb

    There are some serious magic-bullets available, and a few surprising causes and findings too.

    Best,
    Kevin G. Boles
    SQL Server Consultant
    SQL MVP 2007-2012
    TheSQLGuru on googles mail service

  • OK, thanks! I appreciate the feedback and help.

    I've looked through some of the suggested web searches and am going to try breaking up the DBCC checks by table, as sketched below. Not sure if the big table is still going to get me, but it's worth a shot. Of course, there's always partitioning as well, but I'm not sure I'm ready for that. I don't have a situation where I have non-clustered indexes on sparse columns or computed columns, so no luck with those gotchas. Again, we may be OK with simply doing PHYSICAL_ONLY checks on this database. We'll see how it goes.
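
    (Roughly what I'm planning; the table names are placeholders:)

    -- Spread the load: check the big tables individually on different nights
    DBCC CHECKTABLE ('dbo.BigArchiveTable') WITH NO_INFOMSGS, ALL_ERRORMSGS;
    DBCC CHECKTABLE ('dbo.NextBiggestTable') WITH NO_INFOMSGS, ALL_ERRORMSGS;

    -- The allocation and catalog checks still need to run somewhere
    DBCC CHECKALLOC ([db]) WITH NO_INFOMSGS;
    DBCC CHECKCATALOG ([db]);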

  • Hi...

    I am curious how you got on with this all those years ago? I am asking because I have just hit an identical situation... one database on one instance blowing out tempdb, while I have much larger databases that do not.

    I'm not keen to make any big changes in production, so I am about to try to replicate the problem in a Dev environment, but I was curious how you got on with finding a cause/solution for this.

    Cheers.

    David

     

    I also got stuck with this error, tried a few things, and here is the explanation I found:

    DBCC CHECKDB has the potential to consume a substantial amount of tempdb space during its operation. To address this concern, the WITH ESTIMATEONLY option lets you request an estimate of the required tempdb space. The resulting output resembles the following:

    Estimated TEMPDB space needed for CHECKALLOC (KB)
    -------------------------------------------------
    56

    (1 row(s) affected)

    Estimated TEMPDB space needed for CHECKTABLES (KB)
    --------------------------------------------------
    3345

    (1 row(s) affected)

    DBCC execution completed. If DBCC printed error messages, contact your system administrator.

    It's important to note that this estimate does not represent the space needed to check the entire database in one pass. Unless trace flag 2562 is used, DBCC CHECKDB does not check the whole database in one continuous process; instead, it divides the work into manageable batches. A batch is closed when either of two limits is reached:

    When there are 512 or more indexes in the batch.

    When the estimated size of the facts for the batch exceeds 32MB.

    The smallest possible batch comprises a single table and all of its associated indexes. Consequently, an exceptionally large table may readily exceed the 32MB fact limit, or, in theory, the 512-index limit.
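
    If you want to test the single-batch behavior, the flag can be turned on globally, for example (try this outside production first; the database name is a placeholder):

    -- Enable trace flag 2562 globally, run the check, then turn it off again
    DBCC TRACEON (2562, -1);
    DBCC CHECKDB ([db]) WITH NO_INFOMSGS;
    DBCC TRACEOFF (2562, -1);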

    I hope this could help you!!
