data file for tempdb very huge

  • hi all,

    Our data files for tempdb are simply growing out of proportion. Currently the size of tempdb is 50 GB.

    The result of sp_helpdb is as follows

    Name       filegroup   size          maxsize     growth   usage
    ---------  ----------  ------------  ----------  -------  ----------
    Tempdev    PRIMARY     8706496 KB    Unlimited   10%      data only
    templog    NULL        512 KB        Unlimited   10%      log only
    tempdev1   PRIMARY     41530816 KB   Unlimited   10%      data only

    The result of this query is as follows:

    SELECT sum(unallocated_extent_page_count) [Free_Pages],
           (sum(unallocated_extent_page_count)*1.0/128) [Free_Space_MB]
    FROM sys.dm_db_file_space_usage

    Free_Pages   Free_Space_MB
    -----------  --------------
    6279192      49056.187500

    I have tried shrinking the files, but they are not releasing any free space.

    A query joining sys.dm_exec_sessions to sys.databases does not return any rows. If I restart SQL Server, tempdb returns to its normal size (10 MB).

    After that, if we again run the process that loads data from the AS/400 system into SQL Server 2005, it reaches this stage. (The process runs for about 4 hours. Basically, we generate a .dat file from the AS/400 and then BCP it into SQL Server 2005. These intermediate tables are then used to populate the final table, mostly using SELECT ... INTO tablename. We hardly use temp tables (#tablename).)

    What I can't figure out is which step/query is making tempdb so full, and why tempdb does not release the space back to the system even after the process completes. (A breakdown of tempdb space by consumer, sketched after this post, might help narrow it down.)

    Thanks a lot for your help.

    Sharon
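    To narrow this down, a breakdown of tempdb space by consumer shows whether the space is going to user objects (the SELECT INTO targets), internal objects (sorts, hashes, spools), or the version store. A minimal sketch, using the standard tempdb-only DMVs sys.dm_db_file_space_usage and sys.dm_db_session_space_usage on SQL Server 2005:

    -- Break tempdb space down by consumer (pages are 8 KB, so /128.0 = MB).
    SELECT SUM(user_object_reserved_page_count)     / 128.0 AS user_objects_mb,     -- tables, SELECT INTO targets
           SUM(internal_object_reserved_page_count) / 128.0 AS internal_objects_mb, -- sorts, hashes, spools
           SUM(version_store_reserved_page_count)   / 128.0 AS version_store_mb,
           SUM(mixed_extent_page_count)             / 128.0 AS mixed_extents_mb,
           SUM(unallocated_extent_page_count)       / 128.0 AS free_mb
    FROM sys.dm_db_file_space_usage;

    -- While the load is running, the sessions allocating the most tempdb space:
    SELECT TOP (10)
           session_id,
           (user_objects_alloc_page_count
            + internal_objects_alloc_page_count) / 128.0 AS allocated_mb
    FROM sys.dm_db_session_space_usage
    ORDER BY allocated_mb DESC;

    If most of the space shows up as internal objects, the sorts and hashes behind the SELECT ... INTO and join steps are the likely culprits rather than the intermediate tables themselves.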

    Click here and here for articles on how to manage tempdb growth.

    Also, if your tempdb has grown to that big a size, then something must have run that needed that space. The only question is whether it was a one-off process.

    To find that out, you could set up a server-side trace on database growth for the tempdb database (a sketch against the default trace follows after this post)...

    HTH...

    The_SQL_DBA
    MCTS

    "Quality is never an accident; it is always the result of high intention, sincere effort, intelligent direction and skillful execution; it represents the wise choice of many alternatives."

  • Try running DBCC FREESYSTEMCACHE('ALL') and then shrink the tempdb files. Check whether it works (rough sketch below).
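
    A rough sketch of that, using the logical file names from the sp_helpdb output above (the target sizes are only examples):

    -- Free cached objects, then try to shrink each tempdb data file.
    DBCC FREESYSTEMCACHE ('ALL');

    USE tempdb;
    DBCC SHRINKFILE (tempdev, 1024);    -- target size in MB (example value)
    DBCC SHRINKFILE (tempdev1, 1024);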

  • To optimize your tempdb size, two things should be considered:

    1) the estimated size of the tempdb data files for daily operation, and

    2) the required tempdb size for unplanned exceptions.

    If the situation you described is a daily operation, you will likely have to allocate 50 GB of space to the tempdb data files; if it is a rarely-run process, you can estimate the required tempdb size by running a regular application cycle after returning the unused space to the OS.

    It is not recommended to enable autogrowth on all of the data files when tempdb has multiple data files.

    Consider having two tempdb data files. If you do need 50 GB, I would set the first file to 25 GB with no growth and the second file to 25 GB with autogrowth; if a smaller size, say 20 GB, is enough for the routine, I would set both files to 10 GB and allow only the second file to autogrow. This setting should improve tempdb performance. For routine application cycles, the tempdb size should stay within the 50 GB or 20 GB level without needing to grow; for an unplanned exception, the second file can grow as needed, which lets the exceptional operation complete successfully. (A sketch of the 25 GB + 25 GB layout follows.)
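
    A sketch of the 25 GB + 25 GB layout described above, using the logical file names from the sp_helpdb output earlier in the thread (note that reducing a tempdb file below its current size still requires a shrink or a restart):

    -- First file fixed at 25 GB, second file 25 GB with autogrowth allowed.
    ALTER DATABASE tempdb
    MODIFY FILE (NAME = tempdev, SIZE = 25GB, FILEGROWTH = 0);

    ALTER DATABASE tempdb
    MODIFY FILE (NAME = tempdev1, SIZE = 25GB, FILEGROWTH = 512MB);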

  • The_SQL_DBA (1/12/2010)


    Click here and here for articles on how to manage tempdb growth.

    Also, if your tempdb has grown to that big a size, then something must have run that needs that space. The only question is if it was a one-off process.

    To find that out, you could setup a server side trace on database growth for the tempdb database...

    HTH...

    Good suggestion on the trace... it'll help find the bad query that has the wrong constraints and is doing an accidental cross join. Chances are it's a query with a large number of joins, and someone who doesn't understand the data left out a join criterion, so it's doing a many-to-many join somewhere along the line.

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • hi all,

    Thanks for your responses.

    I have set up a trace file to track database growth, but I am not able to figure out which query is making the data/log file grow (a correlation sketch follows after this post).

    I have included Stored Procedures, TSQL, and Database events in the trace file selection.

    Thanks for your help.

    Sharon.
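    To tie the growth events in that trace back to the statement that caused them, one option is to join the growth events to the batches completed by the same SPID around the same time. A sketch, assuming the trace was written to a file (the path below is a placeholder) and captures SQL:BatchCompleted with the TextData column, since the growth event rows themselves do not carry the offending statement:

    -- Correlate auto-grow events with the batch running on the same SPID at that moment.
    DECLARE @trace_file nvarchar(260);
    SET @trace_file = N'C:\Traces\tempdb_growth.trc';   -- placeholder path

    SELECT g.StartTime AS grow_time,
           g.SPID,
           te.name     AS grow_event,
           b.TextData  AS candidate_batch,
           b.StartTime AS batch_start,
           b.EndTime   AS batch_end
    FROM sys.fn_trace_gettable(@trace_file, DEFAULT) AS g
    JOIN sys.trace_events AS te
         ON te.trace_event_id = g.EventClass
    LEFT JOIN sys.fn_trace_gettable(@trace_file, DEFAULT) AS b
         ON b.SPID = g.SPID
        AND b.EventClass = 12                            -- SQL:BatchCompleted
        AND g.StartTime BETWEEN b.StartTime AND b.EndTime
    WHERE g.EventClass IN (92, 93)                       -- Data/Log File Auto Grow
    ORDER BY g.StartTime;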

  • Set up a trace to capture growth events.

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • Great, it worked without a restart.
