TempDB always auto-grows on restart

  • Hey there,

    I'm basically a developer with strong SQL skills (by SQL skills I mean T-SQL rather than DBA skills). Our clients have been hit with timeout expirations for a while, and I've now got the time to look into the matter.

    There are several stored procs generating the timeouts, and I've been looking at one in particular. It generates a calendar in a temporary table, filling TempDB with about 40 MB of data, depending on the database. The stored proc runs in about 43 seconds the first time after a server reboot, and any attempt afterwards completes in about 5 or 6 seconds, even after DBCC FREEPROCCACHE, DBCC DROPCLEANBUFFERS and the like. After a while, I traced it back to TempDB being expanded on the first execution.

    To solve this problem, I went ahead and changed the initial TempDB size from 2 MB to 60 MB, hoping it wouldn't have to auto-grow. Much to my surprise, when I restarted my server once more and went to test it, TempDB auto-grew to about 100 MB, even though there was already 60 MB of free space. That got me scratching my head...

    After searching the web and finding no solution to this problem (probably because I'm not searching with the right words), I turn to you. Is this a common problem that DBAs fix day in, day out?

    For what it's worth, the server's default collation is French_CI_AS (and so is TempDB's), but the database's collation is Latin1_General_CI_AI. I don't know if it makes any difference.

  • G'day Jonathon,

    It sounds like you've expanded the tempdb data file(s) but not the log file(s).

    The 40MB you're loading into the temporary table is presumably being loaded with "SELECT INTO ..." or "INSERT INTO ... SELECT ...". Either operation is performed as a single transaction, so the tempdb log file has to be expanded to accommodate the transaction, just as the data file is expanded to accommodate the table. As tempdb uses the SIMPLE recovery model (unless you've changed it from the default), the log file is cleared shortly after the transaction completes, so it's easy to miss what has happened under the covers.
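
    If that's what's happened here, pre-sizing the tempdb log should fix it. A minimal sketch, assuming the default logical log-file name templog (verify with sp_helpfile if yours differs):

      -- Pre-size the log so the first big INSERT...SELECT doesn't
      -- have to auto-grow it; tempdb is re-created at this configured
      -- size on every restart.
      ALTER DATABASE tempdb
          MODIFY FILE (NAME = templog, SIZE = 60MB);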

    When you look at the properties of any database in SSMS/EM, the database size is shown as the total of all the files comprising the database, including the log file.
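
    To see the individual files rather than the total, sp_helpfile works on any version:

      -- List every file in the current database: logical name, physical
      -- path, current size, and growth increment (data and log alike).
      USE tempdb;
      GO
      EXEC sp_helpfile;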

    As for the time difference between the first and subsequent executions, that's most likely because immediately after a restart the data has to be read from disk into the cache, whereas on subsequent executions, especially when there's little time between them, the data is already in cache so there's no disk-read overhead. The execution plan will also already have been compiled, so there's no compile time either, although the fact that there's no significant difference after clearing the cache suggests that's not a major factor.

    There will also be some overhead from the file expansion itself, as the file has to be zero-initialized because you're running SQL2k (or SQL2k5/8 if the service account doesn't have the "perform volume maintenance tasks" right), although I'd be very worried about my server if it took ~30 seconds to initialize a 40MB file. Check the growth increments on the tempdb files (both data and log) too: the default is to grow by 10%, so for this operation the file has had to grow several times, which adds further overhead, although if you've expanded the file to accommodate this transaction that will be less of an issue.
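
    If the 10% increments are part of the problem, you can switch to a fixed increment. Again just a sketch, assuming the default logical names tempdev/templog:

      -- Grow in fixed 10MB steps instead of 10% of the current size,
      -- so the early growths aren't tiny and repeated.
      ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILEGROWTH = 10MB);
      ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILEGROWTH = 10MB);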

    The difference in the collations is probably adding some minor overhead too, but there's not much you can do to avoid that except rebuilding the source database to match the server collation (drop and recreate the database, then recreate all the tables without specifying a collation, or specifying the server collation). Let me guess, though: it's a vendor-supplied database and you're not allowed to do that. You'll probably find the different collations cause some "interesting" behaviour (the one that trips us up most often is checking for equality between a string in the database and one in a temporary table; I've only done it between default- and binary-collated strings, but in that case the strings will never compare equal).
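
    To illustrate the trap (the table and column names here are made up): a temp table's varchar columns silently take tempdb's collation, so with your French_CI_AS server a join like this can fail outright with a "Cannot resolve collation conflict" error:

      -- dbo.Clients lives in the Latin1_General_CI_AI database;
      -- #t picks up tempdb's French_CI_AS collation instead.
      CREATE TABLE #t (ClientName varchar(50));

      SELECT c.ClientName
      FROM dbo.Clients AS c
      JOIN #t AS t
          ON c.ClientName = t.ClientName;    -- collation conflict

      -- One workaround: force a collation at the comparison.
      SELECT c.ClientName
      FROM dbo.Clients AS c
      JOIN #t AS t
          ON c.ClientName = t.ClientName COLLATE Latin1_General_CI_AI;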

    Anyway, I hope that all helps.

  • Hi Glenn,

    I had indeed expanded the data file without touching the log file. Since the query in question uses the "INSERT INTO ... SELECT" syntax, your solution looked very promising. And lo and behold: it works now!

    I'm just wondering whether, since the DB is in Simple Recovery, the log file will someday shrink below its initial size. Is it guaranteed that it will never get smaller?

    As for the impact of the file expansion and initialization, I'm not worried. The log file was at a 1MB initial size with 10% increments, and the "server" is my development workstation, which isn't exactly the latest and greatest. It isn't that different from our clients' setups, as most of them are running MSDE on machines I wouldn't want even if they gave them to me. It's quite a challenge to develop for that kind of usage, but we're doing pretty well so far 🙂

    As a final note, the collation problem is resolved by explicitly specifying the Latin1_General_CI_AI collation on every varchar column when we create a temp table. That way we're sure that even if our product is installed on an existing SQL Server with an unusual collation, our comparisons won't fail because of collation incompatibilities.
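
    Concretely, we do something along these lines (the table itself is just an example):

      -- Pin each string column to the application's collation so the
      -- temp table doesn't inherit the server's tempdb collation.
      CREATE TABLE #Calendar
      (
          CalDate datetime NOT NULL,
          DayName varchar(20) COLLATE Latin1_General_CI_AI NOT NULL
      );

    (I gather COLLATE DATABASE_DEFAULT would also work on SQL2k and later, picking up the current database's collation instead of hard-coding one.)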

    You've been a great help, thanks a lot!

  • G'day Jonathon,

    Glad to be of service.

    It's good to hear you're on top of the potential collation issue. It took me quite a while to work out what was going on when I tripped over the problem the first time.

    As far as the data/log files changing size:

    - The growth profile of any data/log file is controlled by the definition of the file. The two parameters that matter are FILEGROWTH, which defines the growth increment of the file, and MAXSIZE, which defines the maximum size the file can grow to. These are set on each file comprising the database and can differ between files if required (see the example after this list).

    - File size reduction is controlled by the database option AUTO_SHRINK. This defaults to off but can be set on. It is a database-level setting: it cannot be set individually for each file. Setting AUTO_SHRINK on is very dangerous on a production server, as there's no way to control when a shrink happens, and a shrink operation can cause a lot of blocking as well as choking the I/O.
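
    To make both concrete (the log-file name, database name, and limits below are just examples):

      -- Fixed growth increment plus a hard ceiling on the log file.
      -- One property change per MODIFY FILE, hence two statements.
      ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILEGROWTH = 10MB);
      ALTER DATABASE tempdb MODIFY FILE (NAME = templog, MAXSIZE = 500MB);

      -- AUTO_SHRINK is per-database; leave it OFF on production servers.
      ALTER DATABASE MyAppDb SET AUTO_SHRINK OFF;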
