July 19, 2019 at 1:09 pm
Hi
In order to improve performance on a SQL Server instance, we have been instructed to increase the size of some tempdb data files, reduce the size of others, and add some more files. In total we should have 6 data files, all of the same size: 1600 MB each.
So I have added the 2 required additional data files at 1600 MB and increased 2 existing files to 1600 MB, but I cannot seem to shrink two existing files, which are quite large, back down to 1600 MB.
I was under the assumption this would be possible after a server restart, but the server was restarted recently as part of Windows patching and the files remained the same size, despite my running a shrink command on them prior to the restart.
This is a production system, so getting access to restart it is very difficult. I'm looking for advice on how to change the size of the larger tempdb files so all will be the same size.
thanks in advance
July 19, 2019 at 2:32 pm
To be totally honest, this will likely be a futile exercise.
First, unless you can guarantee that the MDF files will be located on separate physical disks, there will be little to be had in the area of performance gains, if any at all.
Second, even if such a thing IS guaranteed, if there's no planning as to how the data is to be split across the separate MDF files according to how the data is used, there may still be little or nothing to gain.
Third, if you bring other physical disks into play and SQL Server isn't the only thing using those disks, you could actually end up slowing SQL Server down, A LOT!
Last but not least, if you have SSDs, it's not going to matter so much. It'll turn out that most of what you've done is just add complexity.
There are reasons, such as "Online Piecemeal Restores", to have multiple MDF files but performance usually isn't one of them.
But, don't listen to me... in fact, don't listen to anyone on this subject. Instead, do a test because "One good test result is worth a thousand expert opinions". You might end up saving yourself a whole lot of work that has little or no ROI. If it DOES result in some sort of wild increase in performance, you should write an article about what you did so others can test it and then, maybe, implement what you've done.
Shifting gears a bit to something that you SHOULD listen to is my favorite bit of advice. It's my mantra, in fact. "Performance is in the code... or not". If you're having performance issues, you should determine which code is actually causing the problem and fix it. It takes more time than the sometimes silly hardware and file "fixes" that people propose but it's usually MUCH more effective and has a MUCH higher ROI. Fixing your top 10 code problems will typically cause your server to operate much more efficiently.
And, remember... it's not usually your longest running queries that are the performance issue. It's usually the cumulative effect of queries that run tens of thousands of times per hour.
An example of the above: we had a query that ran in "only" 100 ms (the "only" is in quotes because, for what it did, I thought that was horrible). The problem was that it would recompile each and every time it ran, and it did run tens of thousands of times per hour. The recompile time varied from 2 seconds to 22 SECONDS (that's NOT a typo... SECONDS!!!) EVERY TIME IT RAN!
When we fixed the query so that it wouldn't recompile every time it was executed, it also started running in just several milliseconds.
--Jeff Moden
Change is inevitable... Change for the better is not.
July 19, 2019 at 2:54 pm
Yes, agreed, it is most likely a futile exercise, but it's still a requirement from the vendor, so the question is how to do it, not why.
July 19, 2019 at 3:36 pm
Did you try to run DBCC FREEPROCCACHE before shrinking?
Also, check this: https://sqlsunday.com/2013/08/11/shrinking-tempdb-without-restarting-sql-server/
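Putting those two suggestions together, a minimal sketch might look like the following. The logical file name tempdev2 and the 1600 MB target are assumptions for illustration; check your own logical names in tempdb.sys.database_files first. Note that DBCC FREEPROCCACHE flushes ALL cached plans, so expect a brief recompilation spike on a busy production server.

```sql
-- Flush the plan cache; cached objects can pin pages in tempdb and block the shrink
DBCC FREEPROCCACHE;
GO
USE tempdb;
GO
-- Attempt the shrink; target size is in MB (logical name is an assumed example)
DBCC SHRINKFILE (tempdev2, 1600);
GO
```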
--Vadim R.
July 19, 2019 at 3:47 pm
CHECKPOINT;
GO
DBCC DROPCLEANBUFFERS;
GO
Is that something that can be done on a production system?
My understanding was that it could be a bit risky?
July 21, 2019 at 9:20 pm
Not sure if people actually read the question.
Best practice for TEMPDB is still to have multiple files. The general rule is one file per physical CPU up to 8 (not vCPU in hyper-threaded environments). This is because of the way SQL manages TempDB and "helps you avoid the GAM/SGAM/PFS page contention issues", and has nothing to do with disk IO. See Brent Ozar's article here: https://www.brentozar.com/archive/2016/01/cheat-sheet-how-to-configure-tempdb-for-microsoft-sql-server/
The reason you can't shrink the old files is that you can't shrink ANY database file to less than its "initial create size". I feel this is an issue in SQL Server and have always wondered why Microsoft has never resolved it. You would need to drop and recreate the old files, but you'll not be able to drop the first data file (tempdev), as it contains the system tables.
Without knowing the size of the old files, my recommendations would be a bit general, and you will need to adjust them for your situation as practical.
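As a general sketch of the drop-and-recreate approach (the logical name temp3, the path, and the size are assumed examples; the EMPTYFILE step can fail on tempdb if pages in the file are still in use, in which case you may need to retry during a quiet period):

```sql
-- Check current logical names and sizes first (size is in 8 KB pages)
SELECT name, size / 128 AS size_mb
FROM tempdb.sys.database_files;
GO
USE tempdb;
GO
-- Move allocations off the file so it can be dropped
DBCC SHRINKFILE (temp3, EMPTYFILE);
GO
ALTER DATABASE tempdb REMOVE FILE temp3;
GO
-- Recreate the file at the target size (path is a placeholder)
ALTER DATABASE tempdb
    ADD FILE (NAME = temp3, FILENAME = 'T:\tempdb\temp3.ndf', SIZE = 1600MB);
GO
```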
I hope this gives you something you can work with.
Cheers
Leo
Nothing in life is ever so complicated that with a little work it can't be made more complicated.
July 22, 2019 at 12:17 pm
Not entirely true, Leo... I've shrunk the files for TempDB just like any other file (although the last time I did so was way back in SS 2012). Contrary to popular previous beliefs, you don't need to reboot the server to do so and it isn't a sin to do so.
Why would you want to shrink TempDB, especially if it's on its own drive and nothing else can be allocated there? The answer is simple... to catch bad code that inappropriately uses way too much of TempDB. You set up a "growth" alert and have it capture what's currently going on in the server. After that, it's pretty easy to isolate the bad code.
And TF 1117 isn't just the default for TempDB... you can't actually turn it off even when you need to do so temporarily. Why would you need to? If you want to copy a large amount of data with IDENTITY_INSERT turned on, ALL of the data goes through TempDB and only to a single file but, if all the files grow at the same time, you may not actually have enough space allocated to the drive TempDB lives on to handle the growth of all 8 files instead of just the one file. I know this is true for two reasons... there is a fault listed on the new "connect" site and <drumroll please> it happened to me. It destroyed the final action of a massive table migration and totally negated all of the work done for the migration for that weekend release.
Microsoft has to stop protecting idiots by doing things like a permanent TF 1117 on TempDB.
--Jeff Moden
Change is inevitable... Change for the better is not.
July 22, 2019 at 5:46 pm
Not entirely true, Leo... I've shrunk the files for TempDB just like any other file (although the last time I did so was way back in SS 2012). Contrary to popular previous beliefs, you don't need to reboot the server to do so and it isn't a sin to do so.
And if you do, you can shrink files to less than "initial size" since there really isn't such a thing maintained as "initial size" - it's just a label used (incorrectly) in SSMS. The documentation correctly mentions minimum size, how it's reset and how files can be shrunk to less than the size at creation.
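To see what you're actually working with before shrinking, a quick sketch (the logical name tempdev and the 1600 MB target are assumed examples; the size column is in 8 KB pages, so dividing by 128 gives MB):

```sql
-- Current sizes of all tempdb files, converted from pages to MB
SELECT name, physical_name, size / 128 AS size_mb
FROM tempdb.sys.database_files;
GO
-- Shrink a file to a 1600 MB target
USE tempdb;
GO
DBCC SHRINKFILE (tempdev, 1600);
GO
```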
Sue