March 3, 2011 at 2:11 pm
Windows 2008 R2 64-bit running SQL 2005 SP3 64-bit
24 CPUs, 65 GB memory
Tempdb is configured with:
16 data files, 512 MB each, with no growth option
1 log file, 8192 MB, with no growth option
Database size is 400 MB
The problem happens when running a specific report query that joins a huge number of views, which themselves already contain a significant amount of joins.
I know, sounds VERY bad.
That query is failing with the error below. I looked at the execution plan and noticed that "Sorting" is the biggest contributor.
Which all makes sense, since all the sorting is happening in tempdb.
My question is: how is it that, with a database size of around 400 MB, tempdb is running out of space (see config above)?
Even if it creates a temp table for that sort output... if it's bigger than 512 MB, isn't it supposed to roll over to the next tempdb file, or better yet,
spread that temp table data across all 16 tempdb files?
Please help!
Msg 1105, Level 17, State 2, Line 1
Could not allocate space for object 'dbo.SORT temporary run storage: 142940294283264' in database 'tempdb' because the 'PRIMARY' filegroup is full.
Create disk space by deleting unneeded files, dropping objects in the filegroup, adding additional files to the filegroup, or setting autogrowth on for existing files in the filegroup.
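For anyone hitting the same 1105 error, a quick way to see how full each tempdb file actually is (and whether allocations really are spread across all 16 files) is the `sys.dm_db_file_space_usage` DMV, available in SQL 2005. A minimal sketch:

```sql
-- Per-file space usage in tempdb (page counts are 8 KB pages).
-- Sort/hash spills are counted as internal objects.
USE tempdb;

SELECT file_id,
       user_object_reserved_page_count     * 8 / 1024 AS user_objects_mb,
       internal_object_reserved_page_count * 8 / 1024 AS internal_objects_mb,
       unallocated_extent_page_count       * 8 / 1024 AS free_mb
FROM sys.dm_db_file_space_usage;
```

With proportional fill, a spilling sort should allocate fairly evenly across all the data files, so you would expect the `internal_objects_mb` numbers to climb together rather than one file filling first.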
March 3, 2011 at 2:33 pm
If you have many views with many joins, the amount of data in tempdb is quite likely significantly larger than the database size.
SQL Server will use all of the space that tempdb has available.
Why in the world is tempdb configured like that? It's a certain recipe for disaster.
March 4, 2011 at 7:01 am
I agree with Pam. 512 MB looks a little low for a tempdb file. Set the files to autogrow (for example up to 2 GB per file; adjust if that's still not enough) and then see how far tempdb grows. After that you can adjust the initial tempdb size to that value.
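A possible way to do that for one file (the logical file name here is an assumption; check yours first, and repeat per file):

```sql
-- List the logical names of the tempdb files first:
SELECT name, size * 8 / 1024 AS size_mb
FROM tempdb.sys.database_files;

-- Then enable autogrow with a 2 GB cap on a data file.
-- NOTE: 'tempdev2' is a hypothetical logical name -- substitute your own.
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev2, FILEGROWTH = 256MB, MAXSIZE = 2048MB);
```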
March 4, 2011 at 7:46 am
But the whole reason for multiple files is to spread the load between all those tempdb files.
And in total it equals 8 GB for data and 8 GB for log, considering that our biggest prod database is only 400-500 MB.
That should be enough... don't you agree?
Also, as per Microsoft's 'best practices' for tempdb configuration and placement:
1. Number of data files = number of CPUs
2. All files the same size
3. Autogrowth disabled
I actually was monitoring tempdb usage while I reran that 'bad' query, and it grew and filled the entire space allocated to tempdb... all 8 GB.
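For that kind of monitoring, one option (a sketch, SQL 2005+) is the per-task tempdb allocation DMV, which also tells you which session the spill belongs to:

```sql
-- While the 'bad' query runs, find which session is burning tempdb space.
-- internal_objects = sort/hash spill pages; counts are 8 KB pages.
SELECT t.session_id,
       SUM(t.internal_objects_alloc_page_count) * 8 / 1024 AS internal_alloc_mb,
       SUM(t.user_objects_alloc_page_count)     * 8 / 1024 AS user_alloc_mb
FROM sys.dm_db_task_space_usage AS t
GROUP BY t.session_id
ORDER BY internal_alloc_mb DESC;
```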
March 5, 2011 at 9:51 am
Yes, you can set a fixed size later, but first you need to determine the right fixed size for your environment. Then add some extra space for growth (20%, as per the Microsoft recommendation here: http://technet.microsoft.com/en-us/library/cc966545.aspx). The same article has tempdb sizing recommendations.
March 5, 2011 at 10:38 am
Just my 2 cents to get an idea of how many GB you'd need:
Save your execution plan and open it with SQL Sentry's (free) Plan Explorer.
It will give you a better idea of the data volume needed to serve your query.
edited:
of course depending on the accuracy of your statistics :ermm:
Johan
Learn to play, play to learn !
Dont drive faster than your guardian angel can fly ...
but keeping both feet on the ground wont get you anywhere :w00t:
- How to post Performance Problems
- How to post data/code to get the best help
- How to prevent a sore throat after hours of presenting ppt
press F1 for solution, press shift+F1 for urgent solution 😀
Need a bit of Powershell? How about this
Who am I ? Sometimes this is me but most of the time this is me
March 5, 2011 at 10:42 am
Let's address the real problem here...
You have an 8GB TempDB and that should be more than enough for virtually any query you may have in a 400MB database. The problem is the code. Somewhere in at least one of your queries, you have an "accidental cross-join" in the form of a many-to-many join. People usually, but not always, try to overcome the "duplicates" this will create by using DISTINCT or maybe even GROUP BY.
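A tiny illustration (table names are made up) of how a many-to-many join explodes rows and how DISTINCT hides it:

```sql
-- Hypothetical example: joining two tables on a NON-unique column.
-- If Orders and Shipments each have several rows per CustomerID, the
-- join produces (rows per customer in Orders) x (rows per customer in
-- Shipments) rows per customer -- an "accidental cross-join".
SELECT DISTINCT o.CustomerID          -- DISTINCT masks the duplicates,
FROM   dbo.Orders    AS o             -- but the huge intermediate result
JOIN   dbo.Shipments AS s             -- still has to be built (and sorted
       ON s.CustomerID = o.CustomerID -- for DISTINCT) in tempdb first.
```

A customer with 1,000 orders and 1,000 shipments yields a million joined rows before DISTINCT collapses them back to one.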
Since you can't actually run the query because it runs TempDB out of space, you can't look for "fat" arrows in an actual execution plan, which is the normal dead giveaway for such "accidental cross-joins", and an estimated execution plan may not show the problem (although it IS a good place to start).
So, you're going to have to "do it by the numbers". Start by checking the actual execution plan of each view. Look for "fat" arrows where the number of rows indicated by an arrow is larger than any of the tables being referenced. You also need to check the views and make sure that no "calculated" columns (aggregate or otherwise, but especially aggregates) are being used as join criteria (in an ON or WHERE clause) or as "limit" criteria in a WHERE clause, because those will require the entire view to be "materialized" in TempDB prior to filtering, the same as non-SARGable expressions in criteria.
This won't be fun but it's absolutely necessary. I'll also tell you that no amount of hardware that you may throw at such problems will help, and additional indexing may not help either.
I'll also say that a whole lot of people think that high performance, set based code must be "all in one query" (views are considered to be a part of the query) and nothing could be further from the truth. "All in one" technology makes it difficult to find such performance problems and frequently is the only reason for such performance problems. It would be far better to create a stored procedure that may store interim results in Temp Tables in a thoughtful "Divide'n'Conquer" manner. As for any DBA that refuses the idea of using Temp Tables because of supposed pressure on TempDB, consider what the current query is doing to TempDB. I've turned many a 45 minute run into a 3 second run using the very "Divide'n'Conquer" methodology that I'm speaking of.
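As a sketch of what that "Divide'n'Conquer" might look like (all table, column, and procedure names here are hypothetical):

```sql
-- Instead of one giant view-on-view query, stage small interim results
-- in temp tables so each join works on a pre-reduced set.
CREATE PROCEDURE dbo.ReportDivideAndConquer
AS
BEGIN
    -- Step 1: resolve just the rows in scope (a small driver set).
    SELECT CustomerID
    INTO   #ActiveCustomers
    FROM   dbo.Customers
    WHERE  IsActive = 1;

    -- Step 2: pre-aggregate each side BEFORE joining, so the final
    -- join sees one row per customer per side -- no row explosion.
    SELECT CustomerID, SUM(Amount) AS TotalOrders
    INTO   #OrderTotals
    FROM   dbo.Orders
    GROUP BY CustomerID;

    -- Step 3: the final join is over small, pre-aggregated sets.
    SELECT ac.CustomerID, ot.TotalOrders
    FROM   #ActiveCustomers AS ac
    JOIN   #OrderTotals     AS ot
           ON ot.CustomerID = ac.CustomerID;
END;
```

Each intermediate temp table stays tiny, so the total TempDB footprint is a fraction of what the all-in-one query spills for its sorts.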
--Jeff Moden
Change is inevitable... Change for the better is not.
March 6, 2011 at 9:55 am
With this server configuration, I have to assume you either have multiple instances, or you have many more databases in that instance than just this one database.
If this is a single instance and you have many more databases, I am very confident that you have probably undersized tempdb.
If this is a multi-instance system with a single database, then you probably have oversized tempdb for this instance and needlessly created multiple files for tempdb where they are not necessary.
Are your tempdb data files on a dedicated LUN? Is that dedicated LUN actually on dedicated spindles, or is it shared storage?
Either way, Jeff Moden is correct - your problem is the query and is where you should focus your efforts.
Jeffrey Williams
“We are all faced with a series of great opportunities brilliantly disguised as impossible situations.”
― Charles R. Swindoll
How to post questions to get better answers faster
Managing Transaction Logs
March 7, 2011 at 6:38 am
1) 16 files is almost certainly unnecessary, and unless they are on separate spindles they could be causing IO performance problems due to excessive head thrashing.
2) Why no autogrowth on those tempdb files?
3) I guarantee you that I can fill up an 8GB tempdb with data from a 400MB database for a SORT operation - it is called a cartesian product. This is VERY easy to do when you have umpteen-nested views to deal with. Take a look at your estimated query plan for your query - you might get lucky and see a VERY fat line there. That could lead you to the issue. But it isn't guaranteed (since it is an ESTIMATEd query plan - not what actually happens). Also look at the EP for an indication of a missing join criteria. If you don't find that, then you have no choice but to dig into the views and look for the cartesian manually.
4) There is a bug in the engine concerning tlog files built in increments of 4GB. Search sqlskills.com for information. 8GB is also too big for a single tlog - too few VLFs can cause other issues outside of the known bug.
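A quick way to check the VLF layout Kevin mentions (a sketch; DBCC LOGINFO is undocumented but works on SQL 2005, and returns one row per VLF):

```sql
-- Count the virtual log files (VLFs) in the tempdb log.
-- An 8 GB log created in a single shot is carved into only 16 VLFs
-- of roughly 512 MB each -- unusually coarse, on top of the 4 GB
-- increment bug mentioned above.
USE tempdb;
DBCC LOGINFO;
```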
Best,
Kevin G. Boles
SQL Server Consultant
SQL MVP 2007-2012
TheSQLGuru on googles mail service
March 7, 2011 at 8:23 am
TheSQLGuru (3/7/2011)
3) I guarantee you that I can fill up an 8GB tempdb with data from a 400MB database for a SORT operation - it is called a cartesian product. This is VERY easy to do when you have umpteen-nested views to deal with. Take a look at your estimated query plan for your query - you might get lucky and see a VERY fat line there. That could lead you to the issue. But it isn't guaranteed (since it is an ESTIMATEd query plan - not what actually happens). Also look at the EP for an indication of a missing join criteria. If you don't find that, then you have no choice but to dig into the views and look for the cartesian manually.
Great minds DO think alike. 😀
--Jeff Moden
March 7, 2011 at 8:37 am
Jeff Moden (3/7/2011)
TheSQLGuru (3/7/2011)
3) I guarantee you that I can fill up an 8GB tempdb with data from a 400MB database for a SORT operation - it is called a cartesian product. This is VERY easy to do when you have umpteen-nested views to deal with. Take a look at your estimated query plan for your query - you might get lucky and see a VERY fat line there. That could lead you to the issue. But it isn't guaranteed (since it is an ESTIMATEd query plan - not what actually happens). Also look at the EP for an indication of a missing join criteria. If you don't find that, then you have no choice but to dig into the views and look for the cartesian manually.
Great minds DO think alike. 😀
Ahh, you called it a 'cross-join'!
Best,
Kevin G. Boles
March 7, 2011 at 9:45 am
"Accidental Cross Join" and Many-to-Many join
--Jeff Moden