August 17, 2010 at 12:34 pm
I've done some research and it seems like every person has a different recommendation as to the number of data files the tempdb should have.
Here's my scenario - I have a very ill-performing 3rd party application. It takes 40-some hours to do its thing. The vendor has recommended that I create a tempdb data file for each core to see if that helps (there are 24 CPUs), per Microsoft's recommendation, of course. The tempdb mdf file is separated from all the other data files, but it is on a RAID5 disk (and the server administration team won't budge and change that). The tempdev file is about 1 GB at the moment. Does it really make sense to split tempdb into 24 data files? Or start smaller? Or won't it matter because it is on RAID5?
PS: I am not sure if any of this will help significantly, since none of the vendor's tables are indexed.:angry:
Thanks for your help and opinions in advance 🙂
August 17, 2010 at 1:04 pm
Paul Randal did some benchmarking for this in his blog.
Though his tests were not specifically for Tempdb, they support what I have read elsewhere. Performance gains tail off after 8 data files. Thus I would do no more than 8 files for the tempdb at this point. The only real way to know for sure is to test it on your systems though.
http://www.sqlskills.com/BLOGS/PAUL/post/Benchmarking-do-multiple-data-files-make-a-difference.aspx
In addition to that, I would check to make sure your logs are in good shape. Check how many VLFs you have; that could also have a significant impact.
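For reference, a quick way to check the VLF count (a minimal sketch; DBCC LOGINFO is undocumented but commonly used for this - the number of rows it returns is the number of VLFs):

-- Run in the database whose log you want to inspect;
-- one row is returned per VLF, so the row count = number of VLFs
USE YourDatabase;   -- illustrative database name
GO
DBCC LOGINFO;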
Jason...AKA CirqueDeSQLeil
_______________________________________________
I have given a name to my pain...MCM SQL Server, MVP
SQL RNNR
Posting Performance Based Questions - Gail Shaw
Learn Extended Events
August 17, 2010 at 1:07 pm
DBAgal (8/17/2010)
I've done some research and it seems like every person has a different recommendation as to the number of data files the tempdb should have.
Here's my scenario - I have a very ill-performing 3rd party application. It takes 40-some hours to do its thing. The vendor has recommended that I create a tempdb data file for each core to see if that helps (there are 24 CPUs), per Microsoft's recommendation, of course. The tempdb mdf file is separated from all the other data files, but it is on a RAID5 disk (and the server administration team won't budge and change that). The tempdev file is about 1 GB at the moment. Does it really make sense to split tempdb into 24 data files? Or start smaller? Or won't it matter because it is on RAID5?
PS: I am not sure if any of this will help significantly, since none of the vendor's tables are indexed.:angry:
Thanks for your help and opinions in advance 🙂
I think the "one temp-datafile per core" is a recommendation traced back to the time cutting edge server offered about four cores.
I'll second previous post, start with eight.
On the other hand... no indexes at all? - I do not think the main performance issue relates to tempdb.
_____________________________________
Pablo (Paul) Berzukov
Author of Understanding Database Administration available at Amazon and other bookstores.
Disclaimer: Advice is provided to the best of my knowledge but no implicit or explicit warranties are provided. Since the advisor explicitly encourages testing any and all suggestions on a test, non-production environment, the advisor should not be held liable or responsible for any actions taken based on the given advice.
August 17, 2010 at 1:11 pm
PaulB-TheOneAndOnly (8/17/2010)
DBAgal (8/17/2010)
I've done some research and it seems like every person has a different recommendation as to the number of data files the tempdb should have.
Here's my scenario - I have a very ill-performing 3rd party application. It takes 40-some hours to do its thing. The vendor has recommended that I create a tempdb data file for each core to see if that helps (there are 24 CPUs), per Microsoft's recommendation, of course. The tempdb mdf file is separated from all the other data files, but it is on a RAID5 disk (and the server administration team won't budge and change that). The tempdev file is about 1 GB at the moment. Does it really make sense to split tempdb into 24 data files? Or start smaller? Or won't it matter because it is on RAID5?
PS: I am not sure if any of this will help significantly, since none of the vendor's tables are indexed.:angry:
Thanks for your help and opinions in advance 🙂
I think the "one temp-datafile per core" is a recommendation traced back to the time cutting edge server offered about four cores.
I'll second previous post, start with eight.
On the other hand... no indexes at all? - I do not think the main performance issue relates to tempdb.
I have to agree that tempdb is probably the least of the worries on this one. 40 hours to complete a job seems a bit overboard.
Jason...AKA CirqueDeSQLeil
_______________________________________________
I have given a name to my pain...MCM SQL Server, MVP
SQL RNNR
Posting Performance Based Questions - Gail Shaw
Learn Extended Events
August 17, 2010 at 1:13 pm
I agree, but the vendor is grasping at straws here.
The same vendor told me to disable auto create/update statistics on their database. Oh boy...
August 17, 2010 at 1:16 pm
Also, thanks for the suggestions 🙂
August 17, 2010 at 1:24 pm
How big is this database?
Jason...AKA CirqueDeSQLeil
_______________________________________________
I have given a name to my pain...MCM SQL Server, MVP
SQL RNNR
Posting Performance Based Questions - Gail Shaw
Learn Extended Events
August 17, 2010 at 1:27 pm
CirquedeSQLeil (8/17/2010)
How big is this database?
Over 400 GB...
August 17, 2010 at 3:02 pm
The database is not too terribly large. That leads me to question the process that is running and taking 40 hours. The number of records being processed, and the method by which those records are processed, will have a big impact (besides the indexes).
Could you describe the process or enlighten us in that regard?
Jason...AKA CirqueDeSQLeil
_______________________________________________
I have given a name to my pain...MCM SQL Server, MVP
SQL RNNR
Posting Performance Based Questions - Gail Shaw
Learn Extended Events
August 17, 2010 at 3:53 pm
When your application allocates temp tables at an extremely high rate, having more data files in tempdb can help.
There are internal data structures in tempdb data files that can become a point of contention and blocking when new temp tables are created, so increasing the number of files can help eliminate this, because allocations are spread across the files in a round-robin fashion, weighted by data file size.
Paul Randal and others have discussed this situation in more depth than I could, so you should read Paul's blog links from the earlier posts on this thread.
I will say that I have hit this situation once, and creating new data files helped, but we eliminated the problem by changing the procedure that created hundreds of temp tables every time it ran.
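For illustration, a minimal sketch of how additional, equally sized data files could be added (the file names, path, and sizes below are illustrative; they should match the size of the existing tempdev file):

-- Add equally sized tempdb data files; names, path and sizes are illustrative
ALTER DATABASE tempdb
ADD FILE (NAME = N'tempdev2', FILENAME = N'T:\tempdb\tempdev2.ndf',
          SIZE = 1024MB, FILEGROWTH = 256MB);

ALTER DATABASE tempdb
ADD FILE (NAME = N'tempdev3', FILENAME = N'T:\tempdb\tempdev3.ndf',
          SIZE = 1024MB, FILEGROWTH = 256MB);

-- ...repeat up to the chosen number of files (e.g. 8 total), keeping all files
-- the same size so allocations are spread evenly across them.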
August 18, 2010 at 12:28 pm
CirquedeSQLeil (8/17/2010)
The database is not too terribly large. That leads me to question the process that is running and taking 40 hours. The number of records being processed, and the method by which those records are processed, will have a big impact (besides the indexes).
Could you describe the process or enlighten us in that regard?
That's the thing. The process and the code are like a black box. The process itself is calculating sales compensation, but exactly how it does it is a mystery.
Also, perfmon doesn't indicate that the server is being pegged during the process; there is no unusual activity. Hmm...
August 18, 2010 at 3:22 pm
DBAgal (8/18/2010)
CirquedeSQLeil (8/17/2010)
The database is not too terribly large. That leads me to question the process that is running and taking 40 hours. The number of records being processed, and the method by which those records are processed, will have a big impact (besides the indexes).
Could you describe the process or enlighten us in that regard?
That's the thing. The process and the code are like a black box. The process itself is calculating sales compensation, but exactly how it does it is a mystery.
Also, perfmon doesn't indicate that the server is being pegged during the process; there is no unusual activity. Hmm...
Sounds like an inefficient looping routine. Maybe it would be good to capture a replay trace of the activity for 15 minutes or so while this process runs.
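If a full replay trace is too heavy to leave running, a lightweight server-side trace can at least show which statements the black-box process issues and their duration, reads and CPU (a hedged sketch, not a replay trace; the file path is illustrative):

DECLARE @TraceID int, @maxsize bigint, @on bit;
SET @maxsize = 512;   -- MB per rollover file
SET @on = 1;

-- Create a file-rollover trace; the path is illustrative and must omit the .trc extension
EXEC sp_trace_create @TraceID OUTPUT, 2, N'C:\Traces\blackbox_calc', @maxsize, NULL;

-- Capture completed batches and RPC calls with text, duration, reads and CPU
EXEC sp_trace_setevent @TraceID, 12, 1,  @on;   -- SQL:BatchCompleted, TextData
EXEC sp_trace_setevent @TraceID, 12, 13, @on;   -- Duration
EXEC sp_trace_setevent @TraceID, 12, 16, @on;   -- Reads
EXEC sp_trace_setevent @TraceID, 12, 18, @on;   -- CPU
EXEC sp_trace_setevent @TraceID, 10, 1,  @on;   -- RPC:Completed, TextData
EXEC sp_trace_setevent @TraceID, 10, 13, @on;   -- Duration
EXEC sp_trace_setevent @TraceID, 10, 16, @on;   -- Reads
EXEC sp_trace_setevent @TraceID, 10, 18, @on;   -- CPU

EXEC sp_trace_setstatus @TraceID, 1;            -- start the trace
-- Later: sp_trace_setstatus @TraceID, 0 (stop) and 2 (close), then read the file with
-- SELECT * FROM fn_trace_gettable(N'C:\Traces\blackbox_calc.trc', DEFAULT);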
Jason...AKA CirqueDeSQLeil
_______________________________________________
I have given a name to my pain...MCM SQL Server, MVP
SQL RNNR
Posting Performance Based Questions - Gail Shaw
Learn Extended Events
August 31, 2010 at 9:59 am
I created 7 more tempdb data files, for a total of 8, on the server in question last Friday. The long calc job ran over the weekend; there was no performance improvement. Most of the waiting is on PAGEIOLATCH_SH. There was only one statement that was blocked, for a total of 3 seconds. I'm not even sure if adding more tempdb files will help the cause, but maybe adding more RAM will? The server currently has 64 GB.
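A query along these lines can confirm the dominant waits during the run (a hedged sketch; sys.dm_os_wait_stats accumulates since the last restart, so clearing it just before the run isolates the job's waits):

-- Optional: reset the cumulative wait stats just before the calc job starts
-- DBCC SQLPERF (N'sys.dm_os_wait_stats', CLEAR);

-- Top waits since the counters were last cleared / the instance restarted
SELECT TOP (10)
       wait_type, waiting_tasks_count, wait_time_ms, signal_wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN (N'SLEEP_TASK', N'LAZYWRITER_SLEEP', N'BROKER_TASK_STOP',
                        N'SQLTRACE_BUFFER_FLUSH', N'WAITFOR',
                        N'REQUEST_FOR_DEADLOCK_SEARCH')
ORDER BY wait_time_ms DESC;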
Did anyone find that enabling Trace flag 1118 in SQL 2008 helps with tempdb scalability?
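For reference, a minimal sketch of how the flag is typically enabled for a quick test (the permanent route is the -T1118 startup parameter; note that 1118 forces uniform-extent allocations and mainly helps PAGELATCH-type allocation contention rather than PAGEIOLATCH_SH, which is an I/O wait):

-- Enable trace flag 1118 globally until the next instance restart (quick test)
DBCC TRACEON (1118, -1);

-- Confirm it is active
DBCC TRACESTATUS (1118, -1);

-- To make it permanent, add -T1118 as a startup parameter via
-- SQL Server Configuration Manager and restart the instance.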
Thanks!
September 1, 2010 at 4:37 am
Hi,
I'm not sure tempdb is why the application is taking more than 40 hours to complete its workload, unless the system is experiencing contention on the PFS, SGAM, and GAM allocation pages within tempdb, which would then potentially justify implementing the one-equally-sized-file-per-core recommendation. This behavior normally happens when an application creates a very high number of temporary table objects, causing a bottleneck in the allocation of those objects.
I would investigate the performance issue a bit more, but if you want to identify any tempdb bottleneck in terms of allocation contention, the following query can help:
-- Tasks currently waiting on page latches against tempdb (database_id = 2);
-- resource_description has the form database_id:file_id:page_id
select session_id, wait_duration_ms, resource_description
from sys.dm_os_waiting_tasks
where wait_type like 'PAGE%LATCH_%'   -- PAGELATCH_% (allocation contention) and PAGEIOLATCH_% (I/O)
and resource_description like '2:%'
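If that returns rows, a follow-up along these lines can show whether the contended pages are the PFS/GAM/SGAM allocation pages mentioned above (a hedged sketch; the arithmetic assumes the standard allocation-page intervals of 8,088 pages for PFS and 511,232 pages for GAM/SGAM within each data file):

-- Classify contended tempdb pages as PFS / GAM / SGAM allocation pages
SELECT wt.session_id, wt.wait_type, wt.wait_duration_ms, wt.resource_description,
       CASE
           WHEN p.page_id = 1 OR p.page_id % 8088 = 0         THEN 'PFS page'
           WHEN p.page_id = 2 OR p.page_id % 511232 = 0       THEN 'GAM page'
           WHEN p.page_id = 3 OR (p.page_id - 1) % 511232 = 0 THEN 'SGAM page'
           ELSE 'Not an allocation page'
       END AS page_type
FROM sys.dm_os_waiting_tasks AS wt
CROSS APPLY (SELECT CAST(RIGHT(wt.resource_description,
                  CHARINDEX(':', REVERSE(wt.resource_description)) - 1) AS int) AS page_id) AS p
WHERE wt.wait_type LIKE 'PAGELATCH_%'          -- allocation contention, not PAGEIOLATCH (I/O)
  AND wt.resource_description LIKE '2:%';      -- database_id 2 = tempdb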
Hope this helps.
Phillip Cox