May 5, 2008 at 7:36 am
You can't add more filegroups to TempDB, just more files (tempdb1.ndf, tempdb2.ndf, ...).
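If it helps, here's a minimal sketch of adding a second data file; the logical name, path, and size below are hypothetical placeholders, so adjust them for your own server:

-- Add another data file to tempdb's single data filegroup
ALTER DATABASE tempdb
ADD FILE (NAME = tempdev2,                     -- hypothetical logical name
          FILENAME = 'D:\SQLData\tempdb2.ndf', -- hypothetical path
          SIZE = 1GB);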
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
May 5, 2008 at 7:40 am
Anyway, I know that I should improve my knowledge as soon as possible; do you have any suggestions? I'm studying books, but they don't give me experience.
My knowledge started with taking some classes, which your employer should pay for. I took Administering SQL Server and then Programming SQL Server from New Horizons. There are many training facilities that provide Microsoft training; I personally only have experience with New Horizons and was generally pleased.
These training courses provide real-world examples as well as ways to solve problems. My instructors had real-world experience as well, so I was able to bring my issues to class and get the help I needed to solve my specific problems. After you have these courses behind you, communicating in a forum such as this one becomes easier, since you have a good understanding of the foundation of the product. The classes I took were for SQL Server 2000, and the knowledge learned from those classes transferred easily to SQL Server 2005 when we upgraded, so I didn't have to take any further classes.
May 5, 2008 at 8:08 am
GilaMonster (5/5/2008)
Here's something:
Thanks, Gail... That's one of the ones I was trying to find... I can't find the other one that explains the "why" of having one file for each CPU and what the advantage is (I really gotta start saving these things).
--Jeff Moden
Change is inevitable... Change for the better is not.
May 5, 2008 at 8:15 am
Pleasure. I don't bother bookmarking all the white papers; I just use this instead:
http://www.sqlskills.com/whitepapers.asp
I don't know offhand of an article on why one file per CPU. Perhaps on the PSS engineers' blog? http://blogs.msdn.com/psssql
Some discussion here: http://blogs.msdn.com/psssql/archive/2007/02/21/sql-server-urban-legends-discussed.aspx
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
May 5, 2008 at 9:05 am
Thanks re the ndf files...
For training - yes, I would recommend New Horizons; I did that course for SQL 2005.
Forums are good too - people are always willing to help, share the same experiences, offer new methods, and point out what to watch out for... I learnt more from forums than from reading a book... Thanks to everyone who writes here... great way of learning.
May 5, 2008 at 9:06 am
GilaMonster (5/5/2008)
Here's something: Working with tempdb in SQL Server 2005
An excerpt...
tempdb supports only one data filegroup and one log filegroup. By default, the number of files is set to 1. Multiple files can be created for each filegroup. Adding more data files may help to solve potential performance problems that are due to I/O operations. Increasing the number of files helps to avoid a latch contention on allocation pages (manifested as a UP-latch). The recommended solution is to set the number of files to match the number of CPUs that are configured for the instance. This is only a recommendation—the number of files might not be the same as the number of CPUs.
Having too many files increases the cost of file switching, requires more IAM pages, and increases the manageability overhead. The size of each file is configurable. More files can be added, moved, or removed. To better use the allocation mechanism (proportional fill), the size of the files should be equal. Generally, for SQL Server 2005 it is recommended that you create tempdb files striped across fast disks.
Before the case study portion of this white paper, the following configurations are recommended:
• Create the tempdb database on a fast I/O subsystem. Use disk striping across numerous directly attached disks.
• Separate the disk space dedicated to tempdb from user databases.
• Create many files to maximize disk bandwidth and to reduce contention in allocation structures. As a general guideline, create one data file per CPU. Each file should be set to the same size. This enables the proportional fill algorithm to distribute the allocation load uniformly with minimal contention.
Looking at these recommendations, I am not sold on the last one. Does anyone know if a whitepaper exists that tested this recommendation? How would a SAN or VMware affect the recommendations? I suspect that the recommendations would be the same for a VM, but I would like to see test results if someone has tested any of these.
Q
Please take a number. Now serving emergency 1,203,894
May 5, 2008 at 11:31 am
INSERTs are slow? Check for the existence of insert triggers on the tables where the data is being inserted.
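A quick way to check is via the catalog views (a sketch assuming SQL Server 2005; sys.triggers and sys.trigger_events are the standard views there):

-- List INSERT triggers and the tables they fire on
SELECT OBJECT_NAME(t.parent_id) AS table_name,
       t.name AS trigger_name,
       t.is_disabled
FROM sys.triggers t
JOIN sys.trigger_events te ON te.object_id = t.object_id
WHERE te.type_desc = 'INSERT';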
May 5, 2008 at 12:49 pm
Q (5/5/2008)
• Create many files to maximize disk bandwidth and to reduce contention in allocation structures. As a general guideline, create one data file per CPU. Each file should be set to the same size. This enables the proportional fill algorithm to distribute the allocation load uniformly with minimal contention.
Looking at these recommendations, I am not sold on the last one. Does anyone know if a whitepaper exists that tested this recommendation? How would a SAN or VMware affect the recommendations? I suspect that the recommendations would be the same for a VM, but I would like to see test results if someone has tested any of these.
No whitepaper that I know of, but I have personal before-and-after experience.
I have a system that uses temp tables heavily, with 12 CPUs. With only one file for TempDB, I often saw waits in sysprocesses, sometimes quite long ones, on a page latch for 2:1:3 (one of the allocation pages in TempDB).
We split the tempDB into 6 files, all on the same disk, over one weekend, and I have not seen a single wait on that allocation page since.
Dunno about VMs, but SAN or no SAN, tempdb files should be on fast drives.
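If you want to watch for it yourself, here's a minimal sketch against sysprocesses (2:1:3 is tempdb's SGAM page; this is an illustration, not a tuned monitoring script):

-- Sessions currently waiting on the tempdb SGAM allocation page
SELECT spid, lastwaittype, waitresource, waittime
FROM master..sysprocesses
WHERE waitresource = '2:1:3';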
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
May 5, 2008 at 1:14 pm
So you have the mdf, the ldf, and 4 other ndf files. What sizes did you make your tempdb files, and did you set up a max growth size?
I will look out for the page latch on 2:1:3 in sysprocesses.
May 5, 2008 at 1:21 pm
It depends on how much usage your tempDB gets. What size is the file set to now?
For the proportional fill to work, all the files should be the same size, so autogrow should be disabled. Hence you need to make very sure that the files are properly sized.
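As a sketch, assuming two data files whose logical names sp_helpfile reports as tempdev and tempdev2 (yours may differ):

-- Fix each data file at the same size and disable autogrow,
-- one property per MODIFY FILE statement
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev,  SIZE = 1GB);
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev,  FILEGROWTH = 0);
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev2, SIZE = 1GB);
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev2, FILEGROWTH = 0);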
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
May 5, 2008 at 1:27 pm
4 GB right now... I have seen it go to 118 GB - nice little runaway script, lol.
I'll do some investigation, watch the growth, and see what the files grow to on a busy day.
I've got 2 TB, so I could do 10 GB for each one, but there's no need to go overboard... First I will determine whether we're getting the page latch waits on 2:1:3.
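One way to take that measurement (a sketch assuming SQL Server 2005; size is stored in 8 KB pages):

-- Snapshot current tempdb file sizes in MB; run it a few times on a busy day
SELECT name, size * 8 / 1024 AS size_mb
FROM tempdb.sys.database_files;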
Thanks for the advice.
May 6, 2008 at 2:59 am
jim.powers (5/5/2008)
Anyway, I know that I should improve my knowledge as soon as possible; do you have any suggestions? I'm studying books, but they don't give me experience.
My knowledge started with taking some classes, which your employer should pay for. I took Administering SQL Server and then Programming SQL Server from New Horizons. There are many training facilities that provide Microsoft training; I personally only have experience with New Horizons and was generally pleased.
These training courses provide real-world examples as well as ways to solve problems. My instructors had real-world experience as well, so I was able to bring my issues to class and get the help I needed to solve my specific problems. After you have these courses behind you, communicating in a forum such as this one becomes easier, since you have a good understanding of the foundation of the product. The classes I took were for SQL Server 2000, and the knowledge learned from those classes transferred easily to SQL Server 2005 when we upgraded, so I didn't have to take any further classes.
Thanks Jim.
My employer hasn't thought of sending me to classes yet. Maybe he will soon 😎
By the way, I'll try to find a good, and especially a cheap, way to take part in good SQL classes.
May 6, 2008 at 3:24 am
TRACEY (5/5/2008)
Thanks re the ndf files... For training - yes, I would recommend New Horizons; I did that course for SQL 2005.
Forums are good too - people are always willing to help, share the same experiences, offer new methods, and point out what to watch out for... I learnt more from forums than from reading a book... Thanks to everyone who writes here... great way of learning.
New Horizons is good for people like you who live in good countries - not me 🙂, because I live in a country whose boss barks at everyone :hehe:, and the drawbacks fall on me and prevent me from having a credit card, so I have to read books that are free... Anyway, I do learn a lot from forums too. Thanks a million to all of you 🙂
May 6, 2008 at 6:21 am
Hi, go here: http://safari.peachpit.com/
It costs $20 a month and you can read up to 10 books at a time... any SQL book, anything you want... It's the best 20 bucks I have spent... If I had known about this when I was at college, I could have saved some money.
May 6, 2008 at 7:30 am
GilaMonster (5/5/2008)
Q (5/5/2008)
• Create many files to maximize disk bandwidth and to reduce contention in allocation structures. As a general guideline, create one data file per CPU. Each file should be set to the same size. This enables the proportional fill algorithm to distribute the allocation load uniformly with minimal contention.
Looking at these recommendations, I am not sold on the last one. Does anyone know if a whitepaper exists that tested this recommendation? How would a SAN or VMware affect the recommendations? I suspect that the recommendations would be the same for a VM, but I would like to see test results if someone has tested any of these.
No whitepaper that I know of, but I have personal before-and-after experience.
I have a system that uses temp tables heavily, with 12 CPUs. With only one file for TempDB, I often saw waits in sysprocesses, sometimes quite long ones, on a page latch for 2:1:3 (one of the allocation pages in TempDB).
We split the tempDB into 6 files, all on the same disk, over one weekend, and I have not seen a single wait on that allocation page since.
Dunno about VMs, but SAN or no SAN, tempdb files should be on fast drives.
GM,
I appreciate the feedback. It appears to me that you have a very large database, or multiple large databases, using tempdb. Are you only using 6 of the 12 CPUs for SQL Server, and is that how you determined the number of files to create? I am assuming you are using 32-bit Enterprise Edition on Windows Server 2003. Do you have a cluster?
Q
Please take a number. Now serving emergency 1,203,894