May 6, 2008 at 7:58 am
2 very big databases both used by 1 app. Very heavy temp table usage.
All 12 processors used by SQL. 64-bit Itanium Enterprise Edition on Server 2003 Datacenter. It was SQL 2000 at the time.
Cluster, 2 nodes, active/passive.
We decided to split tempdb into 6 files as a first step and, if we were still seeing problems, split again to 12. That proved unnecessary.
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
May 6, 2008 at 9:48 pm
Jeff Moden (5/4/2008)
jeff.williams3188 (5/4/2008)
Jeff - can you provide a link to where it is suggested that you have a separate drive for each tempdb file? I was not aware of that condition and, in fact, think it is wrong.
I don't see anything in the URL you posted to contradict what I said. That notwithstanding, I'll have to see if I can find the MS article that recommends what I said 'cause you're right... gotta have backup for everything said.
Actually, I was only looking at the recommendation for a separate drive for each file. And I have to agree with Gila Monster - if you can afford it, that would be the optimal solution, but with the new virtual arrays (HP EVA, Netapp, etc.) why would you?
Jeff
Jeffrey Williams
“We are all faced with a series of great opportunities brilliantly disguised as impossible situations.”
― Charles R. Swindoll
How to post questions to get better answers faster
Managing Transaction Logs
May 6, 2008 at 10:45 pm
Ah... understood.
Dunno why, but ever since I had a boss that wanted to pay me with "virtual money", I've been put off by the word "virtual". 😛 Haven't gotten to play near a virtual array... what's their advantage over a traditional disk farm?
--Jeff Moden
Change is inevitable... Change for the better is not.
May 7, 2008 at 2:36 pm
Maybe by giving you an example of two SANs I work with:
HP EVA 5000
Disk Group 1 has 104 72GB/15K drives
Disk Group 2 has 24 300GB/10K drives
When I need a LUN presented to a server, I just ask for a 400GB LUN (with a defined RAID level) to be carved out of disk group 1. That LUN will use all 104 drives and is presented to the DB server for the data files. I then ask for a 100GB LUN from the same disk group - it will also use all 104 drives and is presented to the DB server for the log files.
Netapp storage uses a different method, but ends up being the same thing. On the Netapp, you define an aggregate group of disks (with a defined RAID level). You then define LUNs (again, say 400GB), present them to the DB server for your data files, and all of the disks defined in the aggregate group will be used (I think we have up to 60 144GB drives in our aggregates). I am not as involved in how the Netapp storage is built, so I really could not say how many aggregates are defined or how they are allocated.
In either case, you have a lot of spindles available and the storage array manages it all for you.
Jeff
Jeffrey Williams
“We are all faced with a series of great opportunities brilliantly disguised as impossible situations.”
― Charles R. Swindoll
How to post questions to get better answers faster
Managing Transaction Logs
May 7, 2008 at 5:40 pm
Apologies if it has been mentioned in this thread already, but you will want your tempdb data files isolated on their own physical device (RAID 1 if possible), letting them occupy the entire drive with a couple of GB of free space, and turn autogrowth off on tempdb. You should have as many tempdb data files as CPUs on your server.
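If you want to sanity-check that ratio on a 2005-or-later instance, something like this should do it (just a rough sketch using the standard DMVs, not a hard rule):
-- Compare visible schedulers (roughly, the CPUs SQL Server can use) with tempdb data files
SELECT
(SELECT COUNT(*) FROM sys.dm_os_schedulers WHERE status = 'VISIBLE ONLINE') AS cpu_schedulers,
(SELECT COUNT(*) FROM tempdb.sys.database_files WHERE type_desc = 'ROWS') AS tempdb_data_files;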
__________________________________________________________________________________
SQL Server 2016 Columnstore Index Enhancements - System Views for Disk-Based Tables
Persisting SQL Server Index-Usage Statistics with MERGE
Turbocharge Your Database Maintenance With Service Broker: Part 2
May 8, 2008 at 1:00 am
jeff.williams3188 (5/7/2008)
Maybe by giving you an example of two SANs I work with: HP EVA 5000
Disk Group 1 has 104 72GB/15K drives
Disk Group 2 has 24 300GB/10K drives
When I need a LUN presented to a server, I just ask for a 400GB LUN (with a defined RAID level) to be carved out of disk group 1. That LUN will use all 104 drives and is presented to the DB server for the data files. I then ask for a 100GB LUN from the same disk group - it will also use all 104 drives and is presented to the DB server for the log files.
Netapp storage uses a different method, but ends up being the same thing. On the Netapp, you define an aggregate group of disks (with a defined RAID level). You then define LUNs (again, say 400GB), present them to the DB server for your data files, and all of the disks defined in the aggregate group will be used (I think we have up to 60 144GB drives in our aggregates). I am not as involved in how the Netapp storage is built, so I really could not say how many aggregates are defined or how they are allocated.
In either case, you have a lot of spindles available and the storage array manages it all for you.
Jeff
Outstanding... thanks for taking the time to explain, Jeff.
--Jeff Moden
Change is inevitable... Change for the better is not.
February 4, 2009 at 1:55 am
Jeff Moden (4/26/2008)
The first 3 things I'd do are...
1. Update to SP2 and apply the latest cumulative update (#7, I believe).
2. Turn off autoshrink. If it keeps growing, it's because it needs to. Set it to a reasonable size and leave it alone. Every time it grows, it frags your database and, possibly, the underlying file on the operating system. Set it, defrag the DB, and get the boys in OPS to defrag the hard disk. Don't ever use a percentage growth... set it to some reasonable fixed value.
3. Same thing with TempDB... set it to something large on boot up. We have a terabyte database and we set TempDB to 12 gig. It takes 73 fragments using the defaults of 10% growth and 1 meg starting size just to grow to 1 gig. Avoid that and set TempDB to be a couple of gig on bootup.
I'm back with another performance problem 😛 (SQL Server is slow, not the OS, and there are over 200 connections at one time on the same database.)
Should I change the autogrowth property of the mdf and ldf files for better performance as well?
I've created a Profiler trace using the Tuning template and analyzed it to get information about reindexing. What other template do you suggest for better results?
By the way, our server is too slow to run a trace on. Running it once drew too many complaints from stakeholders. Do you have any other suggestions for resolving the performance problems quickly?
One more question: the tempdb mdf and ldf files are stored on drive C, and I don't have enough space there. Is there any problem if I move them to another drive? How should I do that? It doesn't let me modify the location of the files in the tempdb properties window!
February 4, 2009 at 8:50 am
I can't see why you would want to move tempdb to drive D unless you are out of space on C.
Moving tempdb to different partitions is a whole other subject, but since it seems you're on a single server there's no point... there is no gain.
How many processors do you have? Do you have enough tempdb files, one per processor?
What transactions are running? You can right-click the server name and see the active running jobs...
Are your tables indexed?
Leave autogrowth alone; if your databases are growing 1 gig an hour then perhaps you can change it to grow 5 gig each time... growing a database randomly during the day is not preferred and puts additional overhead on the system that is not necessary. You need to monitor the DB growth to determine if this is the case.
Are your DBs in full or simple recovery mode? If full, are you running log backups every x minutes?
How much memory do you have? Have you maxed it out? There's a rough sketch below of how to check some of this from T-SQL.
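Here's the sketch (SQL 2005 and up; 'YourDatabase' and the logical file name are just placeholders, not your actual names):
SELECT cpu_count FROM sys.dm_os_sys_info; -- processors visible to SQL Server
SELECT name, recovery_model_desc FROM sys.databases; -- full vs simple recovery per database
SELECT name, value_in_use FROM sys.configurations
WHERE name IN ('max server memory (MB)', 'min server memory (MB)'); -- memory settings
-- Example only: switch a data file from percent growth to a fixed increment
ALTER DATABASE YourDatabase
MODIFY FILE (NAME = YourDatabase_Data, FILEGROWTH = 512MB);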
February 7, 2009 at 1:52 am
Thanks for the detailed comment 🙂
I can't see why you would want to move tempdb to drive D unless you are out of space on C. Moving tempdb to different partitions is a whole other subject, but since it seems you're on a single server there's no point... there is no gain.
However, it doesn't let me change the properties. Is there any other way to do that?
How many processors do you have? Do you have enough tempdb files, one per processor?
How can I know that?
Are your tables indexed?
Didn't get that, sorry.
February 7, 2009 at 2:10 am
peace2007 (2/7/2009)
However, it doesn't let me change the properties. Is there any other way to do that?
Alter database. Check BoL, it has an example on moving tempdb on the alter database page.
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
February 7, 2009 at 4:04 am
Here are a couple of samples for configuring tempdb files:
To modify a file - move it, change its size etc.:
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev
,FILENAME = 'd:\someFolder\tempdb.mdf'
,SIZE = 11000MB
,FILEGROWTH = 0)
GO
To add a new file:
ALTER DATABASE tempdb
ADD FILE (NAME = tempdev4
,FILENAME = 'd:\someFolder\tempdb4.ndf'
,SIZE = 11000MB
,FILEGROWTH = 0)
GO
Run a script like the ones above to modify your tempdb files.
Restart your instance.
Verify your tempdb configuration has changed:
USE tempdb
GO
EXEC sp_helpfile
GO
Go to the old tempdb location and physically remove the old tempdb files, if your procedure involved moving them to another location.
__________________________________________________________________________________
SQL Server 2016 Columnstore Index Enhancements - System Views for Disk-Based Tables
Persisting SQL Server Index-Usage Statistics with MERGE
Turbocharge Your Database Maintenance With Service Broker: Part 2
February 7, 2009 at 4:14 am
Thanks a lot Marios 🙂
Is there any risk in configuring tempdb while there are so many concurrent transactions on the databases? Or should I do that when no one is connected to the database?
February 7, 2009 at 7:49 am
You have to restart the instance after you change tempdb so that you know it is picking up the right location... best to do it when no one is on there.
February 7, 2009 at 8:06 am
peace2007 (2/7/2009)
Thanks a lot Marios 🙂 Is there any risk in configuring tempdb while there are so many concurrent transactions on the databases? Or should I do that when no one is connected to the database?
Depends on what you are doing with tempdb.
If you are simply increasing the file size or changing the autogrowth setting on existing files you can get away without an instance restart.
If you are moving a file to another location, you definitely need to restart your instance, so that's an after-hours operation, and you need to notify your users in advance.
Right after configuring the tempdb files through scripts like the ones I posted, SQL Server will notify you whether an instance restart is necessary to activate the changes. I would go with that.
If an instance restart is not necessary, your users will likely not notice anything.
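One more tip: as soon as the ALTER DATABASE runs, sys.master_files already reports the new tempdb file paths, even though the physical files only move at the next restart, so you can sanity-check what will happen before restarting. A quick sketch:
SELECT name, physical_name, size/128 AS size_mb, growth
FROM sys.master_files
WHERE database_id = DB_ID('tempdb');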
__________________________________________________________________________________
SQL Server 2016 Columnstore Index Enhancements - System Views for Disk-Based Tables
Persisting SQL Server Index-Usage Statistics with MERGE
Turbocharge Your Database Maintenance With Service Broker: Part 2
February 8, 2009 at 12:21 am
Thanks again Marios 🙂