October 9, 2013 at 8:43 am
Hi,
I have a system with 1 Quad core CPU and 32GB of RAM.
The database files, the MDF only, are on a RAID 10 system (the LDF is on a RAID1 and tempdb on a SSD disk).
The database is for an ERP application that's quite write and read intensive.
Is it advisable to create an "extra" FILEGROUP, add another file (NDF) to the database, and move some tables to that FILEGROUP?
Some queries use UNIONs between two large tables. If one table was on the PRIMARY FILEGROUP and the other on my extra FILEGROUP, would there be any advantage?
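Just to make the idea concrete, this is roughly what I have in mind (database, file, and index names below are placeholders):

ALTER DATABASE MyERP ADD FILEGROUP FG_Extra;
ALTER DATABASE MyERP ADD FILE
    (NAME = N'MyERP_Extra1', FILENAME = N'D:\Data\MyERP_Extra1.ndf',
     SIZE = 10GB, FILEGROWTH = 1GB)
TO FILEGROUP FG_Extra;
-- Move a table by rebuilding its clustered index on the new filegroup
-- (works as long as the key and uniqueness stay the same):
CREATE UNIQUE CLUSTERED INDEX PK_BigTable
    ON dbo.BigTable (BigTableId)
    WITH (DROP_EXISTING = ON)
    ON FG_Extra;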
Thanks,
Pedro
October 9, 2013 at 9:15 am
-------Bhuvnesh----------
I work only to learn SQL Server... though my company pays me for getting their stuff done ;-)
October 9, 2013 at 9:30 am
First question: why are you splitting it, for recoverability or for performance? If performance, have you checked whether the current bottleneck is IO?
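Something like this against sys.dm_io_virtual_file_stats will show the per-file latencies (a rough sketch, not a polished script):

SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.name AS file_name,
       vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS avg_read_stall_ms,
       vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_stall_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
  ON mf.database_id = vfs.database_id
 AND mf.file_id = vfs.file_id
ORDER BY avg_write_stall_ms DESC;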
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
October 9, 2013 at 11:29 am
PiMané (10/9/2013)
Some queries use UNIONs between two large tables. If one table was on the PRIMARY FILEGROUP and the other on my extra FILEGROUP, would there be any advantage?
Unless you could guarantee that the two filegroups were on physically different drives/spindles, the answer is probably "No".
--Jeff Moden
Change is inevitable... Change for the better is not.
October 9, 2013 at 12:59 pm
GilaMonster (10/9/2013)
First question, why are you splitting it, recoverability or performance? If performance, have you checked whether the current bottleneck is IO?
Yes, the average stall per write on tempdb is 60ms, and reads aren't much better...
The big SELECT .. FROM (SELECT ... FROM .. UNION ALL SELECT ... FROM ...) t ORDER BY ... probably uses loads of tempdb... I'd probably get better results by putting tempdb on its own LUN, probably an SSD... (I mixed up two servers I'm checking :(... this one has tempdb on RAID 5 with 15K disks, along with the data files...)
Thanks,
Pedro
October 9, 2013 at 2:01 pm
60ms isn't bad. Not perfect, but way better than I've seen in many cases.
If you're seeing an IO bottleneck on TempDB, splitting the user DB into multiple files isn't going to do much.
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
October 9, 2013 at 2:58 pm
GilaMonster (10/9/2013)
60ms isn't bad. Not perfect, but way better than I've seen in many cases. If you're seeing an IO bottleneck on TempDB, splitting the user DB into multiple files isn't going to do much.
I think that time is mainly because it's RAID 5; the parity calculation adds some delay...
The user databases range from 9ms to 22ms...
Tempdb has 4 x 500MB files but its real size is already 2.5GB, which means it has grown past the estimated start size (2GB).
Since there are only 4 cores I'll change tempdb to 2 x 2GB files and see what happens...
Also, the main user database is 100GB and its file growth is set to... 1MB. A single sales operation makes the database grow 5 times... I'll set that to 200MB, which leaves "room" for almost 2 months of data.
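Roughly what I plan to run (the logical file names below are guesses; I'll verify the real ones in sys.master_files first):

ALTER DATABASE tempdb MODIFY FILE (NAME = N'tempdev',  SIZE = 2GB);
ALTER DATABASE tempdb MODIFY FILE (NAME = N'tempdev2', SIZE = 2GB);
-- And for the user database's growth increment (database/file names hypothetical):
ALTER DATABASE MyERP MODIFY FILE (NAME = N'MyERP_Data', FILEGROWTH = 200MB);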
Thanks,
Pedro
October 10, 2013 at 3:19 am
GilaMonster (10/9/2013)
60ms isn't bad. Not perfect, but way better than I've seen in many cases. If you're seeing an IO bottleneck on TempDB, splitting the user DB into multiple files isn't going to do much.
Read operations are much faster... 2.7ms...
Each of the 4 tempdb files has (one row per file):

File  IO stall read (ms)  # reads     IO stall write (ms)  # writes
1     3,162,979           1,232,588   77,075,513           1,332,084
2     3,310,405           1,231,180   76,980,693           1,331,643
3     3,221,457           1,231,815   76,948,258           1,331,944
4     2,897,618           1,231,330   76,979,508           1,330,562
These stats were taken from Sep 25th at 6PM till Oct 8th at 10AM... 12 and 1/2 days with 2 weekends (4 days with low ERP usage). 12.5 days = 1,080,000,000 ms... 30% of the time tempdb has been IO stalling for writes! Is this right, or since it's 4 different files is it just 1/4 of 30%?!
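A quick way to double-check the arithmetic (values copied from the figures above; just a scratch query):

SELECT 77075513.0 / 1332084            AS avg_write_stall_ms_per_write, -- roughly 58 ms, matching the ~60 ms figure
       (4 * 77075513.0) / 1080000000.0 AS summed_stall_fraction;        -- roughly 0.29, i.e. the ~30% is summed across all 4 files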
Thanks,
Pedro
October 10, 2013 at 4:37 am
Agreed with Jeff; it all depends on the design of the LUNs.
---------------------------------------------------
"Thare are only 10 types of people in the world:
Those who understand binary, and those who don't."
October 11, 2013 at 3:49 pm
Gail and Jeff hit the nail on the head in the first two replies.
With the benefit of all the posts ahead of mine, I suspect your actual question is closer to "My IOStall for writes is much slower than for reads; what can I do to improve performance?".
I can say that RAID5 parity with a modern controller is extremely unlikely to be the cause of the slowness; evidence from a TempDB (one data file, one log file, both on the same RAID5, with 6 dedicated SSDs) is that even with significant usage, average IOStall for reads is 2ms, and average IOStall for writes is also 2ms (2TB of total reads, 3TB of total writes since last restart).
I'd definitely look into whether your spindles are dedicated or not - if they're not, your measurements are necessary but insufficient. I'd also check into all the usual suspects first - wait types, index fragmentation/split IOs, filesystem (NTFS) fragmentation, etc.
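For the wait types, something along these lines is a starting point (the exclusion list here is heavily abbreviated; real scripts filter out many more benign waits):

SELECT TOP (10)
       wait_type, waiting_tasks_count, wait_time_ms, signal_wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN (N'SLEEP_TASK', N'LAZYWRITER_SLEEP',
                        N'BROKER_TASK_STOP', N'XE_TIMER_EVENT')
ORDER BY wait_time_ms DESC;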
October 11, 2013 at 4:04 pm
Nadrek (10/11/2013)
Gail and Jeff hit the nail on the head in the first two replies. With the benefit of all the posts ahead of mine, I suspect your actual question is closer to "My IOStall for writes is much slower than for reads; what can I do to improve performance?".
I can say that RAID5 parity with a modern controller is extremely unlikely to be the cause of the slowness; evidence from a TempDB (one data file, one log file, both on the same RAID5, with 6 dedicated SSDs) is that even with significant usage, average IOStall for reads is 2ms, and average IOStall for writes is also 2ms (2TB of total reads, 3TB of total writes since last restart).
I'd definitely look into whether your spindles are dedicated or not - if they're not, your measurements are necessary but insufficient. I'd also check into all the usual suspects first - wait types, index fragmentation/split IOs, filesystem (NTFS) fragmentation, etc.
Everything else is fine: no "panic" wait types, no index fragmentation (there's a job for that), and NTFS is no problem either (64K allocation units and the occasional defrag)...
It's easy to get 2ms on RAID5 with SSDs... Heck, I get 2ms without RAID5 on a single SSD...
These are 15K disks with very write-intensive software hitting tempdb... the other databases range from 5ms to 20ms, but tempdb is getting killed...
I'll just try adding a single SSD and moving tempdb there to see the new stall numbers...
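Roughly like this (drive letter is a placeholder, and the same MODIFY FILE is needed for each of the 4 data files; the move only takes effect after an instance restart):

ALTER DATABASE tempdb MODIFY FILE (NAME = N'tempdev', FILENAME = N'S:\TempDB\tempdev.mdf');
ALTER DATABASE tempdb MODIFY FILE (NAME = N'templog', FILENAME = N'S:\TempDB\templog.ldf');
-- SQL Server recreates the tempdb files at the new location on the next restart.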
Pedro