October 28, 2012 at 11:56 am
Greetings to all --
I have a brand new DELL 720 server due to go into production in a few weeks. This weekend, I migrated all databases to the new server, and am running some tests. The drive layout mimics the old, but the new server is more powerful and has more resources.
Original Server:
SQL Server 2008 R2 Enterprise RTM running on Windows 2008 R2 Enterprise SP1
Dell 620
32GB RAM
28GB allocated to SQL
all data files, logs and tempdb reside on the same SSD drive (did I just hear you groan?)
not sure about the caching for the RAID controller, but I have a call into the data center to verify this
Power options - set to High Performance
New Server:
SQL Server 2008 R2 Standard SP2 running on Windows 2008 R2 Enterprise SP1
128 GB RAM
64GB allocated to SQL Server
Dual Socket Six Core Intel Xeon E5-2640 2.5GHz
all data files, logs and tempdb reside on the same SSD drive
The SSD volume consists of 4 SATA3 6Gb/s drives in RAID 5 with write-back caching enabled
Power options - set to High Performance
Crystal Disk Benchmark has much higher numbers all around vs. the original server
I did try to run CPU-Z on the new server, but it hung and I had to have the data center reboot it.
The CHECKDB time is much worse on the new server: 23 minutes on the old vs. 34 minutes on the new, nearly 50% longer. There is zero activity on this server at the moment, so nothing else is competing for resources.
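For anyone wanting to reproduce the comparison, a repeatable way to capture the elapsed time is shown below (illustrative only; 'MyDatabase' is a placeholder for the actual database name):

```sql
-- Report elapsed/CPU time for the CHECKDB run in the Messages tab
SET STATISTICS TIME ON;
DBCC CHECKDB (N'MyDatabase') WITH NO_INFOMSGS, ALL_ERRORMSGS;
SET STATISTICS TIME OFF;
```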
Here are the timings I'm getting:
4 TempDB files, equal size/growth
34:33 rebooted, 59GB free memory
I changed to a single TempDB file, and restarted the SQL service
34:39 single TempDB file
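For reference, the equal-size/equal-growth multi-file tempdb layout can be set up along these lines (a sketch only; the file names, sizes, growth increments, and the T:\ path are placeholders, not the actual values used):

```sql
-- Resize the primary tempdb data file, then add equally sized files
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, SIZE = 4096MB, FILEGROWTH = 512MB);

ALTER DATABASE tempdb
ADD FILE (NAME = tempdev2, FILENAME = 'T:\TempDB\tempdb2.ndf',
          SIZE = 4096MB, FILEGROWTH = 512MB);
-- repeat ADD FILE for tempdev3 and tempdev4, then restart the SQL service
```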
Anything I'm missing here?
Thanks for your help --
sqlnyc
November 2, 2012 at 5:45 am
Check whether you have a battery on your RAID controller.
November 2, 2012 at 7:13 am
Check your IO throughput.
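One way to gauge per-file IO latency from inside SQL Server is the virtual file stats DMV (an illustrative query; the stats are cumulative since the last service restart):

```sql
-- Average read/write latency per database file since the last restart
SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.physical_name,
       vfs.num_of_reads,
       vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS avg_read_ms,
       vfs.num_of_writes,
       vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
  ON mf.database_id = vfs.database_id
 AND mf.file_id     = vfs.file_id
ORDER BY avg_read_ms DESC;
```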
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
November 2, 2012 at 9:47 am
Just a clarification. By SSD, I assume you mean SQL Server drive, but that's not the acronym we use. SSD is usually solid state drive these days.
However, if things are running slower, I'd check the IO system as Gail suggested.
November 2, 2012 at 11:05 am
I have seen that happen many times, and it was always due to poor IO performance.
You would think that newer would be faster, but that is often not the case. I have seen many brand new servers where the IO performance was horrible.
I usually do extensive IO benchmarking using SQLIO before I even install SQL Server. There are many articles about how to do this posted here and other places.
An important thing to be aware of with IO benchmarking is that you need to make your test files big enough to overcome the size of the RAID cache, so that you are seeing the true performance of the drives. You also need to test both reads and writes, random and sequential, at various IO block sizes.

Most IO testing software concentrates on 4K blocks, which is good for file servers, and defaults to small test files, like 100 MB. SQL Server does IO in much larger blocks, usually 64K. I am usually most interested in the performance of 64K reads and writes, especially random IO, and I like to use very large test files, 20GB to 100GB, especially if it is on a SAN with a large cache.
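A sketch of the kind of SQLIO run described above (the T:\ path and 20GB file size are assumptions; -k is read/write, -s duration in seconds, -f access pattern, -o outstanding IOs, -b block size in KB, -LS collects latency stats):

```bat
REM param.txt contents (2 threads, all CPUs, file size in MB = 20 GB):
REM   T:\sqlio_test.dat 2 0x0 20480

REM 64K random reads for 120s with 8 outstanding IOs
sqlio -kR -s120 -frandom -o8 -b64 -LS -Fparam.txt

REM 64K random writes
sqlio -kW -s120 -frandom -o8 -b64 -LS -Fparam.txt

REM 64K sequential reads and writes
sqlio -kR -s120 -fsequential -o8 -b64 -LS -Fparam.txt
sqlio -kW -s120 -fsequential -o8 -b64 -LS -Fparam.txt
```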
November 2, 2012 at 4:35 pm
Thanks to all who have replied.
Steve - by SSD I mean Solid State Drives.
I did run SQLIO when the server was first built. But then I discovered that the Solid State Drives used for the RAID array had the wrong interface (3Gb/s instead of 6Gb/s).
So they built a new RAID array with new drives, and I tried to run SQLIO again. But even a minimal SQLIO test, using a 20GB file, creates a test file that keeps growing until it completely fills the drive.
Anyone seen this behavior before?
Thanks again to everyone.
sqlnyc