January 15, 2016 at 4:11 pm
We're looking at one of those for tempdb. I believe tempdb is in good shape with eight 30GB data files (all the same size, with the same autogrowth). Each data file, plus the log file, is on an individual NetApp LUN.
Since the number of records is growing over time, we're thinking that taking the tempdb traffic off of the NetApp might improve IO waits and network throughput for the user database data and log files.
With either solution, tempdb files would be on "drives" local to each node.
Right now tempdb is showing a 7.3 ms average read wait and a 1,400 ms average write wait. I suspect much of the write activity is UPDATE STATISTICS with FULLSCAN at night.
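For what it's worth, a minimal sketch of how those per-file averages can be pulled (this is just the standard DMV approach, not necessarily how our monitoring gets them):

-- Per-file average read/write stalls (ms) for tempdb, cumulative since the last restart.
SELECT  mf.name,
        mf.physical_name,
        vfs.num_of_reads,
        avg_read_stall_ms  = vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0),
        vfs.num_of_writes,
        avg_write_stall_ms = vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0)
FROM    sys.dm_io_virtual_file_stats(2, NULL) AS vfs   -- database_id 2 = tempdb
JOIN    sys.master_files AS mf
  ON    mf.database_id = vfs.database_id
 AND    mf.file_id     = vfs.file_id;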
January 15, 2016 at 9:45 pm
OK... and what is your question?
January 16, 2016 at 6:23 am
Any experience with either approach, good or bad? If so, which products? My CIO tells me the world of SSDs has changed since I last reviewed it and there are Enterprise-ready solutions now.
January 16, 2016 at 7:21 am
Indianrock (1/16/2016)
Any experience with either approach, good or bad? If so, which products? My CIO tells me the world of SSDs has changed since I last reviewed it and there are Enterprise-ready solutions now.
SSDs have definitely changed how enterprise storage systems work, though there is still a big price difference between SSDs and spinning disks. We use Fusion-io cards (the fastest SSDs you can find), which give sub-microsecond latencies with 250M IOPS. We also use an all-flash SAN for our databases; it depends on how much budget you have. However, do you have enough data to prove that slow disk is your issue? Just throwing SSDs at the problem might not solve your performance issue.
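As a rough sanity check (just a sketch using the standard wait-stats DMV, nothing vendor-specific), see how much of the instance's cumulative wait time is actually IO-related before spending on flash:

-- How much of the wait time since the last restart is IO-related?
-- Rough only: the denominator includes benign background waits, so the real share is higher.
SELECT  ws.wait_type,
        ws.wait_time_ms,
        pct_of_all_waits = CAST(100.0 * ws.wait_time_ms / t.total_wait_ms AS decimal(5, 2))
FROM    sys.dm_os_wait_stats AS ws
CROSS JOIN (SELECT SUM(wait_time_ms) AS total_wait_ms FROM sys.dm_os_wait_stats) AS t
WHERE   ws.wait_type LIKE 'PAGEIOLATCH%'
   OR   ws.wait_type IN ('WRITELOG', 'IO_COMPLETION')
ORDER BY ws.wait_time_ms DESC;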
January 16, 2016 at 7:42 am
Indianrock (1/15/2016)
We're looking at one of those for tempdb. I believe tempdb is in good shape with eight 30GB data files (all the same size, with the same autogrowth). Each data file, plus the log file, is on an individual NetApp LUN. Since the number of records is growing over time, we're thinking that taking the tempdb traffic off of the NetApp might improve IO waits and network throughput for the user database data and log files.
With either solution, tempdb files would be on "drives" local to each node.
Right now tempdb is showing a 7.3 ms average read wait and a 1,400 ms average write wait. I suspect much of the write activity is UPDATE STATISTICS with FULLSCAN at night.
I'm curious, please. How much money are you talking about for this?
--Jeff Moden
Change is inevitable... Change for the better is not.
January 16, 2016 at 7:49 am
What I do know is that our current web application is going to have to survive another few years while its replacement is written. That will include adding new customers and millions of new records. The three main client databases occupy over 5TB now, with tempdb at about 250GB. Money will be spent, but I'm not privy to the budget.
Memory on each of the current two cluster nodes is 512GB. My CIO's most recent response to me on this:
"Things have changed with SSDs. Consider that this wouldn’t be just some single consumer grade SSDs, but enterprise grade devices in RAID(with hot spares, etc.) on a NetApp SAN that monitors the health constantly.
We can put 1.5TB of RAM into the new HP DL560 SQL servers we bought. I think that’s another thing we should explore – how much going from 512GB to 1TB or 1.5TB would help."
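Before we commit to the RAM bump, a couple of quick checks I'm sketching out (standard counters and DMVs, nothing specific to our monitoring): Page Life Expectancy, and how the buffer pool is currently split across databases.

-- Page Life Expectancy (seconds). Low and falling values under load suggest memory pressure.
SELECT  object_name, counter_name, cntr_value AS ple_seconds
FROM    sys.dm_os_performance_counters
WHERE   counter_name = 'Page life expectancy'
  AND   object_name LIKE '%Buffer Manager%';

-- How the buffer pool is being divided up per database right now.
-- Note: scanning sys.dm_os_buffer_descriptors can take a while on a large-memory box.
SELECT  DB_NAME(database_id) AS database_name,
        buffer_mb = COUNT(*) * 8 / 1024
FROM    sys.dm_os_buffer_descriptors
GROUP BY database_id
ORDER BY buffer_mb DESC;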
January 16, 2016 at 9:37 am
I can't help but think that adding more memory to the system and putting the log files on non-volatile SSDs would be a larger benefit overall. Also, you might get somewhere between a 2x and 10x (highly optimistic) improvement with SSDs, but that pales in comparison to the very frequent 60x to more-than-1000x improvements that can be realized when you fix/optimize "critical" code. That usually dramatically decreases the amount of activity in TempDB as well. Even with a 5TB system, a 250GB TempDB seems a lot larger than I would have expected. Our current "money maker" system occupies about 1TB, and TempDB has never needed more than 8GB even though I have it (them... I have eight 2GB files) set to 16GB.
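If you want to see what that 250GB is actually being used for, a minimal sketch against the standard space-usage DMV (run it in tempdb) will split it into user objects, internal objects (sorts, hashes, spools), and the version store:

-- What is tempdb space being used for right now? Pages are 8KB, so page_count * 8 / 1024 = MB.
SELECT  SUM(user_object_reserved_page_count)     * 8 / 1024 AS user_objects_mb,     -- temp tables, table variables
        SUM(internal_object_reserved_page_count) * 8 / 1024 AS internal_objects_mb, -- sorts, hash joins, spools
        SUM(version_store_reserved_page_count)   * 8 / 1024 AS version_store_mb,
        SUM(unallocated_extent_page_count)       * 8 / 1024 AS free_mb
FROM    tempdb.sys.dm_db_file_space_usage;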
Still, it sounds like your CIO has cast the die. I'd be very interested in the before'n'after performance measurements once you good folks have put TempDB onto SSDs.
--Jeff Moden
Change is inevitable... Change for the better is not.
January 16, 2016 at 9:53 am
Thanks for the feedback. Changes to the ORM-generated SQL would no doubt make a big difference, as would creating a data warehouse so "report-type" queries aren't run against the OLTP system (currently the only system) during the business day. But, as is so often the case, massive hardware improvements are faster, and probably cheaper, than major code changes.
January 16, 2016 at 12:41 pm
Indianrock (1/16/2016)
Thanks for the feedback. Changes to the ORM-generated SQL would no doubt make a big difference, as would creating a data warehouse so "report-type" queries aren't run against the OLTP system (currently the only system) during the business day. But, as is so often the case, massive hardware improvements are faster, and probably cheaper, than major code changes.
Be wary of "ORM Generated SQL" because the "big difference" may be in the opposite direction of anything having to do with performance. I've found that it generates some of the worst performing code ever especially when it comes to correctly identifying column data types. I've even seen them convert an NVARCHAR(1) to an ASCII value using the ASCII() function for a CHAR(1) column killing any chance of an index SEEK. I've seen it return dozens of columns in nested selects to return just a single scalar value with implicit column conversions and some of the worst practices imaginable. It takes someone as concerned about performance using an ORM as it does with someone writing effective SQL to make it work correctly. Shifting to an ORM isn't the panacea of development that a whole lot of people would have you believe. I've even seen it where basic C.R.U.D. is terrible and I'm not just talking about 1 ORM in particular.
I know that very few people want to hear it, but all these hardware tricks and special software don't work as well as people would expect unless they go massively parallel, which usually only gets a maximum of about 30x (an optimistic maximum, at that), and that's with the understanding that you have to do a rewrite anyway to accommodate the new hardware.
It also doesn't take "major code changes" to get some pretty massive improvements. If you were to find and resolve your top 10-20 problems in each category of CPU time, Reads, and Duration (most of the Duration problems vanish when the other two do), you'd likely find that you did much better than any of the hardware improvements could or did.
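If it helps, a rough sketch of that kind of triage against the plan cache; change the ORDER BY to total_logical_reads or total_elapsed_time to rank by Reads or Duration instead of CPU:

-- Top 20 cached statements by total CPU since they were compiled.
SELECT TOP (20)
        qs.total_worker_time / 1000  AS total_cpu_ms,
        qs.total_logical_reads,
        qs.total_elapsed_time / 1000 AS total_duration_ms,
        qs.execution_count,
        SUBSTRING(st.text,
                  qs.statement_start_offset / 2 + 1,
                  (CASE WHEN qs.statement_end_offset = -1
                        THEN DATALENGTH(st.text)
                        ELSE qs.statement_end_offset
                   END - qs.statement_start_offset) / 2 + 1) AS statement_text
FROM    sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_worker_time DESC;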
I've also seen it where supposedly faster hardware has actually made the bad code much worse because of increased contention. Instead of getting a 2x-10x improvement, you could actually see no improvement or further degradation of performance (and, no, not speculating here... I've actually witnessed it happening at two companies).
If you have someone good at it, finding the bad code and fixing it isn't as expensive as most would think, and it has the long-term benefit of being rather bulletproof when it comes to performance and scalability.
--Jeff Moden
Change is inevitable... Change for the better is not.
January 16, 2016 at 1:35 pm
Clarification: the current system has been a C# .NET ORM application for years, so this would be changes to existing code, not a move to an ORM. I suspect development and management will want to extend the system's life while its replacement is written (without much investment in refactoring the current code).
The original developers (a talented bunch) are mostly gone, so maintenance is handled by an offshore dev/QA team with more limited abilities.
January 16, 2016 at 2:42 pm
Indianrock (1/16/2016)
Clarification: the current system has been a C# .NET ORM application for years, so this would be changes to existing code, not a move to an ORM. I suspect development and management will want to extend the system's life while its replacement is written (without much investment in refactoring the current code). The original developers (a talented bunch) are mostly gone, so maintenance is handled by an offshore dev/QA team with more limited abilities.
It doesn't matter whether it's a "change to" or a "currently is"; the problem is with the code, especially if it's ORM code. The exception might be the most incredibly simple C.R.U.D. code and, even then, there are no guarantees because of the problems I previously spoke of.
Based on the idea of "without much investment in refactoring the current code", I truly hope the additions to the hardware help as much as they think they will but, again, I suspect they won't. Let us know how it works out.
--Jeff Moden
Change is inevitable... Change for the better is not.
January 16, 2016 at 4:24 pm
I believe it was Wayne Sheffield who said "An ORM is the fastest way to slow down a database."
Then again, it may be done well, with the ORM calling stored procedures where the set-based code is implemented.
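Something along these lines, with hypothetical names throughout: the procedure holds the set-based query and the ORM simply maps to it.

-- Hypothetical procedure an ORM can be mapped to instead of generating SQL itself.
CREATE PROCEDURE dbo.usp_GetOpenOrdersByCustomer
    @CustomerID int
AS
BEGIN
    SET NOCOUNT ON;

    -- One set-based statement with sargable predicates; the ORM only supplies @CustomerID.
    SELECT  OrderID, OrderDate, TotalDue
    FROM    dbo.Orders
    WHERE   CustomerID  = @CustomerID
      AND   OrderStatus = 'Open';
END;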
January 17, 2016 at 3:59 pm
Are you running SQL Server 2012 SP3? There have been significant performance improvements to tempdb in the cumulative updates released since SP2.
This is a free solution that is worth testing to see if it helps your problem.
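If it's useful, a quick sketch for confirming the build before and after patching:

-- Current version, service-pack level, and edition of the instance.
SELECT  SERVERPROPERTY('ProductVersion') AS product_version,  -- 11.0.xxxx = SQL Server 2012
        SERVERPROPERTY('ProductLevel')   AS product_level,    -- RTM, SP1, SP2, SP3...
        SERVERPROPERTY('Edition')        AS edition;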
January 17, 2016 at 4:30 pm
Updating to SP2 CU 5 next weekend.
January 17, 2016 at 4:42 pm
Should help. I think the tempdb fixes were in SP2 CU1 or CU2.
http://www.brentozar.com/archive/2014/04/memory-ssd-tempdb-temp-table-sql-2014/