June 25, 2019 at 12:00 am
Comments posted to this topic are about the item All flash vs adaptive flash storage - which is right for my organisation?
June 25, 2019 at 1:40 pm
From the article:
Flash storage is known for enabling significant improvements in data processing operation speeds, allowing multi-terabyte databases to be stored "in-memory", with a read/write speed that is four times greater than HDD.
It's funny how good people think that is. It's actually not that good. When we moved from an older machine to a new one and we also moved from spinning disks on the SAN to a full-up SSD SAN, we only saw a 2X performance improvement and only on a few batch jobs. In most cases, there was no improvement.
I wasn't disappointed... it was actually better than I expected, because I expected to see no improvement on anything. Everyone else thought it was going to make a much larger difference.
The problem is that code is what it is. It's either good code or it's not and, if it's not, things like SSDs aren't going to make much of a difference.
Heh... oh yeah. I've seen people do handsprings because some of their 8-10 hour jobs now run in 4-5 hours. Not really a "big" win from what I can see, though... especially since ever-increasing scale will eat that meager gain up in a whole lot less time than people expect.
Don't get me wrong... I'll never turn down better hardware (especially network hardware)... but that's not where true performance gains are to be had. Even moving to MPP appliances will only give you up to 30X and that requires a fair bit of code rewriting to be able to use MPP.
If you really want performance out of your code, there's really only one place to find it... in the code. 2-4 times and even 30 times faster is a paltry gain compared to what you can usually do in the code, even on some relatively bad database designs.
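A toy illustration of that point (my own example, not from the post, using a hypothetical dbo.Sales table): both queries below return a running total, but the first does roughly n²/2 row reads while the second reads each row once, so the code-level rewrite gives a speedup that grows with the data instead of a fixed hardware multiplier.

```sql
-- Assumes a hypothetical table: CREATE TABLE dbo.Sales (SaleID int, Amount money);

-- Slow: a "triangular join" re-aggregates the whole prefix for every row.
SELECT s1.SaleID,
       (SELECT SUM(s2.Amount)
          FROM dbo.Sales s2
         WHERE s2.SaleID <= s1.SaleID) AS RunningTotal
FROM dbo.Sales s1;

-- Fast: a windowed SUM (SQL Server 2012+) makes a single pass.
SELECT SaleID,
       SUM(Amount) OVER (ORDER BY SaleID
                         ROWS UNBOUNDED PRECEDING) AS RunningTotal
FROM dbo.Sales;
```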
--Jeff Moden
Change is inevitable... Change for the better is not.
June 26, 2019 at 3:55 pm
I agree with Jeff that the improvements of adaptive flash storage (AFS?) and AFA (all-flash array) might be highly overrated if the application design plain and simple sucks. What's your 5 TB flash cache gonna do when it hits a single 10 TB, mostly nvarchar(max), table? It's simply gonna crawl to death. So usually it's not just the application: the rightsizing of AFS in particular usually gets done wrong, because the planning isn't done by someone who knows the hard-hitting SQL DBs but by someone looking at sales pptx.
Before I would ever consider going with AFS, I'd roll out a few Optane flash cards and place TempDB on them, since TempDB is usually the first choke point, before the transaction logs get to saturate anything. If that still doesn't help, I'd rather consider a small AFA, sized just for your TempDBs and transaction logs, than AFS. I've seen AFS arrays break down performance-wise so hard on SQL Servers that it begged the question of why the move from an HDD array to an AFS array was made at all.
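For reference, relocating TempDB onto such a card is a one-time change; a minimal sketch (logical file names are the defaults, the O:\ path is a placeholder for the Optane/NVMe volume):

```sql
-- Check the current logical names and locations first:
--   SELECT name, physical_name FROM tempdb.sys.database_files;

-- Repoint the tempdb files at the fast volume (placeholder path):
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = 'O:\TempDB\tempdb.mdf');
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = 'O:\TempDB\templog.ldf');

-- The new locations take effect on the next SQL Server service restart.
```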
And things don't end at rightsizing your caching for AFS: what are your interconnects to the storage doing, throughput- and latency-wise? […]
Best example from today regarding performance: an old SSIS 2012 package with SQL Server Authentication was simply upgraded to SSIS 2017 and Windows Authentication (because yes, I fixed the SPNs and delegation), and the package ran in 1:21h instead of 2:20h. This just boils down to "do I have a nice AD token to throw at xy for authentication, or do I need to grab those credentials and log in?"