May 5, 2009 at 11:03 pm
There have been reports of significant performance degradation over time with SSDs, especially under heavy write activity. The theory is that this is due to internal fragmentation of the files on the drive caused by the wear-leveling algorithms in the drive firmware. This may be worth testing for yourself before making a full commitment to SSDs in your enterprise. This article details the author's findings using a laptop SSD: http://www.pcper.com/article.php?aid=669&type=expert&pid=1
May 6, 2009 at 1:03 am
Hi,
I think that the degradation issue is one that mainly affects mainstream SSDs and not the enterprise versions.
From what I've read, file fragmentation will not slow down an SSD at all, because the seek latency that makes fragmentation costly on spinning disks just isn't there. The write latency comes from how SSDs perform writes: the cell being written to has to be erased before it can be programmed again, so each "real" write command effectively costs two operations.
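To picture that overhead, here's a toy sketch in Python (a single-cell model with made-up operation counts, purely for illustration, not how any particular controller actually behaves):

# Toy model of why rewriting flash costs extra: a cell that already
# holds data must be erased before it can be programmed again.
class FlashCell:
    def __init__(self):
        self.erased = True   # fresh cells start out erased
        self.ops = 0         # physical operations performed

    def write(self, value):
        if not self.erased:
            self.ops += 1    # erase pass needed first
            self.erased = True
        self.ops += 1        # the actual program pass
        self.value = value
        self.erased = False

cell = FlashCell()
cell.write("A")              # first write: 1 operation
cell.write("B")              # rewrite: erase + program = 2 operations
print(cell.ops)              # 3 physical operations for 2 logical writes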
I don't agree with Steve's opinion that it will take quite a few years before SSD catches up with HDD in both capacity and deployment. SSDs are already available at 512GB, and all the SSD companies are putting in the hours to get and keep pole position on both performance and capacity.
I reckon that we'll see the first 1TB drives sometime next year (if not this year) and that by 2011 you'll see SSD taking huge chunks out of the HDD market. The savings in energy and I/O per $ are just too attractive.
I still want a FusionIo, though; those things make SSDs look like normal HDDs!
Regards,
WilliamD
May 6, 2009 at 1:23 am
So long as it's not just another Flash in the pan 😉
Sorry, but someone had to state the obvious..... I'll get my coat.
Semper in excretia, suus solum profundum variat
May 6, 2009 at 3:00 am
The very fact that these things come with wear-leveling algorithms is surely a very large hint that flash memory at any price suffers from degradation over time. Every report I have ever seen says that this wear is limited to WRITE operations, while read endurance is effectively unlimited, so flash should never be used for tempdb, only for databases that are effectively read-only.
May 6, 2009 at 4:19 am
Actually, 1 TB flash drives are not quite a few years away. Texas Memory Systems currently sells a product called the RamSan 500, which offers up to 2 TB of flash-based SSD with 64 GB of RAM-based cache. My company runs its database off one of these, as well as two of the faster, all-RAM-based 128 GB RamSan 400s, and our experience with them has been great.
May 6, 2009 at 6:45 am
The article that Steve links to is very interesting (Thanks, Steve).
I think that flash hard drives offer a solution to a problem that's been dogging desktop and laptop computers for a long time: boot-up speed. Some have suggested that the time spent booting could be greatly reduced if we could come up with a better way to make the operating system available. Early suggestions have included putting simple versions of the OS in ROM. Most users want to boot up and immediately do something like check email. Under this approach the user gets a quick boot and checks email while the rest of the OS is loaded into memory in the traditional way.
ROM sounds good at first, but it's hampered by the need to occasionally update the OS. Can you imagine the losses that would be incurred if a major manufacturer released a machine with OS in ROM and someone discovered a security or virus vulnerability? You'd have to replace hardware all over the place. Answer: keep a simplified version of the OS separated off in flash. Updates can be sent out whenever needed. This takes advantage of the fast read times of flash while minimizing the downside - degradation by too much writing.
___________________________________________________
“Politicians are like diapers. They both need changing regularly and for the same reason.”
May 6, 2009 at 8:08 am
I don't think it's all that many years away. Prices are coming down like crazy, and capacity and speed and reliability are way up every year.
I wouldn't be surprised if it's less than five years, probably more in the 2011 area, where we start seeing SSDs as a real alternative for small and medium business servers. Might even be next year.
I've been watching flash/SSD technology since the 90s, and it's accelerating faster than HDD technology, and has been for a while. Not too much more catching up to do.
- Gus "GSquared", RSVP, OODA, MAP, NMVP, FAQ, SAT, SQL, DNA, RNA, UOI, IOU, AM, PM, AD, BC, BCE, USA, UN, CF, ROFL, LOL, ETC
Property of The Thread
"Nobody knows the age of the human race, but everyone agrees it's old enough to know better." - Anon
May 6, 2009 at 9:06 am
I might be wrong; in fact I'd expect I am. I know Texas Memory and Fusion are pushing hard and growing their product lines.
I have no idea who to believe in terms of degradation. A couple of friends have these in enterprise servers, even for tempdb, and have had them for months, some close to a year, with no issues: no degradation that they can see, and they say they've been looking.
There are some laptop reviews that have shown degradation, but is that the technology or that particular drive? Could be either one. Or both.
I'm not sold that this will be adopted that quickly. The same was said about VMs, 64-bit, and other technologies, and while adoption is growing, I'm not sure it's anywhere close to large-scale deployment.
May 6, 2009 at 9:14 am
I'm sure this technology will continue to evolve, and eventually we'll have 1TB sized flash memory drives that live in our servers, and possibly even laptops or phones. That's quite a few years away....
I think the option to have 1TB SSDs in servers will be real within just a few years, possibly as early as next year. By "real", I mean cost-effective for small/medium businesses, as opposed to specialized industries like EVE Online.
That won't mean immediate, wide-scale adoption, but I think it'll be faster and more wide-scale than might be expected, so long as prices and capacities continue on their current trends.
As far as reliability goes, backup to high-capacity, low-cost platters (standard HDDs) is probably a very viable option. Possibly the pattern will be SSDs as primary and HDDs as backup, instead of HDDs as primary and tape as backup.
Actually, now that I think about it, that's a pretty close parallel.
- Gus "GSquared", RSVP, OODA, MAP, NMVP, FAQ, SAT, SQL, DNA, RNA, UOI, IOU, AM, PM, AD, BC, BCE, USA, UN, CF, ROFL, LOL, ETC
Property of The Thread
"Nobody knows the age of the human race, but everyone agrees it's old enough to know better." - Anon
May 6, 2009 at 9:43 am
NAND flash is typically guaranteed for 1,000,000 erase/write cycles per block. That doesn't mean you can only write to the drive 1,000,000 times: the flash is organized as several pages per block, and the wear-leveling algorithms try to spread writes evenly among all of the blocks. Using an SSD for something like tempdb, which is very write-intensive, is about the quickest way to wear one out. Using an SSD for something like an operating system, which is read often and written to rarely, is the best way to avoid wear issues.
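To make the wear-leveling idea concrete, here's a minimal Python sketch (the block count and the "pick the least-worn block" policy are simplifications for illustration, not how any real controller works):

import random

NUM_BLOCKS = 8
erase_counts = [0] * NUM_BLOCKS   # erases endured by each physical block
mapping = {}                      # logical page -> physical block

def write(logical_page):
    # redirect the write to the least-worn block so no block races ahead
    target = min(range(NUM_BLOCKS), key=lambda b: erase_counts[b])
    erase_counts[target] += 1     # rewriting a block costs an erase cycle
    mapping[logical_page] = target

# hammer a handful of "hot" logical pages, tempdb-style
for _ in range(10000):
    write(random.randrange(4))

print(erase_counts)               # wear ends up roughly even across all blocks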
The fact that there is no mechanical head to move and no spinning platter to wait for greatly improves seek time over an HDD, so fragmentation is not an issue. In fact, defragmenting an SSD should be avoided, to conserve erase/write cycles and extend the life of the flash.
May 6, 2009 at 9:57 am
I'm not sure I agree with Gus about next year, but I could be wrong. There's still a substantial price premium on the drives.
I bought a 1TB HDD for US$147 yesterday. A 64GB SSD from Dell is $559 today, and I see CDW selling an "Enterprise 64GB drive" for $2k.
That's a lot of ground to make up.
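To put rough numbers on that gap (using the prices quoted above; the per-GB arithmetic is purely illustrative):

hdd_per_gb        = 147 / 1000    # ~$0.15/GB for the 1TB HDD
ssd_per_gb        = 559 / 64      # ~$8.73/GB for the 64GB Dell SSD
enterprise_per_gb = 2000 / 64     # ~$31.25/GB for the enterprise drive

print(round(ssd_per_gb / hdd_per_gb))          # ~59x the HDD price per GB
print(round(enterprise_per_gb / hdd_per_gb))   # ~213x for the enterprise part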
However, I do think Gus is right that at some point we'll see these used as primary storage for many servers. With slowed refresh cycles (4-5 years instead of 3) and the economy, not to mention the need for the technology to evolve, I still think we're talking 8-10 years before it's in lots of SMBs.
May 6, 2009 at 10:09 am
Steve, we're actually agreeing on all points.
Judging by the price/capacity curve on SSDs over the last 10 years, I think viability for SMBs as a general replacement for HDDs is probably 2-3 years off. I'm saying it could be as early as next year, but that's more of a hedge-your-bets statement. After all, December of next year is a year and a half off, and that's a full doubling cycle for solid-state circuitry, which means it's in the realm of possibility.
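Very roughly, that doubling-cycle reasoning looks like this (the starting price comes from the Dell drive quoted above, and the halve-every-18-months assumption is just for illustration):

# If SSD $/GB roughly halves each ~18-month doubling cycle, the next
# few cycles from today's ~$8.73/GB look like this:
price_per_gb = 559 / 64
for cycle in range(1, 5):
    price_per_gb /= 2
    print(f"{cycle * 18} months: ~${price_per_gb:.2f}/GB")
# prints 18 months: ~$4.37/GB ... 72 months: ~$0.55/GB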
But, even with "available and realistic" in 2-3 years, that means it'll just trickle into systems at that point. It's not a sexy enough technology that it'll get any sort of immediate market percentage. That means real market penetration will pretty much be over the replacement cycle of servers and/or SANs and/or internal HDDs. So, 8-10 years for them to be common enough to matter to DBA careers is pretty real.
I think it might be faster than that, but I wouldn't want to put any real odds on it. I think it'll be at the lower end of those figures (closer to 2 years than 3 for viability and closer to 8 years than 10 for at least semi-ubiquity), but not much tighter than that.
- Gus "GSquared", RSVP, OODA, MAP, NMVP, FAQ, SAT, SQL, DNA, RNA, UOI, IOU, AM, PM, AD, BC, BCE, USA, UN, CF, ROFL, LOL, ETC
Property of The Thread
"Nobody knows the age of the human race, but everyone agrees it's old enough to know better." - Anon
May 6, 2009 at 10:32 am
I think we agree, and you've stated it better than I did.
May 6, 2009 at 11:26 am
I came across this a couple of months ago.
It's pretty cool and I wish I could afford something like it.