February 20, 2005 at 11:44 am
OK, I cheated a little and used the same editorial for this one and Database Daily. I just liked it 🙂
I was reading a couple of weeks ago about how grid computing and licensing are at odds right now. In general, current software licenses would require you to license all the machines in your grid, even when you aren't using them at capacity. In particular, the article seems to take aim at Oracle's "no change in licensing" policy with the advent of grid computing. So if you want a 50-node grid, you'd need 50 licenses for the database, even if you only use 10 nodes most of the time and only need the other 40 at peak times, like end of quarter. (Any cheers here?)
I'm somewhat surprised that IBM is mentioned here in the software licensing model, though I have to admit I'm not sure how that works. I do know that when I worked for PeopleSoft, we bought some IBM P690 series machines specifically for the capacity on demand feature. I think we purchased a 690 with 24 CPUs and 56GB of RAM; however, the box actually had 32 CPUs and 96GB of RAM. Think of this as a big PC that runs virtual machines. In the version of AIX we ran, we could add or remove CPUs from one of our "servers" (virtualized from the hardware) in real time, and the same for memory. This was really useful in the end-of-quarter periods, when a heavy load from entering data as well as running financial reports hit at the same time. We could bump our 18-CPU database server with 32GB of RAM to 22 CPUs and 48GB of RAM for a few days.
What's even more interesting about this scenario is that we wouldn't even have to shut down the other servers running on this machine; I think we had three set up at any one time for various purposes. By including the extra CPUs in the machine, we could "rent" them from IBM for a weekend or a week, along with some extra memory, and increase the capacity of our box for a short period of time. I think it was expected that we'd rent this stuff for 4-6 days each quarter, or 20-some days a year. Given that the break-even point for purchasing this extra hardware was around 90 days, we're getting almost five years' worth of peak capacity at a reduced price.
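To put rough numbers on that, here's a minimal sketch of the break-even math. The 90-day break-even and the roughly 20 rental days a year come from our situation above; the rest is just arithmetic.

```python
# Back-of-the-envelope math for capacity-on-demand rental.
# Figures taken from the scenario above; this is illustration only.

breakeven_days = 90          # renting for this many days costs about what buying would
rental_days_per_year = 20    # roughly 4-6 peak days per quarter

years_of_peaks_covered = breakeven_days / rental_days_per_year
print(f"At {rental_days_per_year} rental days/year, it takes about "
      f"{years_of_peaks_covered:.1f} years before renting costs as much as buying.")
# -> about 4.5 years of peak capacity before the rental bill matches the purchase price
```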
Just imagine if you could buy an ES7000 with 4 CPUs for everyday use, at a reasonable price, of course, and then for those peak times, someone would show up with the extra 8, 16, or 24 CPUs within a few hours when you needed them. And you'd only spend a fraction of what you'd have spent buying the hardware outright.
Now I'm sure PeopleSoft had some deal with IBM for the software side of this: AIX, DB2, etc. But for the average company, I'm sure that IBM, Microsoft, and others could come up with some scheme for "renting" extra licenses for CPUs that makes sense. The current models, which require you to buy full licenses for mostly unused hardware, are just ridiculous; they're short-sighted and customer unfriendly. Just imagine if SQL Server could be locked down by CPU and you could "license" your Windows 2003 server and SQL Server to run on only 2 of your 4 CPUs. Then if you thought you needed the extras for a "peak" time, they'd send you a "3 day key" for some amount. They could even go further and license memory as well. Need another 32GB of RAM? It's $29.99 a day!
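Just to sketch how those "peak key" economics might look, here's a quick comparison of renting licensed capacity versus buying full licenses up front. Every number in it is invented for illustration; nothing like this actually exists in SQL Server licensing today.

```python
# Hypothetical cost comparison: renting extra licensed CPUs for peak days
# vs. buying full per-CPU licenses outright. All prices are made up.

full_license_per_cpu = 20_000     # assumed one-time per-CPU license cost
rental_per_cpu_per_day = 100      # assumed daily rate for a short-term "peak key"
extra_cpus = 2                    # CPUs only needed at peak times
peak_days_per_year = 20           # e.g. a few days each quarter
years = 3

buy_cost = full_license_per_cpu * extra_cpus
rent_cost = rental_per_cpu_per_day * extra_cpus * peak_days_per_year * years

print(f"Buy full licenses up front: ${buy_cost:,}")
print(f"Rent for peaks over {years} years: ${rent_cost:,}")
```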
It's the solution to Microsoft's licensing schemes moving forward! Rent "software" capacity on your excess hardware! And my "licensing fee" for the idea will be low, just $0.01 per renter 🙂
Of course, you might not need that processing power, seeing as how I'm sure none of you enter your time or expenses at the last minute 🙂
Steve Jones
February 21, 2005 at 7:28 am
I've been watching the grid concept with some interest and have been wondering how some of the licensing issues were going to be addressed. You made some good points. The idea of 'renting' excess hardware as needed is interesting.
Another thing that is happening with grids and distributed computing is the renting of excess capacity to the research community. It's conceivable that an economic model could be built around selling excess capacity during low-utilization periods and retaining it for internal use during high periods, such as quarterly reporting. There are already companies trying to build a business around this concept.
What is really interesting about this model is that it would ultimately commoditize processing power, making it much like oil or electricity. Taken to the logical extreme, there would only be providers of processing power and consumers that rent the CPU cycles as needed. There is already a "movement" that wants a worldwide grid. The security implications have so far created huge barriers to such a massive virtual machine (hmmmm ... SkyNet?), so it appears that grids, for now, will be limited to intranets. But it is a very interesting field.
Bob
SuccessWare Software
February 22, 2005 at 6:48 am
Very interesting article. When IBM came out with capacity on demand, I thought that was a great idea; I don't think IBM advertises it well enough, though. Oracle continues to pump grid computing in 10g like it is the best thing since sliced bread, but to me it looks extremely expensive. It is my understanding that you have to buy an Oracle license (which is extremely expensive) for all of these dual 1.1 gig Windows servers we have here, for example, in order to use them for grid computing. The license is more expensive than the entire server hardware! They need to change the licensing model to be more like IBM's, in my opinion. We have a lot of Windows servers with low CPU utilization, but to license them for Oracle grid would be more expensive than to just buy more CPUs in our P570 and use them all the time for the Oracle databases!
February 26, 2005 at 3:29 pm
IBM's on-demand computing is not the same thing as a grid. On demand is when you buy a machine that, as Steve mentioned, actually has more capacity than you expect to use on a daily basis. When you do need the capacity, IBM will turn it on, for a price, and it ain't cheap. Depending on the server you buy and how much extra horsepower is available, the break-even point tends to run between 30 and 90 days. But that is only looking at hardware costs.
Both Oracle and IBM force you to license their products for however many CPUs you will ever want to use in the LPAR (logical partition) on which you run their software. So, if you have 24 CPUs total, and you create two 8-CPU LPARs with 8 CPUs in reserve for "on demand" needs, you need to license Oracle or DB2 for 16 CPUs (not cheap) even though you only use 8 on a day-to-day basis. This is one of the big reasons we decided against having any on-demand capacity and just sized our P690 to handle anticipated peak loads. By the time we figured in the software licensing, IBM's hoopla about "on demand" was more smoke than fire, at least in our situation...
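Here's a minimal sketch of that licensing math. The CPU counts come from the example above; the per-CPU license price is purely an assumption for illustration.

```python
# LPAR licensing math: you license the maximum CPUs the LPAR could ever use,
# not what it actually uses day to day. The price below is an assumption.

price_per_cpu = 40_000        # assumed per-CPU license price, for illustration only
cpus_used_daily = 8           # CPUs the LPAR runs on day to day
cpus_in_reserve = 8           # "on demand" CPUs the LPAR could grow into

licensed_cpus = cpus_used_daily + cpus_in_reserve   # 16 CPUs must be licensed
print(f"Licensed CPUs: {licensed_cpus}, cost: ${price_per_cpu * licensed_cpus:,}, "
      f"even though only {cpus_used_daily} are in use on a normal day.")
```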
Oracle's grid computing is not much more than a facelift and a new paint job on their Parallel Server concept from the 8i and 9i days. It is a "shared disk" clustering solution that, despite the marketing claims, does not scale well beyond 2-3 nodes in anything but a read-only environment. Beyond that, the communication between nodes to coordinate locks becomes a real, and insurmountable, bottleneck.
This is why you haven't seen MS and IBM talking about grid computing. They have both signed on to "shared nothing" clustering, and I don't see that changing anytime soon.
/*****************
If most people are not willing to see the difficulty, this is mainly because, consciously or unconsciously, they assume that it will be they who will settle these questions for the others, and because they are convinced of their own capacity to do this. -Friedrich August von Hayek
*****************/
February 27, 2005 at 9:30 am
IMHO, grid computing is at LEAST a few years away from prime time. First, as suggested by the above replies, every company seems to have a different idea of just what grid computing is. The licensing issue is huge (e.g., MS SQL Server 2005 Enterprise is $25K per CPU; imagine getting that past your CFO on a 16-CPU system, which works out to $400K in licenses alone).
But, again, IMHO, I see it as inevitable. The fact that probably 50% or more of hardware resources go unused because the lights go out at 5PM is an extravagance that the bean counters won't allow to survive. At some point, organizations will start finding ways to tap this unused pool of computational power. That will most likely be some form of virtual machines on top of a grid of hardware.
When they solve the database concurrency problems, then I'll really start to take notice.
Bob
SuccessWare Software