July 2, 2012 at 10:36 am
cdonlan 18448 (7/2/2012)
Convincing the powers-that-be to spend the necessary money can be a real challenge, but if you want something even more challenging try to convince them that with a little bit of process discipline they can achieve some degree of HA with little cost.
Well said. A little process discipline helps in a lot of ways.
July 2, 2012 at 12:49 pm
Bob Cullen-434885 (7/2/2012)
The company I do most of my work for DOES want the utopian 100% availability.
I have often found that what a company needs and what it wants aren't the same thing... :-D
"Technology is a weird thing. It brings you great gifts with one hand, and it stabs you in the back with the other. ...:-D"
July 3, 2012 at 12:40 pm
This is something that has to be evaluated on an application-by-application basis. As part of the risk assessment process every application should undergo, assess the need for high availability. Order processing might be critical, while accounts receivable can be down for a week with no real impact. Credit card processing for retail operations is critical; the online office supply ordering system, not so much. Let the business customers decide how much risk they are willing to accept, and how long they think they can run without a given app.
July 5, 2012 at 7:29 am
benjamin.keebler (6/29/2012)
No one wants to hear "Sorry, the Lottery is down" when there's a $100 million jackpot on the "line". 🙂
Any business with that kind of cash flow needs big up-time and can also afford to pay for it.
The probability of survival is inversely proportional to the angle of arrival.
July 30, 2012 at 2:16 pm
Since I work for a smaller company, they decided some data loss is acceptable rather than spending $$ on HA. We have DR and translog backups to fall back on, and everyone up the chain has agreed that one day's loss is acceptable. I know we would prefer to have failover, but it is too expensive to have idle servers sitting around.
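Just for reference, the translog piece in our case boils down to something like the following sketch (the database name, backup path, and schedule here are made-up placeholders, not our real setup):

-- Hypothetical example only: hourly log backups cap the worst-case loss at
-- roughly an hour, well inside the one-day tolerance the business signed off on.
BACKUP LOG [SalesDB]                                   -- placeholder database name
TO DISK = N'\\backupshare\SalesDB_log.trn'             -- placeholder path
WITH COMPRESSION, CHECKSUM, INIT;

-- Plus a nightly full so the log chain has something to restore onto:
BACKUP DATABASE [SalesDB]
TO DISK = N'\\backupshare\SalesDB_full.bak'
WITH COMPRESSION, CHECKSUM, INIT;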
July 30, 2012 at 10:09 pm
We work in the mortgage industry. Any data loss or unavailability is simply not acceptable. We went whole hog. We have local clusters with nearly instant failover. We have duplicate equipment at a DR site a couple hundred miles away, a dedicated "pipe" between them, and we use SAN replication to keep things current down nearly to the byte. If both local clusters fail, the DR site picks up in just a second or two. We also have a huge diesel generator at the local site and all power goes through a killer UPS. We didn't go cheap on backup drives or tape systems, either.
Of course, we didn't have 2012 available when all of this was set up (and we still haven't upgraded, but that's another story), but even with 2012, if a "super" natural disaster takes out the building and you don't have offsite hardware somewhere (maybe even on the (ugh!) cloud), you're dead.
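If we ever do make the jump to 2012, my understanding is that an Availability Group spanning the local and DR boxes would look something like the sketch below (server names, endpoints, and the database are made up for illustration, not our actual environment):

-- Rough sketch only: a SQL Server 2012 Availability Group with a synchronous
-- local replica and an asynchronous replica at the DR site. All names are placeholders.
CREATE AVAILABILITY GROUP [AG_Example]
FOR DATABASE [LoanDB]
REPLICA ON
    N'LOCAL01' WITH (
        ENDPOINT_URL      = N'TCP://local01.corp.example:5022',
        AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,   -- no committed data loss between local nodes
        FAILOVER_MODE     = AUTOMATIC),
    N'DR01' WITH (
        ENDPOINT_URL      = N'TCP://dr01.corp.example:5022',
        AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,  -- DR site a couple hundred miles away
        FAILOVER_MODE     = MANUAL);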
--Jeff Moden
Change is inevitable... Change for the better is not.