April 3, 2013 at 9:36 pm
Comments posted to this topic are about the item Development, Operations, or Accounting
April 4, 2013 at 2:59 am
Steve Jones - SSC Editor (4/3/2013)
If I can't count on better services than I get in-house, I'm not sure there are many advantages to moving.
Well said that man.
April 4, 2013 at 3:29 am
I doubt there's going to be much debate on this editorial, Steve. Expect to be Warnocked.
Semper in excretia, suus solum profundum variat
April 4, 2013 at 6:18 am
Yeah, what he said.
There are no special teachers of virtue, because virtue is taught by the whole community.
--Plato
April 4, 2013 at 8:02 am
Steve Jones - SSC Editor (4/3/2013)
If I can't count on better services than I get in-house, I'm not sure there are many advantages to moving.
For the sake of discussion, I'll play devil's pedantic advocate... "better services" and "many advantages" might be subjective. 🙂
You could buy hardware, pay licensing costs, cover additional backup time and storage, and pay a DBA's salary to keep services in-house. You might even be able to find a DBA who is 100% dedicated to keeping everything running optimally. I have no idea how to estimate the true TCO for this setup.
You could make almost all of this someone else's problem for a fraction of that cost. Again, I'm not familiar with the true costs of cloud services under real-world usage patterns, but let's call it somewhere between 10% and 50%. I understand the initial hardware purchase is depreciated, but the DBA salary is probably flat or rising over the years, as is the amount of data to back up, etc.
What is the percentage of downtime from a cloud provider? 1%? 5%? Now it's a matter of expected utility (OK, that's a game theory term for "bet" or "gamble"). Is it worth 10x the cost to reduce the 1% event to 0%, or 2x the cost to reduce 5% to 0%? That also, perhaps unrealistically, assumes the in-house failure rate is 0%; the real question might be: is it worth 10x the cost to reduce the 1% failure rate of the big, impersonal cloud to the 0.5% failure rate of an extra-careful person I know?
If the cost of the 1% event exceeds the difference in maintenance cost for the in-house solution, the choice is clear. I imagine the real cost of that 1% downtime is frayed nerves for our stakeholders, and that the lost revenue for a small-to-mid-size business is less than the cost of guarding against it.
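Just to make that gamble concrete, here's a rough back-of-the-envelope sketch in T-SQL. Every figure in it is a made-up placeholder (costs, outage probabilities, impact), not a real quote from any provider:

-- Expected-cost comparison; all numbers are assumptions for illustration only.
DECLARE @InHouseAnnualCost money        = 150000; -- hardware + licensing + DBA salary (assumed)
DECLARE @CloudAnnualCost   money        = 30000;  -- assumed ~20% of the in-house figure
DECLARE @InHouseOutageProb decimal(5,4) = 0.005;  -- assumed 0.5% chance of a serious outage per year
DECLARE @CloudOutageProb   decimal(5,4) = 0.01;   -- assumed 1% chance per year
DECLARE @OutageCost        money        = 500000; -- assumed business impact of that outage

SELECT InHouseExpectedCost = @InHouseAnnualCost + @InHouseOutageProb * @OutageCost,
       CloudExpectedCost   = @CloudAnnualCost   + @CloudOutageProb   * @OutageCost;

If the two expected costs come out close, the extra money is mostly buying peace of mind.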
I know a DBA isn't supposed to roll dice; however, in many cases there needs to be a balance between acceptable risk and cost to mitigate those risks.
April 4, 2013 at 11:19 am
Mike Dougherty-384281 (4/4/2013)
... in many cases there needs to be a balance between acceptable risk and cost to mitigate those risks.
Nice!
Peace of mind is not worth spending half a million dollars to solve a problem that costs $50 and only happens once in a lifetime.
Not all gray hairs are Dinosaurs!
April 4, 2013 at 11:26 am
Mike Dougherty-384281 (4/4/2013)
Steve Jones - SSC Editor (4/3/2013)
If I can't count on better services than I get in-house, I'm not sure there are many advantages to moving.
For the sake of discussion, I'll play devil's pedantic advocate... "better services" and "many advantages" might be subjective. 🙂
...
Good points. You may be right on some of these; after all, I've had SSL certs expire in-house because our admin wasn't paying attention.
I'm not sure how to guess, but if the service/support/performance isn't better, or much better, I'd rather take my chances in-house most of the time, mostly because there I can work to improve things.
April 4, 2013 at 8:47 pm
One of the things that's usually implied, or even explicitly stated, about using cloud services such as Azure is that you no longer need the expense of a DBA. While you may think you can get away without a DBA, you're going to have many of the same problems as you would on your own systems; you just don't know it yet.
For example, if you have some performance-challenged code that uses a huge number of resources because someone simply used DISTINCT to get past an accidental many-to-many join, it won't cost you much on your own hardware. Do the same thing on Azure, and you're going to be paying the big bucks. A good DBA will not only find those problems, a good one will also know how to fix the code. A great one will prevent the problem from occurring in the first place by mandating code reviews and performance tests and putting a set of standards in place.
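To illustrate (purely hypothetical table and column names, not anything from a real system), the anti-pattern and the fix look something like this:

-- The accidental many-to-many: the join key is too loose, so every detail row
-- pairs with every shipment row for the order, and DISTINCT hides the duplicates.
SELECT DISTINCT od.OrderID, od.ProductID, s.ShipDate
FROM   dbo.OrderDetail od
JOIN   dbo.Shipment s ON s.OrderID = od.OrderID;

-- The fix: join on the full key so the duplicates never appear and DISTINCT isn't needed.
SELECT od.OrderID, od.ProductID, s.ShipDate
FROM   dbo.OrderDetail od
JOIN   dbo.Shipment s ON s.OrderID       = od.OrderID
                     AND s.OrderDetailID = od.OrderDetailID;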
To give you a real-life example of what I'm talking about, I just found and fixed a simple little bit of legacy code that consumed only 320 ms of CPU time and only 66,000 reads for each run. Most would be very happy with that, especially given the seemingly low resource usage. What no one else realized is that it runs 40,000 times in an 8-hour period. That's 12,800 CPU seconds (more than 3.5 hours) and 21.6 terabytes of logical I/O. I don't know how much those two items would cost at current Azure pricing, but that's a waste even on "free" systems. I got it down so that it runs in 800 microseconds, uses only 4 reads per run, and is scalable. The new 8-hour totals are only 320 CPU seconds (5.3 minutes) and 1.3 gigabytes of logical I/O. That's 39 times less CPU and more than 16,000 times less logical I/O. Another key point is that the optimization was done AFTER two non-DBAs tried to optimize it and couldn't.
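Jeff doesn't say how he found it, but one common way to surface this kind of "cheap query that runs 40,000 times" problem is the cumulative counters in sys.dm_exec_query_stats; something along these lines (a sketch, not his actual method):

-- Top statements by cumulative logical reads since the plan was cached.
SELECT TOP (20)
       qs.execution_count,
       qs.total_worker_time / 1000 AS total_cpu_ms,   -- total_worker_time is in microseconds
       qs.total_logical_reads,
       SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
                 ((CASE qs.statement_end_offset
                        WHEN -1 THEN DATALENGTH(st.text)
                        ELSE qs.statement_end_offset
                   END - qs.statement_start_offset) / 2) + 1) AS statement_text
FROM   sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
ORDER BY qs.total_logical_reads DESC;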
Don't kid yourself with the promise of super-low TCO for cloud services just because you think you don't need a DBA anymore. If you don't have a DBA, your TCO may end up being a hell of a lot more than you thought, and usually more than it should be. This falls under the old but still very true saying: "Some people know the cost of everything... and the value of nothing."
Even with cloud services like Azure, the value of a good DBA is hard to quantify, but it is certain.
--Jeff Moden
Change is inevitable... Change for the better is not.
April 4, 2013 at 9:50 pm
Jeff Moden (4/4/2013)
For example, if you have some performance challenged code that uses a huge number of resources because someone simply used DISTINCT to get past an accidental Many-to-Many join, it won't cost you much on your own hardware.
I was moved from being a DBA for a product that had multiple separate installations to a product that had every customer in a single silo. The development (and supposedly production) DBAs were on the left coast while the silo was on EST. I'm going to use the term DBA very loosely because of all the errors I found. One of the steps I took was to add a free monitoring tool* just to see the high-cost queries and other stuff.
I found an SP that was returning a list of 27K employees for 500+ individual facilities every time it was run; the list was then being evaluated in the web front end. The SP was running more than 5,000 times per day. It was a quick SP, but totally wasted I/O. Adding a facility ID parameter to the SP cut the results to about 25 rows, and the I/O was significantly decreased.
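In other words (with hypothetical object names, just to show the shape of the change), the fix amounted to filtering at the source instead of in the web tier:

CREATE PROCEDURE dbo.GetEmployeesByFacility
    @FacilityID int
AS
BEGIN
    SET NOCOUNT ON;
    -- Returns roughly 25 rows for one facility instead of all 27K employees.
    SELECT EmployeeID, EmployeeName, FacilityID
    FROM   dbo.Employee
    WHERE  FacilityID = @FacilityID;
END;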
So the development DBA needs to be watched. And tuning, whether local or remote, is always a problem.
* = Confio Ignite Free
----------------
Jim P.
A little bit of this and a little byte of that can cause bloatware.
April 5, 2013 at 2:00 am
Usually, contingency for mission-critical databases and applications running on company hardware is handled at an alternate location, or part of the infrastructure is duplicated to avoid outages. There is no reason to assume this is no longer necessary when you are using a cloud service. However, most cloud service providers do not yet support a scenario where you pay a very small fee for the ability to use a service only as a backup when disaster strikes. Currently, your only option for cloud services that are part of the mission-critical infrastructure is to spread them over different providers. I can only assume you already do this with your internet connection, using several lines from different ISPs. Has anyone been involved in such a scenario?
April 5, 2013 at 5:19 am
I think Mike and Miles are spot on: in many cases it's simply not cost-effective to throw money at making a system that little bit more redundant, but at the same time you have to be aware of those limitations.
I think at times the fault lies with the perceived notion that cloud hosting is the obvious way everything is going, that it has no issues, and that everything will be better in the cloud. The providers tend to prey on the issues we see in our own setups (server crashes, hardware failures, time and money spent on backups, etc.) and tell us none of those things will be an issue if we move our data to them. Of course the reality isn't that straightforward.
In your office, if someone fails to renew a domain or certificate, you can grab the boss's credit card if need be and just get it sorted. When someone else is in control, though, you don't have that option; you just have to trust that they'll get things resolved quickly.
If you put your email into Office 365, then MS make sure it's backed up. Great, except they only do DR backups to cover things if their servers die. If you delete some critical email from a mailbox and haven't set up your own backups, then you're out of luck; MS don't offer any way to restore that data (according to their documentation).
To many people these things aren't an issue, but you need to be aware of them when you're choosing whether to move to the "cloud", and make sure management are aware of the limitations as well as the benefits.
I am surprised things like Azure don't offer more in terms of DR setups for cold/warm standbys; perhaps there's a reason, or perhaps they're missing a trick. I'd love to clone some critical boxes into a third-party cloud setup, leave them suspended, and be able to fire them up if our own setups/DCs had issues, but the cost seems almost as much as if we were using them live.
April 5, 2013 at 6:58 am
Nothing can take the place of someone who takes it personally when the system sucks. It may be 'only a job', but programmer isn't what I do; it's who I am. Having people like us is what keeps the whole world from collapsing. The suits treat us like we're fungible, but we're NOT. Respect for the way we see ourselves is the prime attribute of the developer/DBA.
April 5, 2013 at 7:09 am
Jim P. (4/4/2013)
Jeff Moden (4/4/2013)
For example, if you have some performance challenged code that uses a huge number of resources because someone simply used DISTINCT to get past an accidental Many-to-Many join, it won't cost you much on your own hardware.
I was moved from doing a DBA for a product that had multiple separate installations to a product that had every customer in a single silo. The development (and supposedly production) DBA's were on the left coast while the silo was on EST. I'm going to use the term DBA very loosely because of all the errors I found. One of the steps I did was to add in a free monitoring tool* just to see the high cost queries and other stuff.
I found an SP that was returning a list of 27K employees for 500+ individual facilities every time it was run. They were then evaluating the list in the web front end. The SP was running more than 5000 times per day. It was a quick SP, but totally wasted I/O. Adding a facility ID to the SP changed the results to about 25, and the I/O was significantly decreased.
So the development DBA needs to be watched. And tuning local or remote is always a problem.
* = Confio Ignite Free
Now that's what I'm talking about. But, those don't sound like "Development DBAs". Those sound like developers that think they know SQL Server because their system used an ORM to talk to SQL Server. 😛 All of the true Development DBAs I know (which are mostly on this site, BTW), wouldn't have allowed such a thing to happen never mind having written such terrible code.
--Jeff Moden
Change is inevitable... Change for the better is not.
April 5, 2013 at 9:27 am
Keith Langmead (4/5/2013)
I think at times the fault lies with the perceived notion that cloud hosting is the obvious way everything is going, that it has no issues, and everything will be better in the cloud. The providers tend to prey on issues we see in our own setups, server crashes, hardware failures, time and money spent on backups etc, and tell us none of those things will be an issue if we move our data to them. Of course the reality isn't that straight forward.
Very well said! Vendors continue to try and sell us the "silver bullet" that will solve it all, but none exists. With each decision about the cloud, we have to consider: is this really the appropriate place for the data/application, and is it the right use of resources to support and operate it? Not everything should be in the cloud. And the historic idea that all roads lead to Rome ignores the fact that all roads also lead away from Rome. In other words, there are things that should go there, and things that should not, or that should leave.
Sorry for the ramble but you are spot on.
M.
Not all gray hairs are Dinosaurs!
April 8, 2013 at 7:13 am
Jeff Moden (4/5/2013)
Jim P. (4/4/2013)
Jeff Moden (4/4/2013)
For example, if you have some performance challenged code that uses a huge number of resources because someone simply used DISTINCT to get past an accidental Many-to-Many join, it won't cost you much on your own hardware.
I was moved from doing a DBA for a product that had multiple separate installations to a product that had every customer in a single silo. The development (and supposedly production) DBA's were on the left coast while the silo was on EST. I'm going to use the term DBA very loosely because of all the errors I found. One of the steps I did was to add in a free monitoring tool* just to see the high cost queries and other stuff.
I found an SP that was returning a list of 27K employees for 500+ individual facilities every time it was run. They were then evaluating the list in the web front end. The SP was running more than 5000 times per day. It was a quick SP, but totally wasted I/O. Adding a facility ID to the SP changed the results to about 25, and the I/O was significantly decreased.
So the development DBA needs to be watched. And tuning local or remote is always a problem.
* = Confio Ignite Free
Now that's what I'm talking about. But, those don't sound like "Development DBAs". Those sound like developers that think they know SQL Server because their system used an ORM to talk to SQL Server. 😛 All of the true Development DBAs I know (which are mostly on this site, BTW), wouldn't have allowed such a thing to happen never mind having written such terrible code.
Hey, I'm a developer (not a DBA at all) and I wouldn't have done that. There comes a point where something is just poorly thought out, if it was thought out at all. I guess, in the context of the editorial, you cannot get rid of the experts while systems remain complex or custom (in more than just the visuals).
Gaz
-- Stop your grinnin' and drop your linen...they're everywhere!!!