January 31, 2019 at 2:19 pm
patrickmcginnis59 10839 - Thursday, January 31, 2019 2:08 PM: I request permission to commence trolling. Please reply ASAP.
Bring it.
_______________________________________________________________
Need help? Help us help you.
Read the article at http://www.sqlservercentral.com/articles/Best+Practices/61537/ for best practices on asking questions.
Need to split a string? Try Jeff Moden's splitter http://www.sqlservercentral.com/articles/Tally+Table/72993/.
Cross Tabs and Pivots, Part 1 – Converting Rows to Columns - http://www.sqlservercentral.com/articles/T-SQL/63681/
Cross Tabs and Pivots, Part 2 - Dynamic Cross Tabs - http://www.sqlservercentral.com/articles/Crosstab/65048/
Understanding and Using APPLY (Part 1) - http://www.sqlservercentral.com/articles/APPLY/69953/
Understanding and Using APPLY (Part 2) - http://www.sqlservercentral.com/articles/APPLY/69954/
January 31, 2019 at 2:30 pm
I have whacked tables and I've seen DBAs restore over or drop the wrong db and kill production. No one got fired, but there were tense moments.
This usually happens with ad hoc changes when someone is busy or distracted.
January 31, 2019 at 2:51 pm
Today I learned that you can execute UDFs. When I saw the code, I was wondering how they were able to execute a stored procedure inside a function, but later realized those weren't procedures at all - they were scalar functions. It blew my mind. That said, I don't want to keep this code.
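For anyone who hasn't run into this, here's a minimal sketch of the pattern being described. The names (dbo.TrimAndUpper, dbo.CleanName) are made up for illustration; the point is that a scalar UDF can be invoked inside another scalar UDF, and a scalar UDF can also be run with EXEC, which reads almost exactly like a stored procedure call.

-- One scalar UDF...
CREATE FUNCTION dbo.TrimAndUpper (@s varchar(100))
RETURNS varchar(100)
AS
BEGIN
    RETURN UPPER(LTRIM(RTRIM(@s)));
END;
GO

-- ...called from inside another scalar UDF.
CREATE FUNCTION dbo.CleanName (@name varchar(100))
RETURNS varchar(100)
AS
BEGIN
    -- Looks like a routine call, but it's a scalar UDF inside a scalar UDF.
    RETURN dbo.TrimAndUpper(@name);
END;
GO

-- Outside a function, EXEC works on a scalar UDF too, proc-style:
DECLARE @result varchar(100);
EXEC @result = dbo.CleanName @name = '  jones  ';
SELECT @result AS cleaned;   -- JONES

It's also why code like this is worth rewriting rather than keeping: scalar UDFs are invoked row by row, nesting them compounds the cost, and SQL Server 2019's scalar UDF inlining only helps with functions it can actually inline.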
January 31, 2019 at 2:52 pm
Sean Lange - Thursday, January 31, 2019 2:19 PM:
patrickmcginnis59 10839 - Thursday, January 31, 2019 2:08 PM: I request permission to commence trolling. Please reply ASAP.
Bring it.
INTRODUCING THE NEWEST COMPRESSION FROM REDMOND, MICROSOFT DROP@!@
eh I got nuthin, hate wasting good popcorn like that 🙁
January 31, 2019 at 2:59 pm
jasona.work - Thursday, January 31, 2019 1:46 PM: To be fair, how big an impact this would have obviously depends on your system.
The systems I manage? 5 minutes lost? Did anyone notice? Probably only because they got a warning from their front end...
Heck, a couple of the applications that use my servers for databases would keep chugging along doing their thing during the outage and writing the data when it's back up. Now, if you're talking some busy online retailer or the like, 5 minutes could be a couple hundred grand in lost revenue and becomes a big thing, but not, I would think, an RGE (resume-generating event) for the DBA (after all, the DBA wasn't the one that dropped the DBs).
And, really, for me, this wouldn't be a "nope, not going cloud, nope, nope nope" thing, it was a one-off (so far.)
It's probably more subjective how it looks: 5 minutes times how many customers, plus the fact that the dropping of the databases resulted from programming at a company whose focus is programming; that always gets folks chatting. Marketing will probably have something in mind about the estimated price of the reputation hit.
January 31, 2019 at 3:00 pm
I know! The damage done is probably just a DROP in the bucket!!!!
January 31, 2019 at 3:55 pm
patrickmcginnis59 10839 - Thursday, January 31, 2019 3:00 PM: I know! The damage done is probably just a DROP in the bucket!!!!
That's a good one.
January 31, 2019 at 7:15 pm
Grant Fritchey - Thursday, January 31, 2019 1:30 PM:
Ed Wagner - Thursday, January 31, 2019 12:49 PM: Exactly. You hit the nail on the head. 😉
Seriously? They lost five minutes on a few databases. I would be hard pressed to say the same about the servers I used to manage. I'm sure many of us would be. I know a lot of people are down on "the cloud" and yet put their data into hosted providers. What the heck is that but someone else's data center? Same thing.
Going into this stuff eyes wide open? Sure. Actively resisting just because? Nope.
They say only 5 minutes of transactional data was lost, but what was the total time the databases weren't available to the customers? If they had the time to respond to a call with "We're in the process of restoring the databases...", then I'm thinking that 5 minutes of data loss is just the start of the story.
--Jeff Moden
Change is inevitable... Change for the better is not.
January 31, 2019 at 11:11 pm
Jeff Moden - Thursday, January 31, 2019 7:15 PM:
Grant Fritchey - Thursday, January 31, 2019 1:30 PM:
Ed Wagner - Thursday, January 31, 2019 12:49 PM: Exactly. You hit the nail on the head. 😉
Seriously? They lost five minutes on a few databases. I would be hard pressed to say the same about the servers I used to manage. I'm sure many of us would be. I know a lot of people are down on "the cloud" and yet put their data into hosted providers. What the heck is that but someone else's data center? Same thing.
Going into this stuff eyes wide open? Sure. Actively resisting just because? Nope.
They say only 5 minutes of transactional data was lost, but what was the total time the databases weren't available to the customers? If they had the time to respond to a call with "We're in the process of restoring the databases...", then I'm thinking that 5 minutes of data loss is just the start of the story.
Agreed. And if they have a high-volume DB, what is the financial value? In our case we process in excess of 400k financial transactions per minute. If it were just $1 per transaction, you can do the math ...
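Taking those figures at face value (the $1 per transaction is just the illustrative number from the post above): 400,000 transactions/minute × 5 minutes × $1 = $2,000,000 of exposure for a five-minute loss window, before counting any downtime beyond the lost data itself.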
February 1, 2019 at 6:59 am
jasona.work - Thursday, January 31, 2019 1:46 PM:
Grant Fritchey - Thursday, January 31, 2019 1:30 PM: Seriously? They lost five minutes on a few databases. I would be hard pressed to say the same about the servers I used to manage. I'm sure many of us would be. I know a lot of people are down on "the cloud" and yet put their data into hosted providers. What the heck is that but someone else's data center? Same thing.
Going into this stuff eyes wide open? Sure. Actively resisting just because? Nope.
To be fair, how big an impact this would have obviously depends on your system.
The systems I manage? 5 minutes lost? Did anyone notice? Probably only because they got a warning from their front end...
Heck, a couple of the applications that use my servers for databases would keep chugging along doing their thing during the outage and writing the data when it's back up. Now, if you're talking some busy online retailer or the like, 5 minutes could be a couple hundred grand in lost revenue and becomes a big thing, but not, I would think, an RGE (resume-generating event) for the DBA (after all, the DBA wasn't the one that dropped the DBs).
And, really, for me, this wouldn't be a "nope, not going cloud, nope, nope nope" thing, it was a one-off (so far.)
No real arguments at all. I'd just say that while, yes, this ain't good, I've seen databases dropped in production, SANs turned off, servers flooded, servers frozen (after being flooded), a gazillion instances of 'OH ****', and my favorite, everyone in the company has 'sa' privs. All of that stuff caused a lot more outages than five minutes. Overall, Azure (and hell, AWS) is amazing in how little it goes down. Between the architecture and the support, I think Microsoft on the whole is better at this than we are (and I mean the collective all of us, not you or I). And yet, I actively encourage people to have independent backups, to use the geo-replication (which would have covered their asses in this instance), and anything else on offer to add extra protection. Like Ronnie said, trust but verify. Belts and suspenders.
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
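On the geo-replication point above: for Azure SQL Database, setting up an active geo-replica is only a couple of statements. This is a rough sketch, not the full procedure; the database and server names (SalesDb, myserver-dr) are made up, and the options are worth checking against the current documentation.

-- Run in the master database on the primary logical server.
-- Creates a readable geo-secondary of SalesDb on the partner server.
ALTER DATABASE [SalesDb]
    ADD SECONDARY ON SERVER [myserver-dr]
    WITH (ALLOW_CONNECTIONS = ALL);

-- If the primary is lost, failover is initiated from the secondary
-- server's master database:
-- ALTER DATABASE [SalesDb] FAILOVER;                          -- planned
-- ALTER DATABASE [SalesDb] FORCE_FAILOVER_ALLOW_DATA_LOSS;    -- forced

None of which replaces keeping your own independent backups, as Grant says; it's an extra belt on top of the suspenders.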
February 1, 2019 at 7:13 am
Some people just don't get it :pinch:
😎
February 1, 2019 at 7:23 am
Grant Fritchey - Friday, February 1, 2019 6:59 AM: No real arguments at all. I'd just say that while, yes, this ain't good, I've seen databases dropped in production, SANs turned off, servers flooded, servers frozen (after being flooded), a gazillion instances of 'OH ****', and my favorite, everyone in the company has 'sa' privs. All of that stuff caused a lot more outages than five minutes. Overall, Azure (and hell, AWS) is amazing in how little it goes down. Between the architecture and the support, I think Microsoft on the whole is better at this than we are (and I mean the collective all of us, not you or I). And yet, I actively encourage people to have independent backups, to use the geo-replication (which would have covered their asses in this instance), and anything else on offer to add extra protection. Like Ronnie said, trust but verify. Belts and suspenders.
It's all WYPIWYG (what you pay is what you get). I would not want to run a high-volume transactional system on an Azure SQL Database instance; when it comes to 30K+ transactions/sec, one can lose a lot in seconds.
😎
If one faces a potential loss of any great magnitude, then one must invest in the right infrastructure; simple due diligence!
February 1, 2019 at 7:46 am
Grant Fritchey - Friday, February 1, 2019 6:59 AM: No real arguments at all. I'd just say that while, yes, this ain't good, I've seen databases dropped in production, SANs turned off, servers flooded, servers frozen (after being flooded), a gazillion instances of 'OH ****', and my favorite, everyone in the company has 'sa' privs. All of that stuff caused a lot more outages than five minutes. Overall, Azure (and hell, AWS) is amazing in how little it goes down. Between the architecture and the support, I think Microsoft on the whole is better at this than we are (and I mean the collective all of us, not you or I). And yet, I actively encourage people to have independent backups, to use the geo-replication (which would have covered their asses in this instance), and anything else on offer to add extra protection. Like Ronnie said, trust but verify. Belts and suspenders.
Yeah, I've got a few of those t-shirts myself (once restored a QA backup over a production instance...)
But yes, the service itself doesn't go down all that often. I think what makes it much more newsworthy when it happens is how many businesses get impacted at once. If the back-end database goes down for Uncle Albert's Auto Stop and Gunnery Shop, most people won't notice. But if it's on an Azure site and 20-30 other companies also go down? Yeah, that's going to get noticed...
February 1, 2019 at 9:15 am
How many times has significant data loss occurred because of infrastructure issues? Availability, definitely, but not data loss. It seems as if every data loss issue I've had or seen was due to human error.
And I classify a server crash with no backup/recovery plan in place as human error!
Often. Hardware crashes are infrastructure, right? Those result in data loss regularly. Maybe within an RPO, but most people aren't inside 5 minutes either, and likely lose more.
Michael L John
If you assassinate a DBA, would you pull a trigger?
To properly post on a forum:
http://www.sqlservercentral.com/articles/61537/
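On the RPO point: for a regular SQL Server instance, one quick way to see your actual exposure is to check when each database last had a full and a log backup complete; the gap between the last log backup and "now" is roughly how much committed work a crash could cost you. A sketch against the msdb backup history (adjust the filters to taste):

-- Last full ('D') and log ('L') backup per user database, from msdb history.
SELECT  d.name AS database_name,
        MAX(CASE WHEN b.type = 'D' THEN b.backup_finish_date END) AS last_full_backup,
        MAX(CASE WHEN b.type = 'L' THEN b.backup_finish_date END) AS last_log_backup
FROM    sys.databases AS d
LEFT JOIN msdb.dbo.backupset AS b
       ON b.database_name = d.name
WHERE   d.database_id > 4          -- skip the system databases
GROUP BY d.name
ORDER BY d.name;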
February 5, 2019 at 3:16 am
Eirikur Eiriksson - Friday, February 1, 2019 7:13 AM: Some people just don't get it :pinch:
It's better than what he was doing, which was within an SSIS package. I still wouldn't try to do it all in one statement though - too many different possibilities for bad data.