Oops, I deleted that data

  • I like @"Jeff Moden"'s Rule 2:

    Jeff Moden wrote:

    another auditable process we rigidly follow is that NO ONE IS ALLOWED TO DEPLOY THEIR OWN CODE TO PRODUCTION

    I used to hate having DBAs deploy reports I designed on the report server, as they had all sorts of settings and default parameter values that needed entering. Entering them by hand was the usual process back then. I would always send deployment reports along with a long list of instructions. The DBAs hated going through this (understandably) and there were always a couple of errors made. When a more senior DBA came along, he said to me that he never had to carry out deployments by hand - any deployment passed on to him was always performed via the accompanying script.

    Not deploying your own code forces you to think about all the things you have to take care of so that nothing goes wrong when someone else must deploy your code. That person may not have any idea of what your code does or how it should be deployed, and does not need to. Not only does it force you to think, it makes you take steps to ensure bad things will not happen.

  • Jeff Moden wrote:

    NO ONE IS ALLOWED TO DEPLOY THEIR OWN CODE TO PRODUCTION, PERIOD!  Except for the addition of indexes, that includes me... the DBA.

    When I worked as a test consultant for database performance we used to call this "marking your own homework" - never, ever deploy unless somebody has sanity-checked it, and even then let the other guy press F5.

    MVDBA

    If your 500 GB database used multiple filegroups (with the bigger tables ideally partitioned across different FGs too), you would be able to do a partial restore of only the single filegroup where the faulty table sits.

    You would not be able to read from tables in the other filegroups, and the [PRIMARY] filegroup is always restored as well (for this reason NO user table / index should reside there), but instead of waiting 4 hours you would be done in a few minutes (provided the table is not in the biggest filegroup). Another benefit is that you could place the database files on different disks (slow HDDs for the cold data, fast SSDs for the hot data). A sketch of such a piecemeal restore follows.
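
    For illustration only - a minimal sketch of a piecemeal (partial) restore, where the database, filegroup, and backup file names are all hypothetical:

        -- Restore the PRIMARY filegroup plus the one holding the faulty table
        RESTORE DATABASE BigDb
            FILEGROUP = 'PRIMARY',
            FILEGROUP = 'FG_Orders'
            FROM DISK = N'D:\Backup\BigDb_full.bak'
            WITH PARTIAL, NORECOVERY;

        -- Roll forward (in reality, the whole log backup chain) and come online
        RESTORE LOG BigDb
            FROM DISK = N'D:\Backup\BigDb_log.trn'
            WITH RECOVERY;

        -- The remaining filegroups can be restored later, or not at all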

    God is real, unless declared integer.

    Well, that is fairly common when developers have direct write access to production databases, especially new joiners.

    Any script execution should happen through the DBAs. Even then, the DBAs are not really sure about the effects of the scripts they execute. With one of my clients we suggested that developers use the OUTPUT clause on INSERT, UPDATE and DELETE statements. The affected rows get logged into a table named for the ticket number, in a DB which is cleaned out periodically. There is a little additional storage usage; however, considering the business impact of "Oops, I deleted that data", I would take it.

    With this process, I always make sure there is an OUTPUT clause on any data-change statement. A sketch of the pattern follows.
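
    A minimal sketch of that pattern - the table, column, and ticket names are hypothetical, and the audit table is shown pre-created in the local database for brevity (in the setup above it lives in a separate audit DB):

        -- Capture the rows being deleted into a pre-created,
        -- ticket-numbered audit table before they disappear
        DELETE FROM dbo.Orders
        OUTPUT
            DELETED.OrderID,
            DELETED.CustomerID,
            DELETED.Amount,
            SYSUTCDATETIME()
        INTO dbo.Ticket12345_DeletedOrders
            (OrderID, CustomerID, Amount, DeletedAtUtc)
        WHERE OrderID = 987654;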

  • First, if your developers have the ability to delete data from a production database, it's not a production database, it's a development database. There's no excuse for letting developers into production.

    Second, if you can afford to lose 30 minutes of data, fine, but otherwise your log backups should run every minute. Yes, really. A minimal sketch of such a job step follows the link.

    https://www.brentozar.com/archive/2014/02/back-transaction-logs-every-minute-yes-really/
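
    For illustration, a log backup statement you might schedule every minute (e.g. as an Agent job step) - the database name and backup path are hypothetical:

        -- Build a sortable, filename-safe UTC timestamp
        DECLARE @stamp nvarchar(32) =
            REPLACE(CONVERT(nvarchar(19), SYSUTCDATETIME(), 126), N':', N'');
        DECLARE @file nvarchar(260) =
            N'E:\LogBackups\ProdDb_' + @stamp + N'.trn';

        -- Requires the database to be in FULL (or BULK_LOGGED) recovery
        BACKUP LOG ProdDb
            TO DISK = @file
            WITH COMPRESSION, CHECKSUM;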


  • jermitts 57867 wrote:

    First, if your developers have the ability to delete data from a production database, it's not a production database, it's a development database. There's no excuse for letting developers into production.

    Developers and first-line responders do occasionally need to touch live databases.

    Imagine a customer has accidentally cancelled an order and a support ticket comes in to resurrect it. If we need to bring the order back to life then we have to manipulate data on live (never procs), and we also need to do things like attach proof of the data change - no-one wants a line manager shouting at us for screwing up a million-dollar order from our biggest client.

    Somebody always has to be able to change data on live, and it's the dev team that understands the structure and implications of changing that data.

    I would argue that access should be locked down until they put in a request, and then restricted again after the case is closed... but these are lofty ambitions at present.


    MVDBA

    Where a full backup is not suitable or available for a particular script update, I sometimes make a copy of the individual tables / fields involved using bcp out to text files, so that if necessary I can rebuild the original table data for recovery. A sketch of the idea follows.
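
    A minimal sketch of that approach from a command prompt - the server, database, table, and file names are all hypothetical:

        rem Copy the table out in native format (-n) before running the script;
        rem -T uses a trusted connection, -S names the server
        bcp MyDb.dbo.Orders out "D:\Recovery\Orders_ticket1234.dat" -S MyServer -T -n

        rem If the update goes wrong, load the saved rows into a staging table
        bcp MyDb.dbo.Orders_Restore in "D:\Recovery\Orders_ticket1234.dat" -S MyServer -T -n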

  • Jeff Moden wrote:

    It took 40 people more than 10 days to get us back into business by reloading 3 weeks of files in the correct order, doing the processing on them, making a shedload of manual entries from source documents, etc, etc, and it was never 100% correct for about 6 months.

    My mouth was open in awe at this story.  I'm pretty sure you've mentioned it before (and I'm pretty sure my mouth was open that time too).

    One takeaway I had from your post is that you use tables in another DB and reference them in the main DB via synonym.  We have a web server that references our software DB in this manner and I always wondered if speed would be impacted by such a model (I've never tested it mainly because I'm not the developer) but it seems to be performing well all of these years.  After reading your post I find myself thinking, "It can't be too bad if Jeff Moden uses it!"  Any caveats you can think of for this model?  Keep in mind, we have a 5GB database and a very low amount of web traffic (we're a small member-based company, not a Fortune 500).

  • stelios21 wrote:

    I like @"Jeff Moden"'s Rule 2:

    Jeff Moden wrote:

    another auditable process we rigidly follow is that NO ONE IS ALLOWED TO DEPLOY THEIR OWN CODE TO PRODUCTION

    I used to hate having DBAs deploy reports I designed on the report server, as they had all sorts of settings and default parameter values that needed entering. Entering them by hand was the usual process back then. I would always send deployment reports along with a long list of instructions. The DBAs hated going through this (understandably) and there were always a couple of errors made. When a more senior DBA came along, he said to me that he never had to carry out deployments by hand - any deployment passed on to him was always performed via the accompanying script.

    Not deploying your own code forces you to think about all the things you have to take care of so that nothing goes wrong when someone else must deploy your code. That person may not have any idea of what your code does or how it should be deployed, and does not need to. Not only does it force you to think, it makes you take steps to ensure bad things will not happen.

    That's some awesome feedback.  Thank you.

    I'm right there with the DBA you talked about.  I won't deploy by hand.  It has to be in the script and the script has to follow all the rules.

    I also like your second paragraph... it's exactly the way I've always felt.  It's also a great teaching tool because the same types of things also work very well inside stored procedures.  The folks I work with have gotten extremely fast and good about doing this for both deployment scripts and within stored procedures.  Half of them are front-end Developers and I'd pit any and all of them up against a lot of Database Developers in a lot of other companies.  It's both a pleasure and an honor to work with them.

    As a bit of a sidebar, the Developers do peer reviews amongst themselves and I do 100% DBA/Safety/Performance/etc reviews on their code.  It has gotten to the point where there are only a couple of possible suggestions even on a 60-script release and there are even fewer changes.  They follow written specs for form, fit, function, and readability, and they actually take the time to clean up legacy code (our predecessors left quite the mess).  They also brag about how they found and fixed or prevented performance issues in code and, sometimes, it's even their own code from a couple of years ago.  And I "live" with them in the office.  There is no typical great divide between the Developers and the DBA.  I've turned down better-paying jobs to stay with these guys.

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • thisisfutile wrote:

    One takeaway I had from your post is that you use tables in another DB and reference them in the main DB via synonym.  We have a web server that references our software DB in this manner and I always wondered if speed would be impacted by such a model (I've never tested it mainly because I'm not the developer) but it seems to be performing well all of these years.  After reading your post I find myself thinking, "It can't be too bad if Jeff Moden uses it!"  Any caveats you can think of for this model?  Keep in mind, we have a 5GB database and a very low amount of web traffic (we're a small member-based company, not a Fortune 500).

    We use synonyms a lot between databases so we can keep the general functionality separate and give us lines of demarcation for all sorts of things including the isolation of large tables and similar things.  One caveat that we've run into in the past is similar to what we run into with views.  We don't generally use "Hungarian Notation" to identify different objects and so it sometimes takes people a second or two to figure out if something is a table, view, function, or synonym.  But it takes virtually no time to do such a thing.

    Another more important caveat is that you can't use DRI (FKs) between tables that live in different databases.  This hasn't been a problem for me because I've only split off the largest of tables, and all of those are some form of audit, history, or transaction table that has no FKs.  If I did have such a need, I'd simply make a copy of the usually very small PK table.  If you updated one of the copies, you'd have to update them all.  Extended properties on the tables are a good way to remind folks, and we do use them.  If that doesn't work for you, you could always make a "do nothing" function that has schema_binding on the table, with a note in the flower box to remind people.  A sketch of that function follows.
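
    A minimal sketch of such a "do nothing" reminder function, with hypothetical names - the SCHEMABINDING is what blocks a careless ALTER or DROP of the copied table until someone reads the note:

        CREATE FUNCTION dbo.Reminder_CustomerTypeIsCopied ()
        RETURNS int
        WITH SCHEMABINDING
        AS
        BEGIN
            /* Flower box: dbo.CustomerType is a manually synchronized copy of
               OtherDb.dbo.CustomerType, because cross-database FKs aren't allowed.
               If you change one copy, change them all. */
            RETURN (SELECT COUNT(*) FROM dbo.CustomerType);
        END;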

    An example of why we don't use any form of "Hungarian Notation" is that today's table could become tomorrow's view or synonym, and vice versa.  If you think not using "Hungarian Notation" is confusing, just wait until you run into a view or a synonym with a "tbl" prefix.

    There is another suggestion that I have that has also helped us a lot.  If you move a table to another database, always leave an empty copy of the table in the original database (of course, it will be named differently from the original, because the synonym will use the original name to keep from having to change code).  In a pinch, that allows the database to be copied without having to copy the database with the large table in it.  This works particularly well for audit tables, which usually turn out to be the largest tables in most databases and are usually not essential if you need to "get back in business" after a massive crash, or when you need to make a copy of the database for Development or Test purposes.  All you need to do is rebuild the synonym(s) to point at the local table(s), as sketched below.
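
    A minimal sketch of creating such a synonym and later repointing it - all database and table names are hypothetical:

        -- The big audit table really lives in a separate database
        CREATE SYNONYM dbo.AuditLog FOR BigTablesDb.dbo.AuditLog;

        -- After copying just the main DB (without BigTablesDb), repoint the
        -- synonym at the empty local copy that was left behind
        DROP SYNONYM dbo.AuditLog;
        CREATE SYNONYM dbo.AuditLog FOR dbo.AuditLog_Local;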

    Also, I've not done specific performance testing for the differences between hitting a local table in the same database or a table in a different database on the same instance, but I've never found a difference during performance tests of related stored procedures affected by such a change.

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • Jeff Moden wrote:

    We use synonyms a lot between databases so we can keep the general functionality separate and give us lines of demarcation for all sorts of things including the isolation of large tables and similar things.  One caveat that we've run into in the past is similar to what we run into with views.  We don't generally use "Hungarian Notation" to identify different objects and so it sometimes takes people a second or two to figure out if something is a table, view, function, or synonym.  But it takes virtually no time to do such a thing.

    Our network team use DNS entries to mimic synonyms - a word to the wise - do not do it.

    I go through our code removing four-part-name table references where the table is in the same database... all it does is turn the query into a linked-server call to itself, which is bad.

    The non-DBA element in our business does not understand that

    select x.* from myserver.mydb.dbo.mytable x inner join mytable y on ..... is a waste of round trips and distributed transactions.
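
    For illustration, the same query kept local, assuming it runs inside mydb - the join condition here is hypothetical, since the original elides it:

        -- Stays a plain local query: no linked-server round trip, no DTC
        SELECT x.*
        FROM dbo.mytable AS x
        INNER JOIN dbo.mytable AS y
            ON y.parent_id = x.id;  -- hypothetical join columns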


    MVDBA

  • MVDBA (Mike Vessey) wrote:

    Jeff Moden wrote:

    We use synonyms a lot between databases so we can keep the general functionality separate and give us lines of demarcation for all sorts of things including the isolation of large tables and similar things.  One caveat that we've run into in the past is similar to what we run into with views.  We don't generally use "Hungarian Notation" to identify different objects and so it sometimes takes people a second or two to figure out if something is a table, view, function, or synonym.  But it takes virtually no time to do such a thing.

    Our network team use DNS entries to mimic synonyms - a word to the wise - do not do it.

    I go through our code removing four-part-name table references where the table is in the same database... all it does is turn the query into a linked-server call to itself, which is bad.

    The non-DBA element in our business does not understand that

    select x.* from myserver.mydb.dbo.mytable x inner join mytable y on ..... is a waste of round trips and distributed transactions.

    Oh yah. Had this. Only to be told by the Lead Developer that it's a 'best practice' and that 'I wouldn't understand'.

    So, I made my case and then let it blow up. That was fun. And immature.

    The ability for developers to delete data is simply a feature of mismanaged security. It has nothing to do with DevOps.

    To @Jeff Moden's point, there are absolutely times when live production data needs to be corrected. As long as you have a verifiable, auditable process, you have established ways to begin protecting yourself. There are a billion stories of good intentions going awry. The question you need to pose is: how comfortable are you explaining the problem to the CEO and resting on "Well, he had good intentions"? If that idea does not result in you needing a change of underwear, you might be in the wrong line of work.

    ------------
    Buy the ticket, take the ride. -- Hunter S. Thompson

    We actually force our devs, and even us DBAs, to use a non-production copy of the database to test any data changes before moving them to production. In fact, most items have to be tested by another person (QA team, another Dev / DBA, or the business user) before they can be moved out of non-prod to production, and no Dev has permissions on Production to run code like this.

    Not only does this prevent this scenario 99.9% of the time, it also ensures our backups are valid and working, because of course we have to restore a mostly current version of Production before we can test against recent data.

    Brandie Tarvin, MCITP Database Administrator
    LiveJournal Blog: http://brandietarvin.livejournal.com/
    On LinkedIn!, Google+, and Twitter.
    Freelance Writer: Shadowrun
    Latchkeys: Nevermore, Latchkeys: The Bootleg War, and Latchkeys: Roscoes in the Night are now available on Nook and Kindle.
