September 17, 2007 at 11:48 am
We do periodic restores to dev/QA servers, and it still doesn't keep people out. In this case it turned out to be a last-minute feature that no one knew about, added to an application and approved without going through QA.
What happened was that we had a SQL Server with 400 GB of data that was used by a few people for auditing invoices. We supported them, and they saved the company a lot of money, but no one thought of the server as critical. A developer pointed one of the main apps that everyone uses, including the call center, at this server and no one knew, not even the person supporting most of these apps. It was only by accident, after months of mysterious lockups in the application, that we found out.
For now, our plan is to change the passwords of the SQL user IDs and not share them with anyone. If anyone needs access to a prod SQL Server, we are going to grant it via Windows groups. Each server is going to have an R/O and an R/W group, people will be assigned accordingly, and their SQL rights will be taken away if need be.
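For anyone curious what that looks like in T-SQL, here's a minimal sketch. The group names (DOMAIN\SQL_Prod01_RO, DOMAIN\SQL_Prod01_RW) and the database name are hypothetical placeholders for whatever your domain admins actually create:
-- Hypothetical domain groups; substitute your real group and database names.
CREATE LOGIN [DOMAIN\SQL_Prod01_RO] FROM WINDOWS;
CREATE LOGIN [DOMAIN\SQL_Prod01_RW] FROM WINDOWS;
GO
USE InvoiceAudit;  -- hypothetical database name
GO
CREATE USER [DOMAIN\SQL_Prod01_RO] FOR LOGIN [DOMAIN\SQL_Prod01_RO];
CREATE USER [DOMAIN\SQL_Prod01_RW] FOR LOGIN [DOMAIN\SQL_Prod01_RW];
-- R/O group reads only; R/W group reads and writes.
EXEC sp_addrolemember 'db_datareader', 'DOMAIN\SQL_Prod01_RO';
EXEC sp_addrolemember 'db_datareader', 'DOMAIN\SQL_Prod01_RW';
EXEC sp_addrolemember 'db_datawriter', 'DOMAIN\SQL_Prod01_RW';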
September 17, 2007 at 1:39 pm
I think everyone here is using the term "snapshot" in the colloquial sense (that is, a backup of production) rather than anything else.
Steve G.
September 17, 2007 at 2:26 pm
Steve,
Maybe they are just using it in a colloquial sense, but there are a lot of people who would take the word "Snapshot" literally. So I decided just to stick my two cents in. @=)
On our system, the developers can't run any code in any system except Dev. The DBAs take the code up through Test, QC and finally Prod. And we do a weekly check on permissions anyway, to make sure nothing has been granted that shouldn't be granted.
There are still some holes where something could slip through, but we run a fairly tight ship and simply don't allow anyone but the DBA team to do anything on Production in the back end. Everyone else has to go through the client interface. Since the DBA team is so small, it's still pretty easy to police who did what, and when...
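For what it's worth, the weekly permission check is easy to script against the SQL 2005 catalog views. This is only a rough sketch of the sort of query involved, not our exact audit script:
-- List explicit database/object permissions per principal so anything
-- granted outside the standard roles stands out.
SELECT  pr.name                  AS principal_name,
        pr.type_desc             AS principal_type,
        pe.permission_name,
        pe.state_desc,
        OBJECT_NAME(pe.major_id) AS object_name   -- NULL for database-level grants
FROM    sys.database_permissions AS pe
        JOIN sys.database_principals AS pr
          ON pe.grantee_principal_id = pr.principal_id
WHERE   pr.name NOT IN ('public', 'dbo', 'guest')
ORDER BY pr.name, object_name;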
September 17, 2007 at 6:34 pm
In our case, the term "clone" would be more appropriate. I'm not sure how it works, but it takes just a couple of minutes every night on a terabyte database. It has something to do with the disk system we bought.
--Jeff Moden
Change is inevitable... Change for the better is not.
September 18, 2007 at 9:54 am
I think this has been a problem in every IT shop since the earth cooled. If it weren't for the darn coders (and users), our servers would run like tops all the time.
Not to sound like a manager or anything, but there are some "non-technical" alternatives for solving this type of problem... namely, accountability. If coders are causing the apps to die, call them out on it. Make sure they are the ones getting the call at 03:00 to fix the problem. If you can't identify which individual caused the problem, call their boss (at 03:05). You'll be amazed at how fast problems like that get fixed.
In a small shop it's important to "be a team player" but people need to also take responsibility for their own actions.
OK, I'm off my high-horse and back to being a techie again.
September 18, 2007 at 8:30 pm
Heh... no such problems if you don't allow the coders to promote their own code and the DBA actually does his/her job...
--Jeff Moden
Change is inevitable... Change for the better is not.
September 18, 2007 at 9:32 pm
One quick comment on taking a snapshot of the production database for use on dev/QA: since the developers (and maybe DBAs) are in the middle of making modifications to their code, and potentially to the database as well, a daily or weekly snapshot of the production database can be pretty disruptive to their work (e.g., it overwrites the stored procedure that was just modified) and cause all kinds of grief for test cases (e.g., yesterday we got 100 rows back when we ran that query; today it's 103?). One method I've used is to take a copy/snapshot/backup of the production database back down onto both the QA and Dev environments immediately after each production release. Assuming you're on a reasonable release schedule, this works pretty well. The developers get all the benefit of working against a fairly reasonable facsimile of the production database without needing access to the production database itself.
There are a couple of caveats to the copy-prod-to-Dev/QA method. Depending on your environment and your data policies, you may need to "sanitize" the production database (e.g., scramble SSNs or personal information, which is easy enough to script) on QA/Dev before handing it over to your developers; a rough sketch follows below. There are some other gotchas, particularly with SQL 2005 and the advent of encryption, certificates, etc., but it can be done.
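Here's a rough T-SQL sketch of both steps. The database, logical file, table, and column names (Sales_Dev, dbo.Customers.SSN, etc.) are hypothetical stand-ins for whatever your environment actually uses:
-- 1) Restore the latest production backup over the dev copy.
RESTORE DATABASE Sales_Dev
FROM DISK = N'\\backupshare\Sales\Sales_full.bak'
WITH MOVE 'Sales_Data' TO N'D:\Data\Sales_Dev.mdf',
     MOVE 'Sales_Log'  TO N'L:\Logs\Sales_Dev.ldf',
     REPLACE;
GO
-- 2) Sanitize personal data before handing it to developers,
--    e.g. overwrite SSNs with a harmless deterministic value.
USE Sales_Dev;
UPDATE dbo.Customers
SET    SSN = RIGHT('000000000' + CAST(CustomerID AS varchar(10)), 9)
WHERE  SSN IS NOT NULL;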
One other thought: in the tightest environment I've ever seen, the developers/DBAs were required to script all the changes to the database (and build installers for all their code), which were then applied to QA (left over from the previous release), testing was done, etc. The last step before go-live was to "wipe" QA, bring in fresh copies of the production databases, etc., then do a second install of the application and database changes to QA, followed by a second run-through by the QA group. If the second install to the virgin QA environment had any issues, the release was kicked back, QA was wiped again, and another run-through was scheduled. It was not good to be the guy/gal who messed it up the first time, and even worse to be the one the second time around.
Everybody is probably saying "Geez, that sounds really hard! (or really stupid)", but the technologies to do it with a minimum of pain are available in many shops (e.g. virtualization, SAN/NAS w/ image copy/snapshots, blade servers, etc.).
Joe
September 19, 2007 at 3:58 am
Actually, that's almost what we do in our shop. The main difference is that we refresh Dev / Test / QC right before we put in (or start) a release cycle. We do the restore in the middle of the night so we don't disrupt anyone and if the developers have code in Dev that needs to be re-run, so be it. They have to re-run their scripts.
The great thing about this method is if they "fixed" something in their dev code but forgot to script out the fix, they'll know as soon as they try and run something. It's happened before. Sometimes they'll leave an ALTER script in their code when they meant to change it back to a CREATE, sometimes they'll change a value in a table and forget to add the data update to their scripts. That sort of thing.
Constant refreshes of the database in question (not production, of course) help to minimize problems, because we've shaken out all the "forgotten" stuff before the code ever makes it to production.
September 20, 2007 at 3:26 am
Start by sending all developers on a compulsory ITIL Foundation course with the requirement that they pass the ISEB Foundation Certificate in IT Service Management.
It will teach them about change management and change control.
While they're away, implement it.
September 20, 2007 at 6:57 am
Heh... You beat me to it!
And, if you can't afford that kind of schooling for the company, at least read up on change management and change control. Use Access or something to build a ticketing/tracking system for the defects and the fixes. Learn to promote "tickets", not stand-alone code...
--Jeff Moden
Change is inevitable... Change for the better is not.
September 20, 2007 at 8:39 am
And, if you work for a company whose stock trades on a US stock exchange, you better have comprehensive change management policies in place or the SOX police will be knocking on your door.
Our change management policies generally require a two week lead time for any change to a production database, other than data fixes. There are exceptions for low risk and emergency changes, but they attract attention from senior management that we usually want to avoid.
September 22, 2007 at 10:21 pm
If you're using Windows authentication, have the domain admins create a Windows group for the developers and grant that group access to SQL.
If you're using SQL authentication, you'll have to give them individual logins.
Create a database role that has the following:
- db_datareader (database role)
- VIEW DEFINITION on the dbo schema (or any other schemas that contain objects they need to see the code for)
If you're paranoid, add in DENY for CREATE, ALTER, and DROP on the schemas, plus db_denydatawriter.
Add that role to either the Windows group or the individual logins.
Done.
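A minimal sketch of the above in T-SQL, assuming a hypothetical Windows group called DOMAIN\Developers (substitute individual logins if you're on SQL authentication), run in the database in question:
-- Login and database user for the developers' Windows group.
CREATE LOGIN [DOMAIN\Developers] FROM WINDOWS;
CREATE USER  [DOMAIN\Developers] FOR LOGIN [DOMAIN\Developers];
-- Custom role: read data and view code, nothing else.
CREATE ROLE DevReadOnly;
EXEC sp_addrolemember 'db_datareader', 'DevReadOnly';
GRANT VIEW DEFINITION ON SCHEMA::dbo TO DevReadOnly;
-- The paranoid extras: block writes and schema changes.
EXEC sp_addrolemember 'db_denydatawriter', 'DevReadOnly';
DENY ALTER ON SCHEMA::dbo TO DevReadOnly;   -- covers create/alter/drop within the schema
-- Finally, put the group (or individual logins) into the role.
EXEC sp_addrolemember 'DevReadOnly', 'DOMAIN\Developers';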
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
September 23, 2007 at 8:59 am
If you're using Windows authentication, have the domain admins create a Windows group for the developers and grant that group access to SQL.
Yeah... just not for access to the Production box 😉
--Jeff Moden
Change is inevitable... Change for the better is not.
September 23, 2007 at 9:27 am
Jeff Moden (23/09/2007)
Yeah... just not for access to the Production box 😉
Personally, I don't have a problem with some developers having very limited access to the production DBs. The key phrase there, though, is 'very limited'. At most, read permission on a few necessary tables and view definition on the schema.
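Something like that, sketched with hypothetical table and group names:
CREATE ROLE DevLimitedProd;
GRANT SELECT ON dbo.Orders       TO DevLimitedProd;  -- only the few tables they actually need
GRANT SELECT ON dbo.OrderDetails TO DevLimitedProd;
GRANT VIEW DEFINITION ON SCHEMA::dbo TO DevLimitedProd;
EXEC sp_addrolemember 'DevLimitedProd', 'DOMAIN\Developers';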
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
September 23, 2007 at 9:38 am
Now... THAT I agree with! 😀
--Jeff Moden
Change is inevitable... Change for the better is not.