February 11, 2016 at 2:51 pm
I'm thinking about setting up a pre-production instance, perhaps just Express Edition, on which to smoke-test deployment scripts prior to deploying to Production. So the scripts go from UAT -> Pre-Production -> Production. DDL triggers and auditing are useful for a DBA because they can alert you immediately when an event occurs, but it would be nice to have this pre-production environment handy for occasional schema comparisons against Production, which would indicate exactly what has been modified outside the official deployment process. Also, scheduled backups on the instance would build up a history of point-in-time snapshots of the DDL and T-SQL code base. Since the pre-production instance is schema only, the backups would be very small and easily accessible.
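On the alerting side, a database-level DDL trigger is enough to log every schema change as it happens. Here is a rough sketch (the dbo.DDLAudit table is just a made-up placeholder, not an existing object):

-- Hypothetical audit table to hold captured DDL events.
CREATE TABLE dbo.DDLAudit
(
    EventTime DATETIME2     NOT NULL DEFAULT (SYSDATETIME()),
    LoginName NVARCHAR(128) NOT NULL,
    EventType NVARCHAR(128) NOT NULL,
    EventData XML           NOT NULL
);
GO

-- Fires on all database-level DDL (CREATE / ALTER / DROP, etc.) and stores
-- the EVENTDATA() payload, so out-of-process changes are caught immediately.
CREATE TRIGGER trgAuditDDL
ON DATABASE
FOR DDL_DATABASE_LEVEL_EVENTS
AS
BEGIN
    DECLARE @evt XML = EVENTDATA();
    INSERT INTO dbo.DDLAudit (LoginName, EventType, EventData)
    VALUES (@evt.value('(/EVENT_INSTANCE/LoginName)[1]', 'NVARCHAR(128)'),
            @evt.value('(/EVENT_INSTANCE/EventType)[1]', 'NVARCHAR(128)'),
            @evt);
END;
GO

From there, alerting could be an Agent job that polls the audit table, or sp_send_dbmail called from the trigger (with the usual caution about doing extra work inside triggers).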
"Do not seek to follow in the footsteps of the wise. Instead, seek what they sought." - Matsuo Basho
February 16, 2016 at 3:37 pm
Lots goin' on here 😉
Discussion about where the "true" version of the code is
The "true" version of code is where you pull from when you are going to make changes. And when you are about to make changes, you want what should be running in Production, not necessarily what is running in Production. The code in source control should be more reliable because it should have gone through proper testing. What is in Production could have been changed by somebody woken up at 3:00 AM, made a change to get something working, and forgot to mention it. Or it could have been a change made by someone who didn't really care so much for process (details, details) and who left the company (nobody knows if they wanted to or were "asked" to resign), but not after making these types of changes in many places. A change could have been made years ago and was never noticed because it was a seldom-used code path. No matter how changes got introduced, they are out of process, out of "proper" testing ("proper" meaning not merely testing just the one scenario that is causing the error at the moment, but the 5 - 10 other variations as well). There is no good reason to inherently trust what is in Production over what is in source control, unless you don't have source control, in which case we just identified the main problem ;-).
And trust me, years ago, before using source control, I didn't really see what was so great about it, and it even seemed like it would get in the way of productivity. Now, after using it (i.e. Visual SourceSafe, Borland StarTeam, Subversion/SVN, Git) for almost 18 years, I can't imagine not using it. And no, it doesn't get in the way. There is a cost to anything, but it helps more than it hurts.
Steve Jones - SSC Editor (2/4/2016)
I'll still stand that allowing someone to modify code on the server is a fundamental design flaw. It's reality, but it's a problem....
However, if you make a change at 3am, all of you are assuming you'll do a good job.
EXACTLY! Way too much is being assumed in such a situation. It is far too easy to fix what you are intending to fix, and yet unintentionally break one or more other things, especially when in a panic and/or half-asleep. And probably things you wouldn't know to test. Or didn't have the correct variety of sample data to test with.
But yes, it is a reality. However, the circumstances usually dictate how far to stretch the ideal. If the system is entirely down, then making a quick fix in Production might be more helpful than harmful. But outside of that extreme situation, even for a serious problem that needs immediate attention and wakes somebody up at 3:00 AM: if it requires a code change, then it also means waking up a QA person to test the change. Changes shouldn't be made without first being tested, if not also code reviewed.
Making changes directly to Production is essentially Russian Roulette: while the odds are in your favor, there is too great a chance of failure. Sure, some people will take these risks for 40 years and never have a problem. Others will fail on their first try. But it doesn't take a whole lot of effort and process to lower the potential for failure as much as possible.
Andy Warren (2/2/2016)
I'm interested in the why. I like source control, use it all the time for code. Why isn't it used as naturally/often for the database? Is it really that hard? Or is it just because we're not taught to do it in the beginning?
I see it as a fundamental difference of environment: app code is self-contained (i.e. stateless) while most database changes inherently include state. If you want to replace the current version of a DLL with a new one, just put the new one in place (simplifying slightly). If you want to promote your DB changes to get the live DB up to a new "version", then you can't just wipe and replace. There are multiple levels of dependencies once you move beyond very simple databases: ever pushed out schema changes on tables that have tens of millions of rows AND are involved in replication and/or accessed across Linked Servers? Are Triggers considered schema or code? If you are adding a new column and the current Trigger is being updated to account for it, that needs proper coordination. Need to go back to a prior version? Just put in the old DLL. Roll back the database changes? Well, that depends. Did you add / update / remove any data as part of the rollout?
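To make the state problem concrete, here's a rough sketch (table and column names invented) of why "just roll back" rarely works for schema the way it does for a DLL:

-- Rollout (v2): split a single name column into two (hypothetical table).
ALTER TABLE dbo.Customer ADD FirstName NVARCHAR(50) NULL,
                             LastName  NVARCHAR(50) NULL;
GO
UPDATE dbo.Customer
SET    FirstName = LEFT(FullName, CHARINDEX(' ', FullName + ' ') - 1),
       LastName  = STUFF(FullName, 1, CHARINDEX(' ', FullName + ' '), '');
GO

-- "Rollback" to v1: the DDL is trivial to reverse, but any data written to
-- FirstName / LastName after the rollout is silently lost. That lost state
-- is exactly what swapping back to the old DLL never has to worry about.
ALTER TABLE dbo.Customer DROP COLUMN FirstName, LastName;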
Source control, when approached from the app-code point of view, is easy because the build server can just grab the latest version and it just needs to compile and follow whatever rules are in the .csproj (or equivalent) file. But source control for DB code doesn't necessarily provide a complete product to slap onto a server. So using source control isn't a requirement for getting the changes pushed out.
FunkyDexter (2/5/2016)
3. They usually required a change to be scripted. Really? I'm in SSMS with nice graphical designers and you would want me to ignore all that GUI goodness and write a query to add a column to a table?
Either write the script yourself, or at the very least use SSMS's "script" button to generate a starting point. But even something as simple as adding a column can cause issues if not done correctly. Years ago (it might not be the case now), I recall there being some simple changes for which SSMS would load the data into a temp table, drop the current table, create a new table, and then load back in from the temp table; something convoluted like that. But even if that has improved, depending on what version of SQL Server you are using, adding a NOT NULL column used to always be an offline (i.e. blocking) operation, until SQL Server 2012 introduced the ability to use a run-time constant DEFAULT value so that it could be an online operation. No DEFAULT being added? Back to square one. When you have large tables and little to no maintenance window to push out lots of changes, a simple ALTER TABLE won't do. And if someone says "but dev doesn't have 5 million rows", that still doesn't work, because whatever script was used to get the change into dev should be the same mechanism by which the change gets propagated to every other environment. Obviously those who prefer to do a schema compare to generate a DIFF script won't agree, but for sufficiently complex projects, it is very important to not only test the changes that you are making, but also the mechanism by which those changes are being made (i.e. the rollout script and build process). This helps reduce complications that arise when pushing the changes to Production.
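To illustrate (table and column names are made up; run one option or the other, not both):

-- Option A, SQL Server 2012+ (Enterprise Edition): a runtime-constant
-- DEFAULT makes adding the NOT NULL column a metadata-only change, so
-- there is no long, blocking update of every row.
ALTER TABLE dbo.BigTable
    ADD Status TINYINT NOT NULL
        CONSTRAINT DF_BigTable_Status DEFAULT (0);
GO

-- Option B, older versions (or no DEFAULT): add the column as NULLable,
-- backfill in small batches to limit blocking, then enforce NOT NULL.
ALTER TABLE dbo.BigTable ADD Status TINYINT NULL;
GO
DECLARE @rows INT = 1;
WHILE @rows > 0
BEGIN
    UPDATE TOP (5000) dbo.BigTable
    SET    Status = 0
    WHERE  Status IS NULL;
    SET @rows = @@ROWCOUNT;
END;
GO
ALTER TABLE dbo.BigTable ALTER COLUMN Status TINYINT NOT NULL;

And that batched script is exactly the kind of rollout mechanism that deserves testing in dev, regardless of how few rows dev has.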
Concerns about SSDT
I could have misread, but it sounded like there were some concerns about using SSDT to skip proper testing. If that was the concern, then it is misplaced. SSDT, even if not well suited to anything beyond a moderate level of complexity, does have some nice features, such as referenced-object validation. And for those who like pushing out changes via schema comparisons, that is what it does. But that doesn't mean going from local dev straight to Production. You can deploy/publish to anywhere. It even has a built-in shortcut to publish changes to dev. And if you are using source control like you should, then it shouldn't be too much trouble to automate the publishing of committed changes to QA and UAT/Staging via the SqlPackage.exe utility, which can either publish changes immediately/directly or just generate the incremental build script. I'm not saying that I would use SSDT to manage a database project, but teams can be successful using it with source control and a proper SDLC.
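For example, something along these lines (server, database, and file names are just placeholders):

:: Publish the compiled .dacpac directly to the QA database:
SqlPackage.exe /Action:Publish /SourceFile:"MyDb.dacpac" ^
    /TargetServerName:"QA-SQL01" /TargetDatabaseName:"MyDb"

:: Or only generate the incremental upgrade script for review:
SqlPackage.exe /Action:Script /SourceFile:"MyDb.dacpac" ^
    /TargetServerName:"QA-SQL01" /TargetDatabaseName:"MyDb" ^
    /OutputPath:"MyDb_upgrade.sql"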
Question about how to code and test hotfixes since dev and QA environments do not match Production
One of the last places I worked at had an extra environment set up specifically for this. It was a dedicated hotfix dev/QA environment that was always there. It had fewer servers and less test data than Staging, but every release was also pushed to this environment. They even distinguished between long-term dev (main release cycle) and short-term dev (hotfix). We used SVN, and there were separate branches for short-term dev, both for app code and all DB stuff. There were app servers pointing to short-term dev and a build server (we used NAnt and CruiseControl) that watched the short-term dev branch in SVN.
So, upon each release, DB changes were pushed to the short-term dev DBs, app code + HTML + anything else was pushed to the short-term dev app servers, and the "release" branch in SVN was merged into the short-term branch so that once again, we had our "true" code in the proper place.
SQL# — https://SQLsharp.com/ ( SQLCLR library of over 340 Functions and Procedures)
Sql Quantum Lift — https://SqlQuantumLift.com/ ( company )
Sql Quantum Leap — https://SqlQuantumLeap.com/ ( blog )
Info sites — Collations • Module Signing • SQLCLR
February 19, 2016 at 9:32 am
Working in a DW environment, my biggest challenge is coordinating databases, SSIS packages and SQL Agent Jobs. Most integrated VCSes do an admirable job on the database side, but seem to be somewhat lacking on the management of packages and jobs. Especially since they are tied together: a job runs some package that depends on a database. Updates to one element may require (and usually do require) updates to other elements.
I'm still looking for a VCS that brings those together. Then there are other things:
MDS, DQS, SSAS, MDW, ...
Gerald Britton, Pluralsight courses
February 19, 2016 at 9:51 am
g.britton (2/19/2016)
Working in a DW environment, my biggest challenge is coordinating databases, SSIS packages and SQL Agent Jobs. Most integrated VCSes do an admirable job on the database side, but seem to be somewhat lacking on the management of packages and jobs. Especially since they are tied together: a job runs some package that depends on a database. Updates to one element may require (and usually do require) updates to other elements. I'm still looking for a VCS that brings those together. Then there are other things:
MDS, DQS, SSAS, MDW, ...
Your CI process could include the deployments of packages and SQL Agent jobs, so at least you should be able to verify that things work harmoniously.
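Something along these lines could be scripted as part of the CI run (folder, project, job, and file names are invented for illustration):

-- Deploy a built .ispac into the SSIS catalog.
DECLARE @proj VARBINARY(MAX) =
    (SELECT BulkColumn
     FROM   OPENROWSET(BULK N'C:\builds\ETL.ispac', SINGLE_BLOB) AS src);

EXEC SSISDB.catalog.deploy_project
     @folder_name    = N'DW',
     @project_name   = N'ETL',
     @project_stream = @proj;

-- Create the matching Agent job so packages and jobs ship together.
EXEC msdb.dbo.sp_add_job       @job_name  = N'DW Nightly Load';
EXEC msdb.dbo.sp_add_jobstep   @job_name  = N'DW Nightly Load',
                               @step_name = N'Run ETL package',
                               @subsystem = N'SSIS',
                               @command   = N'/ISSERVER "\"\SSISDB\DW\ETL\Load.dtsx\"" /SERVER "\"(local)\""';
EXEC msdb.dbo.sp_add_jobserver @job_name  = N'DW Nightly Load';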
As for 'bringing them together', what do you have in mind?
The absence of evidence is not evidence of absence
- Martin Rees
The absence of consumable DDL, sample data and desired results is, however, evidence of the absence of my response
- Phil Parkin
February 19, 2016 at 10:40 am
While I can see jobs being scripted, I'm wondering if anyone has come up with good ways to put SSIS code into version control? I know you can put versions of it into VCS, which is better than nothing, but I don't feel like anything has been done to allow branching and merging. I could have just missed this along the way, but am open to ideas for branching and merging SSIS code if anyone knows some good methods.
February 19, 2016 at 10:42 am
Phil Parkin (2/19/2016)
g.britton (2/19/2016)
Working in a DW environment, my biggest challenge is coordinating databases, SSIS packages and SQL Agent Jobs. Most integrated VCSes do an admirable job on the database side, but seem to be somewhat lacking on the management of packages and jobs. Especially since they are tied together: a job runs some package that depends on a database. Updates to one element may require (and usually do require) updates to other elements. I'm still looking for a VCS that brings those together. Then there are other things:
MDS, DQS, SSAS, MDW, ...
Your CI process could include the deployments of packages and SQL Agent jobs, so at least you should be able to verify that things work harmoniously.
At the moment, that's totally manual on our part. I've been looking for a comprehensive tool to assist us in running a CI process that also covers jobs and packages.
Any recommendations? SSDT won't do it (or I don't know how). The BI tools are different from (and unaware of) the DB tools AFAICS. e.g. if you want to write a package to run against a future version of a database, you can't reference a DACPAC (or I don't know how).
As for 'bringing them together', what do you have in mind?
For me, bringing them together means at least dependency checks between these elements, so that I would be stopped from promoting a new job if the package wasn't ready, and/or so that deploying an updated job automatically includes the packages it calls (and the packages those packages call, etc.).
Gerald Britton, Pluralsight courses
February 19, 2016 at 10:55 am
Peter Schott (2/19/2016)
While I can see jobs being scripted, I'm wondering if anyone has come up with good ways to put SSIS code into version control? I know you can put versions of it into VCS, which is better than nothing, but I don't feel like anything has been done to allow branching and merging. I could have just missed this along the way, but am open to ideas for branching and merging SSIS code if anyone knows some good methods.
IMO, SSIS packages should be treated as executables (no merges). Branching of SSIS projects is fine, of course, but if you subsequently get merge conflicts in a single package because two devs have separately been working on it, the merge process becomes painfully manual.
The absence of evidence is not evidence of absence
- Martin Rees
The absence of consumable DDL, sample data and desired results is, however, evidence of the absence of my response
- Phil Parkin
February 19, 2016 at 11:00 am
g.britton (2/19/2016)
Phil Parkin (2/19/2016)
g.britton (2/19/2016)
Working in a DW environment, my biggest challenge is coordinating databases, SSIS packages and SQL Agent Jobs. Most integrated VCSes do an admirable job on the database side, but seem to be somewhat lacking on the management of packages and jobs. Especially since they are tied together: a job runs some package that depends on a database. Updates to one element may require (and usually do require) updates to other elements. I'm still looking for a VCS that brings those together. Then there are other things:
MDS, DQS, SSAS, MDW, ...
Your CI process could include the deployments of packages and SQL Agent jobs, so at least you should be able to verify that things work harmoniously.
At the moment, that's totally manual on our part. I've been looking for a comprehensive tool to assist us in running a CI process that also covers jobs and packages.
Any recommendations? SSDT won't do it (or I don't know how). The BI tools are different from (and unaware of) the DB tools AFAICS. e.g. if you want to write a package to run against a future version of a database, you can't reference a DACPAC (or I don't know how).
As for 'bringing them together', what do you have in mind?
For me, bringing them together means at least dependency checks between these elements, so I would be stopped from promoting a new job if the package wasn't ready and/or deploying an updated job automatically includes the packages it calls (and the packages those packages call, etc).
TeamCity is an awesome tool to help you get your CI up and running. It's also free (up to a certain number of jobs).
Those dependency checks you mention would be fabulous. I know of no way of doing that. But the CI tool would at least help you find the inconsistencies because things would fail on your CI testing server.
The absence of evidence is not evidence of absence
- Martin Rees
The absence of consumable DDL, sample data and desired results is, however, evidence of the absence of my response
- Phil Parkin
February 19, 2016 at 11:17 am
Thanks for the link to TeamCity. I'll check it out later (Flash is blocked for us).
Gerald Britton, Pluralsight courses
February 25, 2016 at 4:58 pm
Thought provoking, thank you.