November 25, 2008 at 1:58 pm
I'm not sure if this is the correct forum for this but here goes.
I am interested in opinions regarding the completeness of our methodology for managing db changes during development.
Methodology: SCRUM with 3 week sprints
Tools: Visual Studio 2008, TFS, Redgate Tools, SQL 2005/2008
(Each developer has all of these tools installed locally)
From a db standpoint, we have 5 environments set up, not counting production.
1) Sandbox - Used for nightly builds of the application
2) SmokeTest - Used when moving a release to QA
3) QA1 - This is the current QA environment
4) QA2 - This is the previous QA environment. When the app is rolled to QA, we keep the previous release available
5) UAT - We have a group of current clients that serve as an advisory panel that access this
All scripts are stored in TFS and in Database projects in Visual Studio.
Each developer can run local copies of the databases and develop and make changes as needed. When required, they can check the scripts into the appropriate TFS project under folders defined for either sprint patches or release patches. Sprint patches are changes to objects related to the current sprint. Release patches are to be applied against a release (QA or UAT) to fix an issue without requiring another app build.
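To give a concrete (if simplified) picture, a release patch checked into one of those folders might look something like this; the table and procedure names are just made up for illustration:

-- Hypothetical release patch, written so it can be re-run safely.
-- dbo.Customer and dbo.usp_GetCustomer are invented names.
IF NOT EXISTS (SELECT 1 FROM sys.columns
               WHERE object_id = OBJECT_ID(N'dbo.Customer')
               AND name = N'PreferredContact')
BEGIN
    ALTER TABLE dbo.Customer ADD PreferredContact varchar(50) NULL;
END
GO

IF OBJECT_ID(N'dbo.usp_GetCustomer', N'P') IS NOT NULL
    DROP PROCEDURE dbo.usp_GetCustomer;
GO

CREATE PROCEDURE dbo.usp_GetCustomer
    @CustomerId int
AS
SELECT CustomerId, Name, PreferredContact
FROM dbo.Customer
WHERE CustomerId = @CustomerId;
GO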
When moving from one environment to the next, we apply all of the sprint patches and use Redgate SQL Compare to generate a create script of all of the objects, which is then stored in TFS as a new starting point. QA1 is rolled to QA2, the sprint patches are applied to QA1, and the app is deployed to QA1.
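For what it's worth, rolling QA1 to QA2 can be as simple as a backup and restore; in sketch form only (database name, logical file names, and paths below are placeholders):

-- Sketch only: rolling the current QA database into the QA2 environment
-- via backup/restore. Names and paths are placeholders.
BACKUP DATABASE [AppDB]
TO DISK = N'\\backupshare\AppDB_QA1.bak'
WITH INIT, COPY_ONLY;
GO

-- Then, on the QA2 server:
RESTORE DATABASE [AppDB]
FROM DISK = N'\\backupshare\AppDB_QA1.bak'
WITH REPLACE,
     MOVE N'AppDB_Data' TO N'D:\Data\AppDB.mdf',
     MOVE N'AppDB_Log' TO N'E:\Logs\AppDB_log.ldf';
GO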
Once a milestone is reached in QA1, that release is deployed to UAT.
What I am interested in is others' experiences in the development process. Is this on par with how you would do it? One issue that we are dealing with is how to 'Branch' the db projects in TFS.
Any input is definitely welcomed.
November 26, 2008 at 6:43 am
Keith West (11/25/2008)
I'm not sure if this is the correct forum for this but here goes.
As good a forum as any. This is a huge topic, but I'll try to offer a few notes from our point of view. I do a lot of work with different development teams and we've worked through a few different agile methodologies, so I think I can shed a little more light than heat.
I am interested in opinions regarding the completeness of our methodology for managing db changes during development.
Methodology: SCRUM with 3 week sprints
Tools: Visual Studio 2008, TFS, Redgate Tools, SQL 2005/2008
(Each developer has all of these tools installed locally)
We used to use the same tool set. About 18 months ago we switched largely to Visual Studio Team System Database Edition for the database side of development.
What you have can work just fine. It's just a bit less fully automated.
From a db standpoint, we have 5 environments set up, not counting production.
1) Sandbox - Used for nightly builds of the application
2) SmokeTest - Used when moving a release to QA
3) QA1 - This is the current QA environment
4) QA2 - This is the previous QA environment. When the app is rolled to QA, we keep the previous release available
5) UAT - We have a group of current clients that serve as an advisory panel that access this
All scripts are stored in TFS and in Database projects in Visual Studio.
Each developer can run local copies of the databases and develop and make changes as needed. When required, they can check the scripts into the appropriate TFS project under folders defined for either sprint patches or release patches. Sprint patches are changes to objects related to the current sprint. Release patches are to be applied against a release (QA or UAT) to fix an issue without requiring another app build.
I'm not hearing anything about labeling releases. Identifying a known state is one of the most important things we've done. If you know what your database looks like at a point in time, then you can always recreate that point in time. That's pretty vital. Applying a label has to be a part of your database deployments. I'd suggest labeling the full set of database objects and then applying the same label to the differential script after you generate it. This should enable you to recreate your database from a given label and know which changes were applied between labels. Always build from a labeled version of the code so that you're focusing on that whole "known state" concept.
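As a small companion to the source control label (just a pattern, not something you have to do), it can help to stamp the database itself so any environment can tell you which labeled release it's running. Table and label names here are only illustrative:

-- Illustrative only: a small table that records which labeled release
-- has been applied to this particular database.
IF OBJECT_ID(N'dbo.SchemaVersion', N'U') IS NULL
BEGIN
    CREATE TABLE dbo.SchemaVersion
    (
        VersionId int IDENTITY(1,1) PRIMARY KEY,
        LabelName varchar(100) NOT NULL,             -- e.g. the TFS label name
        AppliedOn datetime NOT NULL DEFAULT (GETDATE()),
        AppliedBy sysname NOT NULL DEFAULT (SUSER_SNAME())
    );
END
GO

-- The deployment script for a given label then ends with something like:
INSERT INTO dbo.SchemaVersion (LabelName) VALUES ('Sprint12_Release1');
GO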
When moving from one environment to the next, we apply all of the sprint patches and use Redgate SQL Compare to generate a create script of all of the objects, which is then stored in TFS as a new starting point. QA1 is rolled to QA2, the sprint patches are applied to QA1, and the app is deployed to QA1.
I'm not crazy about rolling scripts. I find they're difficult to maintain. I'd rather see them rolled together into a single release and, yeah, the label applied. The goal is to build a script that gets tested over and over prior to going to production.
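In sketch form (all names invented, and building on the version table above), a rolled-up release script can check the starting point it expects before it applies anything:

-- Sketch of a consolidated release script. In practice you'd run this
-- through SQLCMD with :ON ERROR EXIT so a failed check stops the whole run.
IF NOT EXISTS (SELECT 1 FROM dbo.SchemaVersion WHERE LabelName = 'Sprint11_Release1')
BEGIN
    RAISERROR('Database is not at the expected baseline - aborting.', 16, 1);
    RETURN;
END
GO

BEGIN TRANSACTION;
-- ...sprint patches pasted or :r included here, in order...
INSERT INTO dbo.SchemaVersion (LabelName) VALUES ('Sprint12_Release1');
COMMIT TRANSACTION;
GO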
Once a milestone is reached in QA1, that release is deployed to UAT.
What I am interested in is others' experiences in the development process. Is this on par with how you would do it? One issue that we are dealing with is how to 'Branch' the db projects in TFS.
Any input is definitely welcomed.
By and large it sounds like you're pretty well in hand. We store our database code with the application code. If they branch, so do we. If they branch, we have to have another environment for them to work on because you have to have separate databases (we won't create multiple databases on the same environment with different names... bad experiences there). Branching is the hardest thing because you have to maintain those two code sets, but it's the same problem the developers have. It just requires a bit of discipline. The hard part isn't the branch itself, it's rolling up the branch after you're done.
The one point I like to emphasize, you're already covering. Treat your database like code and get it into source control. Build from a known state. Maintain the change scripts with the same label as the source code. That's about it.
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
November 26, 2008 at 9:02 am
I agree with a lot of what Grant wrote, and while I haven't done this in a while, the principles are the same.
Labeling releases is crucial to knowing what's going out as you move forward. Ideally the deployment script you use to go from Sandbox to QA1 is the same script used to move to UAT. If it's not, you have control issues. I realize minor things might need to be added to the script as you find issues, but honestly you could go back and apply the corrective (or additive) code to QA1, then check it into the script used for release, then use that to deploy to UAT.
I worry about using a compare tool and getting changes moving from sandbox to QA that aren't intended, or are still "wet paint". I used comparisons more to double check what was in the release. You want to be careful as you move things around.
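One cheap sanity check before cutting a compare-based script (not a replacement for reviewing the compare output, just a second look) is to list what's been touched recently in the source database:

-- List non-system objects modified in roughly the last week, to spot
-- "wet paint" changes before they ride along in a generated script.
SELECT s.name AS schema_name,
       o.name AS object_name,
       o.type_desc,
       o.modify_date
FROM sys.objects AS o
JOIN sys.schemas AS s ON s.schema_id = o.schema_id
WHERE o.is_ms_shipped = 0
  AND o.modify_date >= DATEADD(DAY, -7, GETDATE())
ORDER BY o.modify_date DESC;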
Haven't used TFS much, but in our VCS we made a copy of everything in a particular sprint/release and labeled it as a dated item (we had weekly releases). That way, changes made to objects for that sprint were propagated to the main branch. Once we built scripts to deploy this to QA, we had to "branch" this sprint in VCS (remove the copy link) because those objects could potentially be modified again for the next sprint while QA was testing.
Does that help with the branch question or is there more?
December 1, 2008 at 10:24 am
Thanks for all of the input. I would like to add that we are labeling our releases. I just forgot to mention it in the scenario.
Once again, thanks. I will take your input to heart.
Keith