This article is part 3 of my series on version control.
- Part 1 - Dealing with Code
- Part 2 - Tracking Changes
- Part 3 - Migrating Objects to Production
- Part 4 - Rolling Back
Introduction
Change management is a key feature of a stable system. Indeed, it is what everyone from mainframe programmers to Microsoft recommends for a production system. SQL Server, as much as any other part of your application, requires effective change management to maintain stability. While this will not eliminate bugs, it will help to reduce them.
In this article, I will examine how I handle moving the actual changes from the development environments to the QA and production environments. The first section of this article repeats a bit of background from the other articles. If you have read them, skip to the My Process section.
My Environment
Before I delve further, let me explain the type of environment that I have used in each of my last few jobs. Basically, I have (at least) three systems set up for each application. These are usually, though not necessarily, three separate servers. These systems are defined as follows:
- Production - The "live" system that is actually being used by users. This is the data of record: the system that runs the database, accepts orders, etc., and runs the business.
- QA - A copy of the production system that is used for testing. This is usually the system that is most often updated (overwritten). Developers should not have access to this system; instead, access should be given exclusively to the testing or Quality Assurance group. Integration and system testing occur here.
- Development - The "sandbox" or "playground" for the development group. This is where items are developed, tested, and verified by developers before being submitted to formal QA. This would be where unit testing and developer integration testing occurs.
- Beta (optional) - A copy of the production system where clients can perform testing. This functions just like the QA environment, except that it serves as a playground for clients.
- Demo - The last couple of companies I have worked for have also needed a "demo" system for the salespeople to use. This functions like the Beta system, except that it is rarely overwritten (from a data perspective).
Changes in the SQL environment are new and altered objects, data seeds, imports of data, etc. All of these changes are first made in the development environment. They are then applied to the QA environment, where the intended effects can be verified by our QA group before being migrated to the other environments.
It is my job, along with the QA person, to manage the changes as they move from one system to another. In actual practice, I "build" a package that migrates changes from the development environment to the QA environment, and the QA person is then responsible for running my "package" against the other systems to apply the changes. This article discusses these packages.
My Process
In a previous article, I showed how I store items in our version control system, in our case Visual SourceSafe, which I will refer to as VSS for short. In the follow-up to that article, I mentioned how I tracked the changes that need to be moved to the production systems. Now I will examine how I actually handle moving those changes among my environments.
I build units of deployments that I will call "packages". These packages contain a series of SQL scripts that are applied against a server. The scripts may create or alter tables, apply security, create stored procedures, etc. I have even included DTS packages in my packages.
In the last article (Track Your Changes), I showed how I track all changes as either "In Flight" or on a specific release date (or enhancement). I have used this technique in both web and client-server environments. Once I have identified a specific project in VSS as being ready for release, I add a specific file to the project. I named this file 0Upload.cmd so that it would sort to the top of the project (by name), but the name is not important. This file is added to the project from the main project (using share & branch) and checked out for editing.
Before I explain how it works, let me say I chose to use a command file (.cmd) because it offers a number of advantages: over the years I have found it to be the most reliable method and the easiest for another individual to deploy. I don't expect this to work for applications, web sites, etc., though it could.
Here is a template of my file:
rem
rem Rollover Script - (date)
rem
rem Parameters Required:
rem   %1 UserID
rem   %2 Password
rem   %3 Server
rem   %4 Database
rem
rem Changes
rem
rem New tables ()
rem ------------
rem
rem
rem Alter Table ()
rem ------------
rem
rem
rem Views ()
rem ------------
rem
rem FKs ()
rem --------------------
rem
rem Functions ()
rem ---------------
rem
rem Sprocs ()
rem ---------
rem
rem Data Updates ()
rem --------------
rem
rem Indexes ()
rem ----------------------
rem
rem Other ()
rem ----------------------
rem
rem
rem template
rem osql -U%1 -P%2 -S%3 -i"xxx.sql" -d%4 >> RolloverErr.txt

if exist rollovererr.txt del rollovererr.txt

rem New Tables

rem Alter Tables

rem Views

rem FK

rem Functions

rem SP

rem data alters

rem indexes

rem other

notepad rollovererr.txt
Kind of long, but really simple. First let's look at each section. After I have added this template to my project (using share & branch), I edit it with notepad. First I change the date to match the projected release date (or the application version, if this is a specific version release). Once this is done, I add the name of each script that is needed to the appropriate section. Table scripts go in the table section, foreign keys in the FK section, etc. Note that I add the script names in the top sections, where all the rem statements appear. As a sample, suppose I needed to add a new table (ProductDetails), a new procedure (ProductDetails_Insert), and seed this table using my "productdetails_seed.sql" script. Each of these scripts would be checked into the VSS project. Given this release (11-14-2001), I would have the following header in my 0Upload.cmd file:
rem
rem Rollover Script - 11-14-2001
rem
rem Parameters Required:
rem   %1 UserID
rem   %2 Password
rem   %3 Server
rem   %4 Database
rem
rem Changes
rem
rem New tables (1)
rem ------------
rem ProductDetails.sql
rem
rem
rem Alter Table ()
rem ------------
rem
rem
rem Views ()
rem ------------
rem
rem FKs ()
rem --------------------
rem
rem Functions ()
rem ---------------
rem
rem Sprocs (1)
rem ---------
rem ProductDetails_Insert.sql
rem
rem Data Updates (1)
rem --------------
rem ProductDetails_seed.sql
rem
rem Indexes ()
rem ----------------------
rem
I have added the name of each file to the appropriate sections and totaled the number of items in each section.
The next step is to move these sections down into the main file. I do this section by section by cutting and pasting the entire section from the header into the matching section in the body. For the tables section this would give:
rem New Tables
rem ProductDetails.sql
I now remove the rem and add the beginning of my template line. This gives:
rem New Tables
osql -U%1 -P%2 -S%3 -i"ProductDetails.sql
Then I append the end of my template line:
rem New Tables
osql -U%1 -P%2 -S%3 -i"ProductDetails.sql" -d%4 >> RolloverErr.txt
This gives me a command line that will run this script using the osql utility. I repeat this for each section.
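To illustrate, after repeating this for the stored procedure and the data seed, the executable body of this release's file would contain sections like these (assembled from the examples above; the blank lines are just for readability):

rem New Tables
osql -U%1 -P%2 -S%3 -i"ProductDetails.sql" -d%4 >> RolloverErr.txt

rem SP
osql -U%1 -P%2 -S%3 -i"ProductDetails_Insert.sql" -d%4 >> RolloverErr.txt

rem data alters
osql -U%1 -P%2 -S%3 -i"ProductDetails_seed.sql" -d%4 >> RolloverErr.txt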
Now I know this seems like a lot of work. Actually, I have a VBS script that will generate this file, with one line for each file in a directory. This quickly builds a 0Upload.cmd file for a release once I check out all the items into a folder.
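My generator is a VBS script, but to show the idea, here is a minimal sketch of the same approach in plain batch (an illustration, not my actual script; it assumes the current folder holds the checked-out .sql files):

@echo off
rem Sketch: append one osql template line per .sql file to 0Upload.cmd.
rem The doubled percents (%%1) write a literal %1 into the output file,
rem and the escaped ^>^> writes a literal >> instead of redirecting here.
if exist 0Upload.cmd del 0Upload.cmd
for %%F in (*.sql) do echo osql -U%%1 -P%%2 -S%%3 -i"%%F" -d%%4 ^>^> RolloverErr.txt >> 0Upload.cmd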
How does my script separate the tables from the stored procedures? It doesn't. While I have coding standards, I don't expect them to be followed so rigidly (especially for legacy code), and I can't depend on anything to reliably separate the different types of objects.
I do have to manually reorder my auto-generated 0Upload.cmd file each week, but the cutting and pasting goes pretty quickly. For a large release (containing 150 objects), the editing of this file takes less than half an hour.
Still with me? Ok, this is the last part.
Now that I have this file built, I total the number of objects and compare this with the total that VSS shows for the project. I should have one more object in VSS (for 0Upload.cmd). If this matches, I am done with this file.
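As an aside of mine (not part of the original process), the standard Windows find utility can do a quick-and-dirty tally of the executable lines; filtering out the rem lines first keeps the commented template line in the header from inflating the count:

find /v /i "rem" 0Upload.cmd | find /c /i "osql"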
Now I check in this file (that's why I use VSS). When we are actually ready to deploy this, I check out all the items into an empty folder. I then open a command prompt in this folder and type the following:
c:\release>0Upload myuser, mypassword, myserver, mydatabase
and hit <Enter>
Tada, my release is applied against the database (mydatabase) on the server (myserver) using the login (myuser) and password (mypassword). When the release is complete, a text file appears on my screen with the results from the release. I can search this for errors and save it with the release documentation (or check it back into VSS).
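A quick habit of mine (again, not a required step): since osql reports SQL Server errors as "Msg ..." lines, the same find utility gives a fast first pass over the results before I read the whole file:

find /i "Msg " RolloverErr.txt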
Let's briefly look at what happens when I run this. The first executable line is the following:
if exist rollovererr.txt del rollovererr.txt
This line merely deletes my error file if it exists. Since I sometimes run into deployment errors, I may restore my QA database a few times and rerun this same package. I don't want to dig through multiple copies of the error file, so I remove it automatically.
Each line in the next sections looks like the template:
osql -U%1 -P%2 -S%3 -i"xxx.sql" -d%4 >> RolloverErr.txt
When this line is run, the %1, %2, etc. are replaced by the parameters that are passed into the batch file (myuser, mypassword, etc.). For my execution, what is actually run for the table script is:
osql -Umyuser -Pmypassword -Smyserver -i"ProductDetails.sql" -dmydatabase >> RolloverErr.txt
This will connect to myserver using myuser and mypassword. The mydatabase database will be selected, and the commands in the file ProductDetails.sql will be executed, just as if you had typed them in Query Analyzer. This is repeated for each line that begins with osql.
The last section of this file merely pops open notepad with the error file, mainly so I know the script is finished and I can stop answering questions on SQL Server Central :).
Why It Works
The best example I can give is a recent release that I had to build for our main system. We usually release every week, but on this particular week, we needed to release some tools early so our users could enter some data prior to us releasing some additional changes. Of course, I was on vacation at the end of the previous week, and the rest of the development team decided this without me. As a result, I was told at our Monday release meeting that we were releasing two packages, one that afternoon and one a couple of days later (subject to QA approval, of course).
Was I scrambling.
I first made a new folder in VSS for the new date. After I spent a few minutes with the developer figuring out which objects needed to go, I dragged these to the new project using the "Move" option. Once I had moved these objects, I made a copy of the release command file and added it to the new project.
Now, the only tricky part was matching up the items. Since this was a split project, I declined to use my automated tools. I first removed the items that were not going out early from the header (rem) statements. Once this was done, I removed the corresponding calls in the body of the file. At this point, I counted the numbers in each section to ensure that all changes were accounted for. After this, the only cleanup was to edit the original file in reverse: removing the items that were being sent early.
Total time: 15 minutes, most of which was spent counting the numbers of objects of each type.
Conclusions
Why was this easy? (Or why did I think it was?) For one, there are no dependency problems. I do not have to worry about DLLs, installing any software, etc., except for the SQL Server client tools (osql and Query Analyzer), which are on every machine I use. Two, it's portable. Since we release things at night, if I need to be involved, I can zip up the entire project from VSS and email it to myself or copy it to a disk. I then have everything I need to deploy the changes. I can easily send this to my QA person as well. Third, I can run this from anywhere. I don't need to be at my workstation, at the server console, or even on the LAN. As long as I can connect to SQL Server, I can release this package.
I know there are some third-party tools to help manage this. I used one (I won't mention the name) a few years ago, but the VSS integration wasn't very good and it still didn't help with the releases. This has been a great, simple, painless, and reliable method for me to release changes for over a year. In that time, I've used this to release about 70 times (we release once a week, sometimes more).
As always I welcome feedback on this article using the "Your Opinion" button below. Please also rate this article. If you know of a better method or tool, please let me know.
Steve Jones
©dkRanch.net November 2001