May 6, 2010 at 7:41 am
irozenberg (5/6/2010)
Just a small suggestion - please consider EXTENDED properties for DB objects (including Stored Procedures); by using them in a key-value fashion you could avoid a lot of text processing. Regards from Down Under.
I'm not sure what you mean?
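(If the suggestion is to hang key-value pairs off the objects themselves instead of parsing text, a minimal sketch via SMO might look like the following. This is untested, and every name in it - server, database, procedure and property - is invented for illustration.)
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.Smo") | Out-Null
$server = New-Object Microsoft.SqlServer.Management.Smo.Server "MYSERVER"
$proc   = $server.Databases["MyDatabase"].StoredProcedures["myProc", "dbo"]
# Store a key-value pair on the procedure itself...
$prop = New-Object Microsoft.SqlServer.Management.Smo.ExtendedProperty($proc, "Version", "1.0.3")
$prop.Create()
# ...and read it back later without any text processing.
$proc.ExtendedProperties["Version"].Value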
May 6, 2010 at 7:44 am
jcaradocdavies (5/6/2010)
Just a small concern... As far as experience and memory serve, sp_helptext is not a reliable source for recreating SQL Server objects. SQL Management Objects (SMO) are the preferred source.
Changes to an object and/or metadata may render sp_helptext outdated - the script used to create the original object may not reflect the object in its current state. I seem to remember a case where the sp_helptext record was blatantly incorrect after a (?) designer edit.
Does anyone have a concise view of when this may occur, and whether stored procedures are for practical purposes immune to these cases?
The only problem I've found is that on rare occasions it does odd things with line breaks.
These become obvious when you try to run the script, as the script will fail.
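(One way around the line-break problem is to pull the definition via SMO rather than sp_helptext. A minimal, untested sketch; the server and object names are placeholders.)
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.Smo") | Out-Null
$server = New-Object Microsoft.SqlServer.Management.Smo.Server "MYSERVER"
$proc   = $server.Databases["MyDatabase"].StoredProcedures["myProc", "dbo"]
$proc.Script() | Out-File "C:\scripts\dbo.myProc.sql"   # definition scripted from metadata, line breaks intact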
May 6, 2010 at 7:46 am
mike.renwick-894639 (5/6/2010)
I also had similar issues with sp_helptext and eventually found a way of scripting objects using Microsoft.SqlServer.Management.Smo.Scripter in PowerShell. This has the advantage that it handles all types of objects, including everything from logins to linked servers, mail profiles, etc., and can take advantage of the easy iteration possible in PowerShell.
I've written a skeleton (and, disclaimer: it is not warranted in any way) PowerShell script that captures relevant source/schema stuff in SQL Server, if anyone's interested. Obviously it can be adjusted to taste for extended properties etc. (which I turn off by default) and should be tweaked to expose specific properties relevant to only certain objects.
Link here
I also have a wrapper script that iteratively runs it for each database on the server, and a server-objects script that captures mail profiles, accounts etc, for anyone interested.
Regards,
Mike
The advantage of the method I've used is that it also creates a standard header for the script, and in the version I use I add a call to another stored procedure that writes to a ChangeCatalogue table with details of when the stored procedure was installed, the version, and who ran the script.
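(For anyone who wants to experiment, here is a rough, untested sketch of that kind of setup in PowerShell. The server, database, output path and the usp_LogChange audit procedure are all placeholders, not the actual script described above.)
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.Smo") | Out-Null
$server   = New-Object Microsoft.SqlServer.Management.Smo.Server "MYSERVER"
$db       = $server.Databases["MyDatabase"]
$scripter = New-Object Microsoft.SqlServer.Management.Smo.Scripter $server
$scripter.Options.IncludeHeaders = $true   # adds a standard scripted-by/date header
foreach ($proc in $db.StoredProcedures | Where-Object { -not $_.IsSystemObject }) {
    $file = "C:\scripts\$($proc.Schema).$($proc.Name).sql"
    $scripter.Script(@($proc)) | Out-File $file
    # Hypothetical audit call of the kind described above.
    "GO" | Out-File $file -Append
    "EXEC dbo.usp_LogChange '$($proc.Name)', '1.0', SUSER_SNAME()" | Out-File $file -Append
}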
May 6, 2010 at 7:48 am
mike.renwick-894639 (5/6/2010)
I personally haven't found a great deal of info on how to script objects using SMO, which is why I think people end up doing slightly scary things like querying syscomments and other arcane solutions. From the snippets on the net we've been able to sort of work SMO out, but it always seems to require a few extra steps of understanding to make use of - e.g. SMO is seemingly a C# library, accessible from PowerShell, but it could equally be rolled into a CLR stored procedure. I've hated having to make uninformed choices as to where to run this sort of stuff from, or how to architect it to "best practice" quality standards.
Does anyone have a book to recommend that delves into the powershell scripting side of sql server more?
I don't know much about SMO either. I think it's something I'll look into though. 🙂
May 6, 2010 at 7:54 am
Phil Factor (5/6/2010)
anything that involves a table schema change for example kind of makes the whole approach a bit complex and prone to problems
Right on the nail. If you can crack the problem of source control for tables, then you're probably home and dry. This is a problem that is ridiculously hard. Tables don't, and most certainly can't, have a source of the same type as a C# source, or a stored procedure. You can't compare tables merely from scripts. I can generate the same table using scripts that are really quite different in the way that they represent the table and its associated components, and I can reverse-engineer a table-build script in a number of different ways. The only effective way to detect a difference between two databases is to compare the metadata.
Tables have a whole lot of dependent columns, indexes, constraints and triggers. How do you relate this lot together in the source-control model? Is a table, together with all its dependent components, one single object? If so, how come you can change it without having to check out, or 'reverse-engineer', the 'source'? How do you deal with something as potentially complex as a trigger or check constraint, which surely needs a separate identity in source control? (...and so on, and so on...)
So often, folks move over from traditional programming and wonder why source control is all so different with SQL databases. It can be done, but it takes a great deal of head-scratching to get it right.
Yes, that's one of the tougher areas of deployment of changes. We have a sequentially numbered set of 'incremental' scripts for DDL changes. All scripts are rerunnable, so for example if a new column is to be added, the script checks for existence before trying to add the column. This means the script can be rerun without causing errors, e.g.:
IF NOT EXISTS(SELECT 1
              FROM INFORMATION_SCHEMA.COLUMNS
              WHERE TABLE_SCHEMA = 'dbo'
              AND TABLE_NAME = 'myTable'
              AND COLUMN_NAME = 'mycol1')
BEGIN
    ALTER TABLE dbo.myTable
    ADD mycol1 int NULL
    -- ... any further changes for this increment
END
This means that if the incremental files are all run in the right order, the correct changes to the database should be made.
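(As an aside, a small untested PowerShell sketch of a runner for such numbered incremental files - say 0001_add_mycol1.sql, 0002_..., sorted by name. It assumes the SQL 2008-era snap-in for Invoke-Sqlcmd; server, database and folder are invented.)
Add-PSSnapin SqlServerCmdletSnapin100 -ErrorAction SilentlyContinue   # provides Invoke-Sqlcmd
Get-ChildItem "C:\scripts\incremental\*.sql" | Sort-Object Name | ForEach-Object {
    Invoke-Sqlcmd -ServerInstance "MYSERVER" -Database "MyDatabase" -InputFile $_.FullName
}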
May 6, 2010 at 7:55 am
The built-in Generate Scripts wizard was where we started from - we just got tired of having to manually set the same parameters each time, and the fact that it is absolutely impossible to automate. It's certainly better than nothing, but SMO is the answer, IMO.
You also have to think about what the purpose of version control is in your business.
Typically the drivers for VC might be:
- Enable concurrent development
- Track changes to the source base
- To "undo" a damaging change
VC as a broad concept in the SQL context is therefore covered by a few different parts: backup and recovery, for one. I understand the futility of trying to roll back a schema change - if something is so fundamentally screwed that you want to go back to a previous version, you've probably corrupted data too, so your only option is a restore.
Equally, concurrent development is a relatively simple goal to achieve in SQL Server as long as your routines don't specify the database explicitly.
For us then, the driver for VC was to be able to capture the differences between a production schema and a development one, for example, or working out how to roll production changes into an unreleased development build, and to add some accountability to that process.
I can't see how the final integration wouldn't be a manual process anyway, and I wouldn't really trust anything that attempted to do this for me, so the best solution for us was to create a form of "state" capture: schema capture via SMO and some simple table data capture for static/reference data changes. From there it's relatively easy to keep control.
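(For the table-data side of that state capture, SMO can emit INSERT statements too. A hedged, untested sketch with invented names - note that EnumScript, not Script, must be used when scripting data.)
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.Smo") | Out-Null
$server   = New-Object Microsoft.SqlServer.Management.Smo.Server "MYSERVER"
$scripter = New-Object Microsoft.SqlServer.Management.Smo.Scripter $server
$scripter.Options.ScriptSchema = $false   # schema is captured separately
$scripter.Options.ScriptData   = $true    # emit INSERT statements for the rows
$table = $server.Databases["MyDatabase"].Tables["StatusCodes", "dbo"]
$scripter.EnumScript(@($table)) | Out-File "C:\scripts\data\dbo.StatusCodes.sql"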
Thanks for the book tip Grant.
May 6, 2010 at 8:10 am
This just seems silly overall. Why wouldn't you just create a project in SSMS and then save your scripts into the solution/project? The solution can then be easily added into your version control system. We use Subversion for this, but I'm sure any would work. Following the logic of this article makes me think that maybe I should be editing my Reporting Services .rdl files with Notepad instead of Visual Studio.
May 6, 2010 at 8:10 am
I've been using source control (currently Perforce) for ALL of my database development (tables, FK constraints, functions, stored procedures, views, triggers, etc.). I.e., everything. I also handle new database schemas as well as upgrades to existing schemas.
This methodology has worked very well for both SQL Server and Oracle and produces a nice clean checkin/checkout environment, a proper "build" process for the database schema, and a reproducible script (as part of the build process) that QA, etc. can use to produce consistent results.
See the following links to earlier threads where I describe the process.
Summary of the process:
http://www.sqlservercentral.com/Forums/FindPost474053.aspx
Build script:
http://www.sqlservercentral.com/Forums/FindPost833145.aspx
May 6, 2010 at 8:32 am
We use RedGate SQL Compare to create all the scripts for a DB and this structure is then held in Subversion.
We then develop using QA (or whatever suits the developer).
When we need to commit back to the branch, the first step is to compare the script folder with the development DB and synchronise any scripts that need it. Then you commit to Subversion in the normal way.
We find this works well for our needs at the moment.
May 6, 2010 at 8:35 am
bphipps-931675 (5/6/2010)
This just seems silly overall. Why wouldn't you just create a project in SSMS and then save your scripts into the solution/project? The solution can then be easily added into your version control system. We use Subversion for this, but I'm sure any would work. Following the logic of this article makes me think that maybe I should be editing my Reporting Services .rdl files with Notepad instead of Visual Studio.
You still need to save your scripts to files somehow and that is one of the main purposes of the article.
May 6, 2010 at 8:38 am
I kept .dtsx and .rdl files out of this discussion - they already sit very naturally as files in a standard source repository. Obviously there's no point scripting them out of the database if you have the source already.
May 6, 2010 at 8:41 am
mike.renwick-894639 (5/6/2010)
I kept .dtsx and .rdl files out of this discussion - they already sit very naturally as files in a standard source repository. Obviously there's no point scripting them out of the database if you have the source already.
I agree. The Report .rdl files, along with the associated project file are treated just like any other VS project. The output of the build is just the .rdl files to be deployed.
May 6, 2010 at 8:45 am
But SSMS inherently gives you a save option. All you have to do is save them like any other file to your hard drive. I really don't understand why you have to go through all these extra steps to do that.
May 6, 2010 at 8:48 am
I'm not sure I understand? How do you save a stored procedure in SSMS?
Do you mean by clicking on "modify" or "script as create to new query window" to get the script up and then saving that?
May 6, 2010 at 8:51 am
bphipps-931675 (5/6/2010)
But SSMS inherently gives you a save option. All you have to do is save them like any other file to your hard drive. I really don't understand why you have to go through all these extra steps to do that.
If you have over 600 procedures to save, that can be quite a chore. Also, this adds a header to the file and scripts the procedure so that it is only created if it does not already exist. In our case this is important, as if we were to drop and recreate the procedure we would lose any rights users have assigned to the procedures.
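(SMO's scripting options can generate exactly that style of script. A minimal, untested sketch with invented names: IncludeIfNotExists wraps the CREATE in an existence check, and Permissions scripts the GRANTs so existing rights aren't lost.)
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.Smo") | Out-Null
$server   = New-Object Microsoft.SqlServer.Management.Smo.Server "MYSERVER"
$scripter = New-Object Microsoft.SqlServer.Management.Smo.Scripter $server
$scripter.Options.IncludeIfNotExists = $true   # only create when the procedure is missing
$scripter.Options.Permissions        = $true   # include GRANT/DENY statements
$proc = $server.Databases["MyDatabase"].StoredProcedures["myProc", "dbo"]
$scripter.Script(@($proc)) | Out-File "C:\scripts\dbo.myProc.sql"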