March 27, 2011 at 9:35 pm
Comments posted to this topic are about the item Changing Text in Multiple Stored Procedures From a Stored Procedure
March 27, 2011 at 10:50 pm
Thanks for the post - tried this out and it works great. I always change any CURSORs I find in code to a WHILE loop though. There aren't many times when you can't do this - you just need a unique id you can leverage. In this example you can use the dbid from sysdatabases and then process each database in turn. The key is selecting TOP 1 in dbid order so the databases are processed in order without missing any out. See the code below, where the generation step is replaced with a WHILE loop.
DECLARE @DB varchar(200), @sql varchar(2000), @dbid INT
SET @dbid = 0

-- Walk the user databases in dbid order, one at a time, without a cursor
WHILE EXISTS (SELECT 1 FROM master.dbo.sysdatabases
              WHERE name NOT IN ('master','msdb','model','tempdb')
                AND dbid > @dbid)
BEGIN
    -- Pick the next database in dbid order
    SELECT TOP 1 @dbid = dbid, @DB = name
    FROM master.dbo.sysdatabases
    WHERE name NOT IN ('master','msdb','model','tempdb')
      AND dbid > @dbid
    ORDER BY dbid

    -- Same generation step as before, built per database
    SET @sql = 'WITH id_cte (id)
    AS
    (SELECT id FROM ' + QUOTENAME(@DB) + '.sys.sysobjects
     WHERE NOT name LIKE ''sp_MSsync%''
    )
    SELECT ' + QUOTENAME(@DB,'''') + ', id, colid, ctext
    FROM ' + QUOTENAME(@DB) + '.sys.syscomments
    WHERE id IN (SELECT DISTINCT id
                 FROM ' + QUOTENAME(@DB) + '.sys.syscomments
                 WHERE [text] LIKE ''%admission%'')
      AND id IN (SELECT id FROM id_cte)'

    INSERT INTO Identity_SProcs (dbName, id, colid, spText)
    EXEC (@sql)
END
March 28, 2011 at 1:56 am
Thanks for sharing 🙂
March 28, 2011 at 2:50 am
Sorry, but I am not happy with this solution at all.
My view is that any changes to procedures must be auditable. Therefore, first off I would have identified the procedures that use @@identity.
Select object_name(id) from syscomments where text like '%@@identity%'
But my aim is to create an auditable rollout script, so with the same query I would then amend it to:
Select 'Sp_Helptext ' + object_name(id) as result from syscomments where text like '%@@identity%'
You will also need to create the drop commands for later
Select 'Drop Procedure ' + object_name(id) as result from syscomments where text like '%@@identity%'
Then put the results of the drop query at the top of your script, along with the USE database statement and any other rollout boilerplate you normally include.
When you run the sp_helptext query you should output the results to a text file. Effectively you are now creating a basic rollout script.
If you have chosen to leave the headers in the query output, you only need to search and replace the column header 'Text' with 'GO', and remove the string '------------------' (or whatever is being used to underline the heading).
Once you have this script ready to execute - and of course you would never do anything like this against a live server at this stage, or without a backup - you can then perform your search and replace for @@IDENTITY, replacing it with SCOPE_IDENTITY() in the file's contents.
You have now created the script for testing. It should execute cleanly; if not, fix the errors that occur.
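Once the amended procedures have been rolled out, the same search can be re-run as a sanity check - a minimal sketch (note that encrypted procedures will not show up here):
Select object_name(id) as result from syscomments where text like '%@@identity%'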
You now have a script that can be placed into your version control system, along with notes on what is being changed and why - that is vital for the developers coming after you. You then release your changes as you would any other database update.
My point here is that even if you must start with the database as your repository for the code - and I can understand why that is, given that there are thousands of procedures to edit - there is no reason why you should not get back into the normal development flow at the earliest opportunity.
Ok, Discuss!
March 28, 2011 at 4:04 am
@Andy.carter 89422: although I'm not a fan of cursors at all and will always try to avoid them in favour of a SET-based solution, I sincerely think changing a cursor into a WHILE loop is a bad idea. The WHILE-loop approach (sometimes called 'a poor man's cursor') hardly ever brings benefits but can potentially cause performance to drop noticeably. It all depends a bit on the amount of data to be fetched, as well as the layout and search-ability of said data. Hugo Kornelis did some research on this subject here.
In my real-world tests I've never had a situation where the WHILE solution was faster than the CURSOR solution. I know cursors are considered 'evil', but in fact it's not the cursors that are evil, it's their usage (line by line processing) that is the root of the problem. "Hiding" the word 'cursor' using a WHILE loop does not fix this but rather makes things more complicated and prone to bugs over time.
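For comparison, a minimal sketch of what the cursor version of the earlier database loop might look like (a LOCAL FAST_FORWARD cursor; the PRINT is just a placeholder for building and executing the per-database dynamic SQL):

DECLARE @DB sysname

DECLARE db_cur CURSOR LOCAL FAST_FORWARD FOR
    SELECT name
    FROM master.dbo.sysdatabases
    WHERE name NOT IN ('master','msdb','model','tempdb')
    ORDER BY dbid

OPEN db_cur
FETCH NEXT FROM db_cur INTO @DB

WHILE @@FETCH_STATUS = 0
BEGIN
    PRINT @DB   -- build and EXEC the per-database dynamic SQL here
    FETCH NEXT FROM db_cur INTO @DB
END

CLOSE db_cur
DEALLOCATE db_cur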
BTW, totally off-topic, but I also think it's a good idea to have the INSERT inside the @sql variable instead of before the EXEC() statement. This allows the optimizer to see the INSERT and the SELECT as one big operation; otherwise it sees them as sequential operations, doing whatever is in the EXEC first and then feeding those results to the INSERT in a separate operation. As an added bonus it avoids the dreaded "An INSERT EXEC statement cannot be nested" error when you (accidentally) try to nest this kind of statement, e.g. when calling a stored procedure that makes use of this and whose output you're trying to catch.
Sadly 'Show Execution Plan' does not show the INSERT ... EXEC() approach; for reasons I don't know, the entire statement seems to be omitted from the query plan completely (SQL 2008 here). You can see it by analysing the results from STATISTICS CPU & TIME, but it takes a bit of puzzling to see that the operations are split and (as expected) add up to more CPU time, more elapsed time and more IO.
Sample code (EDIT: BTW, yes, I know the code is not 100% correct in relation to the article - it only finds the "lines" of code that have an 'm' in them, not the entire code of the object. In fact I started off from sql_modules, realised people might want to run this on older versions too, and then forgot to adapt, sorry):
SET STATISTICS IO ON
SET STATISTICS TIME ON
DECLARE @sql nvarchar(max)
DECLARE @DB sysname
SELECT @DB = 'msdb'
-- Approach 1: the INSERT is inside the dynamic SQL, so the INSERT and SELECT are one operation
SELECT @sql =
'INSERT INTO Identity_SProcs(dbName,id,colid,spText)
SELECT ' + QUOTENAME(@DB,'''') + ', c.id, c.colid, c.ctext
FROM ' + QUOTENAME(@DB) + '..syscomments c
JOIN ' + QUOTENAME(@DB) + '..sysobjects o
ON o.id = c.id
AND o.[name] NOT LIKE ''sp_MSsync%''
AND o.xtype = ''P''
WHERE c.[text] LIKE ''%m%'''
EXEC (@SQL)
TRUNCATE TABLE Identity_SProcs

-- Approach 2: INSERT ... EXEC, so the SELECT and the INSERT are separate operations
SELECT @sql = '
SELECT ' + QUOTENAME(@DB,'''') + ', c.id, c.colid, c.ctext
FROM ' + QUOTENAME(@DB) + '..syscomments c
JOIN ' + QUOTENAME(@DB) + '..sysobjects o
ON o.id = c.id
AND o.xtype = ''P''
AND o.[name] NOT LIKE ''sp_MSsync%''
WHERE c.[text] LIKE ''%m%'''
INSERT INTO Identity_SProcs(dbName,id,colid,spText)
EXEC (@SQL)
which then results in
Table 'Identity_SProcs'. Scan count 0, logical reads 925, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Worktable'. Scan count 1, logical reads 3268, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'sysobjvalues'. Scan count 1, logical reads 293, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'syscolpars'. Scan count 1, logical reads 87, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'sysschobjs'. Scan count 2, logical reads 48, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
SQL Server Execution Times:
CPU time = 109 ms, elapsed time = 137 ms.
vs
SQL Server parse and compile time:
CPU time = 0 ms, elapsed time = 0 ms.
Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'sysobjvalues'. Scan count 1, logical reads 293, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'syscolpars'. Scan count 1, logical reads 87, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'sysschobjs'. Scan count 2, logical reads 48, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
SQL Server Execution Times:
CPU time = 47 ms, elapsed time = 55 ms.
Table 'Identity_SProcs'. Scan count 0, logical reads 919, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Worktable'. Scan count 1, logical reads 5146, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
SQL Server Execution Times:
CPU time = 78 ms, elapsed time = 98 ms.
March 28, 2011 at 4:30 am
lancebeacroft (3/28/2011)
Sorry, but I am not happy with this solution at all. My view is that any changes to procedures must be auditable. Therefore, first off I would have identified the procedures that use @@identity.
I'm of a similar mindset.
Visual Studio's 'Replace in Files' command could be used to replace @@IDENTITY with SCOPE_IDENTITY() across all of the relevant script files; Red-Gate SQL Compare could then compare the 'before' and 'after' script files to generate synchronisation scripts.
Much less risky IMHO, plus you will also capture occurrences of @@IDENTITY in any objects encrypted in the database.
Chris
March 28, 2011 at 5:33 am
We have to do something very similar, and I like some aspects of the OP's approach, but I'd like to share some experience, having done it daily for the last five years or so:-
- The word "CREATE" can occur in a lot of places, so you have to make sure that you replace "CREATE PROC" with "ALTER PROC".
- Because there can be an arbitrary number of spaces between CREATE and PROC, you have to use another fine tip from this very site to quickly reduce CREATE(\s+?)PROC to CREATE\sPROC (to use a regexpism) - see the sketch after this list.
- If you have RENAMED the stored procedure, then you have to be careful to re-execute the stored procedure using the NEW name, not the old. SQL Server does some funky re-writing in this case.
- Unless you are careful to script them out, the "SET" lines at the top and tail of the procedure will be lost, so doing this can change the ANSI_NULLS and other settings that were present when the SP was first created.
- General note: there's an obvious problem with replacing common text. You could easily shred your DB by doing a blanket replace of "CREATE" with "ALTER" across the board... @CreatedDate would become @AlterdDate, for example. So try it out on a test DB first!
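A minimal sketch of the whitespace point above, assuming only spaces (not tabs or line breaks) separate CREATE from PROC, and using a hypothetical @def variable holding the procedure text:

DECLARE @def nvarchar(max)
SET @def = N'CREATE      PROCEDURE dbo.Demo AS SELECT GETDATE() AS CreatedDate'

-- Squeeze the run of spaces after CREATE down to a single space
WHILE CHARINDEX(N'CREATE  ', @def) > 0   -- CREATE followed by two spaces
    SET @def = REPLACE(@def, N'CREATE  ', N'CREATE ')

-- Now a single, unambiguous replace does the job, and names like
-- @CreatedDate are left alone because a bare "CREATE" is never touched
SET @def = REPLACE(@def, N'CREATE PROC', N'ALTER PROC')

PRINT @def   -- ALTER PROCEDURE dbo.Demo AS SELECT GETDATE() AS CreatedDate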
Hope this helps,
N.
March 28, 2011 at 5:40 am
With a little more research you would have found sys.sql_modules, which has the advantage of the definition column being nvarchar(max), so sprocs > 8k do not need to be reassembled.
As posted, your solution will not work in SQL 2k anyway, since you use varbinary(max).
Also, I do not understand why you are using varbinary instead of nvarchar; then the built-in REPLACE function would work as expected.
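For what it's worth, a minimal sketch of the sys.sql_modules route (SQL 2005 and later only). Because definition is nvarchar(max), REPLACE works directly and long procedures are not split across rows; note the match and the REPLACE are only case-insensitive under a case-insensitive collation, and encrypted objects have a NULL definition so they will not appear:

SELECT OBJECT_SCHEMA_NAME(m.object_id) AS schema_name,
       OBJECT_NAME(m.object_id)        AS proc_name,
       REPLACE(m.definition, '@@IDENTITY', 'SCOPE_IDENTITY()') AS amended_definition
FROM sys.sql_modules AS m
JOIN sys.objects AS o ON o.object_id = m.object_id
WHERE o.type = 'P'
  AND m.definition LIKE '%@@IDENTITY%'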
All that being said: overall kudos for showing how a smart DBA can simplify complex tasks using SQL.
"Real DBAs don't write code, they write code that writes code for them"
March 28, 2011 at 5:47 am
Thanks Nick,
My problem with the solution here is that the DBA is changing the code without the advantage of ALTER or DROP/CREATE PROCEDURE. Would SQL Server then regenerate the query plan for these amended procedures?
I can't think of a situation where I would use the poster's solution. It feels somewhat 'cowboy' and slapdash for my liking.
L
March 28, 2011 at 6:03 am
As for doing things this way versus via scripts and source control: I think the idea of making sure you get all the @@IDENTITY usages this way isn't all too bad. It has a couple of pitfalls though. For starters, you'll miss the encrypted objects regardless of the implementation. Secondly, via syscomments you risk missing those that have @@ID at the end of one row and ENTITY at the start of the next (if you catch my drift), and then there's the risk of case-sensitivity (or not).
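As a quick check for that first pitfall, something like this (a minimal sketch, SQL 2005+) lists the modules that any text-based search will silently skip:

-- Encrypted modules have a NULL definition in sys.sql_modules, so they are
-- invisible to any syscomments / sql_modules text search
SELECT OBJECT_SCHEMA_NAME(object_id) AS schema_name,
       OBJECT_NAME(object_id)        AS object_name
FROM sys.sql_modules
WHERE definition IS NULL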
Personally I would rather go over all the source scripts, update as needed and roll out again. Once finished with that, run the poster's script again to identify any stray @@IDENTITY usages, and either remove them from the database (since they are obviously not under source control and thus likely "illegal" or "obsolete"), or add and fix them in the source control system and redeploy them to the databases again.
I guess it's a matter of what you're used to ... I've also done some 'the db is the source system' projects in the past, but I've been burnt enough to realise the value of a source-control system 😀 (or in fact it was closer to :crying: at the time =)
March 28, 2011 at 7:50 am
I think there's a great SQL search tool that does that, and even better.
March 28, 2011 at 8:07 am
lancebeacroft (3/28/2011)
My problem with the solution here is that the DBA is changing the code without the advantage of ALTER or DROP/CREATE PROCEDURE. Would SQL Server then regenerate the query plan for these amended procedures?
Article
replaces CREATE PROCEDURE with ALTER PROCEDURE. It then prints the corrected stored procedure to the message window.
So the process just generates the text for the scripts that alter the procedures, which means it is using an ALTER on the proc, so the plan will be regenerated.
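(As an aside, if you ever did want a fresh plan without re-running the ALTER, sp_recompile marks a procedure for recompilation on its next execution - a minimal sketch with a hypothetical procedure name:

EXEC sp_recompile N'dbo.MyAmendedProc'   -- dbo.MyAmendedProc is just an example name
)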
lancebeacroft (3/28/2011)
I can't think of a situation where I would use the posters solution. It feels somewhat 'cowboy' and slap dash for my liking.
I do kind of agree with this. Far better to have all the scripts in source control and do the find/replace in those, then compare the versions of the files to make sure nothing else got mucked with. However, Brian was likely in a situation where that wasn't possible, and this is a good solution for situations like that.
March 28, 2011 at 8:10 am
Ahh, now we are in a different scenario. What or who would stop you adopting best practice?
March 28, 2011 at 8:15 am
I use something similar for searching all stored procedures. I run into a problem where the full procedure does not fit in the syscomments field. Will this work on really long procedures?
March 28, 2011 at 8:31 am
CornholioDBA (3/28/2011)
I use something similar for searching all stored procedures. I run into a problem where the full procedure does not fit in the syscomments field. Will this work on really long procedures?
You could use sys.sql_modules, which has the SQL code in one big nvarchar(max) definition column...
Or if you're on an older version of MSSQL you'll need to be creative in walking through the records and sticking the ends and starts of the fragments together, to make sure you catch the situation where the string you're searching for is split across two rows - see the rough sketch below.
(a bit of a puzzle, might be fun exercise for a sql-challenge =)
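Something like this (a rough sketch, assuming the search term is shorter than 100 characters, and run in addition to the plain per-row LIKE search) covers the case where the term straddles the 4000-character boundary between two syscomments rows:

-- Join each syscomments fragment to the next one for the same object and
-- search across the seam
SELECT DISTINCT OBJECT_NAME(c1.id) AS proc_name
FROM syscomments AS c1
JOIN syscomments AS c2
  ON  c2.id = c1.id
  AND c2.colid = c1.colid + 1
WHERE RIGHT(c1.[text], 100) + LEFT(c2.[text], 100) LIKE '%@@identity%'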
Otherwise the system is quite sound for searching IMHO. Just remember that encrypted objects will not be included in your search.