January 3, 2009 at 10:10 am
Thanks Barry!
I like it!
CASE replaces an if/then/else block and the ; terminator issues the commands. Clever.
My one complaint is that it might be difficult to write good error handling... I'll need to do more research using sp_executesql with this technique.
Thanks for the pointers.
One more quick question, how do you re-create a nested cursor with this method?
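For context, a minimal sketch of the concatenate-and-execute pattern under discussion: a CASE picks rebuild vs. reorg per row, and the semicolon terminates each generated command so the whole string can run as one batch. The 30% threshold and the LIMITED scan mode here are illustrative assumptions, not taken from the article:

```sql
-- Build one command string from a set-based query instead of looping.
-- Threshold values are illustrative assumptions.
DECLARE @cmd nvarchar(max);
SET @cmd = N'';

SELECT @cmd = @cmd +
    N'ALTER INDEX ' + QUOTENAME(i.name) +
    N' ON ' + QUOTENAME(OBJECT_NAME(i.object_id)) +
    CASE WHEN ps.avg_fragmentation_in_percent > 30
         THEN N' REBUILD;'        -- heavy fragmentation: rebuild
         ELSE N' REORGANIZE;'     -- light fragmentation: reorg
    END + NCHAR(13) + NCHAR(10)
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') ps
JOIN sys.indexes i
  ON i.object_id = ps.object_id AND i.index_id = ps.index_id
WHERE i.name IS NOT NULL
  AND ps.avg_fragmentation_in_percent > 5;

PRINT @cmd;               -- inspect during development
EXEC sp_executesql @cmd;  -- then execute the whole batch at once
```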
~BOT
Craig Outcalt
January 3, 2009 at 10:25 am
Thanks. Yeah, error-handling is a bit of a pain, but I have done it with this approach and on much more difficult ones than this (changing databases, which throws a lot of curve balls into it).
-- RBarryYoung, (302)375-0451, blog: MovingSQL.com, Twitter: @RBarryYoung
Proactive Performance Solutions, Inc. "Performance is our middle name."
January 3, 2009 at 12:18 pm
RBarryYoung (1/3/2009)
Thanks. Yeah, error-handling is a bit of a pain, but I have done it with this approach and on much more difficult ones than this (changing databases, which throws a lot of curve balls into it).
Heh... what an oxymoron... error handling is like buying death insurance... you're betting you're gonna die too early and the life insurance company is betting you're not. If you test data correctly, you never need row level error handling because you already know that all the data you're using is good.
--Jeff Moden
Change is inevitable... Change for the better is not.
January 3, 2009 at 12:42 pm
Oh no, not for this kind of thing. For instance, this is rebuilding or reorging for each fragmented index that it finds, but there is still stuff that can go wrong, either:
1) between when the command is generated and when it actually gets to it on the execution list: Like someone may have dropped the index. Or,
2) stuff that is impractical to check ahead of time, like there's a DDL lock on one of the tables because I've got the Table Wizard open on it and I forgot to close it.
-- RBarryYoung, (302)375-0451, blog: MovingSQL.com, Twitter: @RBarryYoung
Proactive Performance Solutions, Inc. "Performance is our middle name."
January 3, 2009 at 2:14 pm
Yeah... you're right, of course... I didn't think about those particular things... I think it's because I dream of a database where developers won't be dropping indexes out from under the DBA... that there will actually be "change controls" for such actions. Wishful thinking, huh? 😛
I still don't see why people think that the method you used would make error checking all that difficult. It would basically be done the same way as if a loop were running and one of the loops had dynamic SQL that failed. Maybe, I'm missing something.
--Jeff Moden
Change is inevitable... Change for the better is not.
January 3, 2009 at 2:33 pm
In my experience, the difficulty is all in figuring out how to stack and nest the strings during construction so that it comes out the way it would if you had written it directly. It takes some indirect thinking and it is a little unnatural, but not that hard, once you get used to it.
The most important tip I've found for this is to always PRINT @cmd instead of EXEC(@cmd) during development. It makes it a lot easier to get the string just right.
The other tough thing is counting the apostrophes, especially when you have to start nesting your contexts. I start to lose it when I get more than 2 deep and I have to throw anywhere from 3 to 16 apostrophes in a row. Ugh. :crazy:
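To illustrate the apostrophe-counting problem (the queries here are made up for the example): every level of nesting doubles the quote marks, so a literal that has to survive two levels of dynamic SQL already needs runs of four and six apostrophes in the source:

```sql
DECLARE @cmd nvarchar(max);

-- One level deep: two apostrophes in the source produce one in @cmd.
SET @cmd = N'SELECT * FROM sys.databases WHERE name = ''tempdb''';
PRINT @cmd;   -- print instead of executing while developing
-- EXEC(@cmd) once the printed text looks right.

-- Two levels deep: the inner EXEC needs its own quoting layer,
-- so the literal now carries runs of four and six apostrophes.
SET @cmd = N'EXEC(''SELECT * FROM sys.databases WHERE name = ''''tempdb'''''')';
PRINT @cmd;   -- prints: EXEC('SELECT * FROM sys.databases WHERE name = ''tempdb''')
```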
-- RBarryYoung, (302)375-0451, blog: MovingSQL.com, Twitter: @RBarryYoung
Proactive Performance Solutions, Inc. "Performance is our middle name."
January 3, 2009 at 3:32 pm
I guess I do it backwards, then. I write out one "iteration" in regular code and test the willicurs out of it. Then I just do a search-and-replace to double up all apostrophes, add a starting apostrophe and one at the end, do a couple of very simple replacements for the dynamic parts (QUOTENAME can be useful here), and I'm done.
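A small sketch of that workflow (the query and database name are illustrative, not from the post): write and test the static statement first, then double the apostrophes and swap the fixed name for a QUOTENAME-guarded variable:

```sql
-- Step 1: one "iteration" written and tested as plain static code.
SELECT fileid, name, filename FROM master.dbo.sysfiles;

-- Step 2: wrap it in outer quotes and replace the fixed database
-- name with a dynamic part; QUOTENAME guards the identifier.
DECLARE @dbname sysname;
DECLARE @sql nvarchar(max);
SET @dbname = N'master';
SET @sql = N'SELECT fileid, name, filename FROM '
         + QUOTENAME(@dbname) + N'.dbo.sysfiles';
PRINT @sql;   -- verify the generated text matches the tested original
EXEC (@sql);
```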
--Jeff Moden
Change is inevitable... Change for the better is not.
January 5, 2009 at 9:05 am
I dunno if this qualifies as a weird case where you need a cursor or not.
Certainly if this could be done as one big set based query, I'd go for it... but not sure how.
I wrote this little puppy when we needed to shut down a SQL instance with 4000 databases on it, move the databases to 4 separate instances (1000 per instance) and reattach them.
CREATE PROCEDURE [dbo].[usp_attach_dir]
@dirname varchar(max)
AS
SET NOCOUNT ON
/***********************************************
Procedure: usp_attach_dir
Author: Mark Tassin
Date: 11/06/2008
Purpose: To attach all the databases in a directory that are not already attached to the SQL instance
Execution: usp_attach_dir ' '
Outputs: DBCC execution completed. If DBCC printed error messages, contact your system administrator.
The above message should repeat once for every database attached.
Requirements: SQL Server 2005 SP2+
************************************************/
/***** Test Data ************
DECLARE @dirname varchar(max)
set @dirname = 'E:\MSSQL\DATA'
********************************/
DECLARE @cmd nvarchar(max)
DECLARE @filename varchar(max)
DECLARE @dbname sysname
--Test for trailing backslash in the dirname parameter
IF right(@dirname,1) != '\' SET @dirname = @dirname + '\'
-- First we get a list of all the files in the input directory
CREATE TABLE #results(subdir varchar(max),depth int,isfile int)
set @cmd = N'insert into #results exec xp_dirtree ''' + @dirname +''',1,1'
EXEC sp_executesql @cmd
--Remove non-files that get picked up
delete from #results where isfile != 1
--Remove non-SQL data files
delete from #results where right(subdir,4) != '.mdf'
--Append the directory to the filenames
update #results
set subdir = @dirname + subdir
--Remove system database data files that are used by SQL server, but don't get actual database entries in sys.databases
DELETE FROM #results where subdir like '%\distmdl.mdf%' OR subdir like '%\mssqlsystemresource.mdf%'
--Create a table to store the output from sys.files for each db
CREATE TABLE #sysf(
[fileid] [smallint] NULL,
[groupid] [smallint] NULL,
[size] [int] NOT NULL,
[maxsize] [int] NOT NULL,
[growth] [int] NOT NULL,
[status] [int] NULL,
[perf] [int] NULL,
[name] [sysname] NOT NULL,
[filename] [nvarchar](260) NOT NULL,
[dbname] [sysname] DEFAULT DB_Name()
) ON [PRIMARY]
--Get a list of all the files used by each db on the server
exec sp_msforeachdb '
insert into #sysf(fileid,groupid,size,maxsize,growth,status,perf,name,filename,dbname)
select fileid,groupid,size,maxsize,growth,status,perf,name,filename,''?'' from ?.dbo.sysfiles'
--Cursor to attach the dbs not on the server already
DECLARE csr_attachdbs CURSOR FAST_FORWARD FOR
SELECT
a.subdir
FROM #results a
WHERE
NOT EXISTS (SELECT * FROM #sysf b WHERE a.subdir = b.filename)
CREATE TABLE #fileinfo([property] sql_variant NULL, [value] sql_variant NULL)
OPEN csr_attachdbs
FETCH NEXT FROM csr_attachdbs INTO @filename
WHILE @@FETCH_STATUS = 0
BEGIN
SET @cmd = N'DBCC CHECKPRIMARYFILE (N''' + @filename + ''',2)'
INSERT INTO #fileinfo EXEC(@cmd)
SELECT @dbname = cast([value] as sysname)
FROM #fileinfo WHERE cast([property] as varchar)= 'Database name'
SET @cmd = N'CREATE DATABASE ' + QUOTENAME(@dbname) + N' ON (FILENAME=''' + @filename + ''') FOR ATTACH_REBUILD_LOG' -- QUOTENAME guards names with spaces or odd characters
EXEC sp_executesql @cmd
TRUNCATE TABLE #fileinfo
FETCH NEXT FROM csr_attachdbs INTO @filename
END
CLOSE csr_attachdbs
DEALLOCATE csr_attachdbs
DROP TABLE #results
DROP TABLE #sysf
DROP TABLE #fileinfo
Didn't care for using the cursor... but couldn't figure out how to use DBCC CHECKPRIMARYFILE in set based methods.
January 5, 2009 at 9:24 am
I will go on record saying that in my book this is a situation where a cursor is perfectly acceptable if not preferred.
Best,
Kevin G. Boles
SQL Server Consultant
SQL MVP 2007-2012
TheSQLGuru on googles mail service
January 5, 2009 at 9:53 am
You certainly do not need a cursor for this. The same technique that I used above will work here also.
-- RBarryYoung, (302)375-0451, blog: MovingSQL.com, Twitter: @RBarryYoung
Proactive Performance Solutions, Inc. "Performance is our middle name."
January 6, 2009 at 2:48 pm
Jeff Moden (1/1/2009)
I believe that articles like this one serve as a great disservice to anyone new to databases and believe that it's horribly irresponsible of an author to try to bring any credibility to any form of RBAR programming in any RDBMS.
I am greatly opposed to cursors in T-SQL, but I have been told that Oracle's PL/SQL is optimized for cursors and that they often run faster than the set based solution there. Is that correct?
---
Timothy A Wiseman
SQL Blog: http://timothyawiseman.wordpress.com/
January 6, 2009 at 6:22 pm
timothyawiseman (1/6/2009)
Jeff Moden (1/1/2009)
I believe that articles like this one serve as a great disservice to anyone new to databases and believe that it's horribly irresponsible of an author to try to bring any credibility to any form of RBAR programming in any RDBMS.
I am greatly opposed to cursors in T-SQL, but I have been told that Oracle's PL/SQL is optimized for cursors and that they often run faster than the set based solution there. Is that correct?
It's true that cursors in Oracle have been optimized to be pretty fast. The "proper" set based solutions still beat them, though. Now, the real problem with Oracle is that you can't return a result set from Oracle to a GUI in a direct fashion like you can with SQL Server. Instead, you have to write a "reference cursor" in Oracle. Those are really pretty damned fast and that may be where most of the reported cursor speed comes from.
It's been a bit since I've had to write in Oracle (THANK YOU GOD!!!), but the set based solutions I wrote were always faster than the equivalent cursor based solution someone else came up with.
--Jeff Moden
Change is inevitable... Change for the better is not.
January 6, 2009 at 7:04 pm
Extending Jeff's comments a bit, this is true not only for Oracle (PL/SQL) but also for DB2 (SQL PL).
Like Oracle, DB2 forces you to use cursors in situations where we would never even consider them.
Besides, DB2's official documentation has lots of examples that use cursors for no good reason.
I have to say this really annoyed me when I started developing code for DB2.
January 7, 2009 at 8:45 am
Thanks Jeff and Wagner.
I took a class on database theory that used Oracle for all examples, but for the most part I have used SQL Server. Even in that class, the focus was on the theory such as normalization and standard data storage aspects more than any practical design.
---
Timothy A Wiseman
SQL Blog: http://timothyawiseman.wordpress.com/
January 7, 2009 at 8:52 am
mtassin (1/5/2009)
I dunno if this qualifies as a weird case where you need a cursor or not. Certainly if this could be done as one big set based query, I'd go for it... but not sure how.
I wrote this little puppy when we needed to shut down a SQL instance with 4000 databases on it, move the databases to 4 separate instances (1000 per instance) and reattach them.
Didn't care for using the cursor... but couldn't figure out how to use DBCC CHECKPRIMARYFILE in set based methods.
Here is something similar I wrote a while ago for a similar (though much smaller, the most I've used it for is about 10 databases, also less general since we have naming conventions in place so I could safely assume certain things about the names) situation:
/* written by Tim Wiseman 11 Jun 08
This attaches every database in the path folder to the server instance.
It makes some assumptions about the name format of the database files,
but should catch the most common cases.
Useful for rebuilding servers. */
sp_configure 'show advanced options', 1
GO
Reconfigure
GO
sp_configure 'xp_cmdshell', 1
GO
Reconfigure
GO
declare @path nvarchar(250)
declare @cmd nvarchar(250)
Declare @dbname nvarchar(250)
declare @datafile nvarchar(250)
declare @logfile nvarchar(250)
Declare @sql varchar (8000)
set @path = 'D:\DataFiles\' --Change this to reflect the actual path
declare @FileList Table
(FileN varchar(250))
set @cmd = N'dir ' + @path + ' /b'
insert into @FileList
(FileN)
exec xp_cmdshell @cmd
While exists (select * from @FileList where FileN like '%.mdf')
Begin
select @dbname = replace(replace(replace(FileN, '.mdf', ''), '_data', ''), '.mdb', '') from @FileList
where FileN like '%.mdf' OR FileN like '%.mdb'
select @datafile = FileN from @FileList
where FileN like @dbname + '%.md%'
select @logfile = FileN from @filelist
where FileN like @dbname + '%.ldf'
if not exists (select name from sys.databases where name like @dbname)
begin
set @sql = '
CREATE DATABASE [' + @Dbname + '] ON
( FILENAME = N''' + @Path + @datafile+''' ),
( FILENAME = N'''+@Path + @logfile +''' )
FOR ATTACH
'
print @sql
Exec (@sql)
End
delete from @FileList
where FileN like @dbname + '%'
end --End while loop
select * from @filelist
This does use a loop, but it avoids explicitly using a cursor.
If you wanted a truly set based solution, this could be rewritten so it generates all the needed commands, separated by semicolons, as one enormous @sql string and then simply executes that string.
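A sketch of that variant, assuming @FileList has already been populated as in the script above (the fixed path and the name-derivation rule are illustrative assumptions):

```sql
-- Concatenate one CREATE DATABASE ... FOR ATTACH per .mdf into a single
-- semicolon-separated batch, then execute it once.
-- Assumes @FileList (FileN varchar(250)) exists and is populated.
DECLARE @sql nvarchar(max);
SET @sql = N'';

SELECT @sql = @sql +
    N'CREATE DATABASE ' + QUOTENAME(REPLACE(FileN, '.mdf', '')) +
    N' ON (FILENAME = N''D:\DataFiles\' + FileN + N''')' +
    N' FOR ATTACH_REBUILD_LOG;' + NCHAR(13) + NCHAR(10)
FROM @FileList
WHERE FileN LIKE '%.mdf';

PRINT @sql;               -- inspect the generated batch first
EXEC sp_executesql @sql;  -- one execution for every database
```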
---
Timothy A Wiseman
SQL Blog: http://timothyawiseman.wordpress.com/