May 19, 2009 at 5:17 pm
Hi Jeff.....I just read your solution regarding the sequence tables. Very good explanation. I have one minor concern (very minor). You have indicated having the transaction starting after retrieving the sequence values and I see that is to help avoid the deadlocking that may occur while the transaction is updating the table (could take a while to get access to the sequence table if you are updating several thousand rows).
Leaving the sequence table update out of the main transaction could potentially leave gaps in your IDs should the updates of the core data fail. Correct? e.g. Say you reserve IDs 1001-2000. You then go to update the core tables and the transaction fails.....the rollback will occur on the main tables but not on the sequence table, meaning the next time you go to do an update your sequence will start at 2001. Your main tables would then not have any IDs in the 1001-2000 range.
Now with that said, in many cases it wouldn't make any difference whether you lost IDs 1001-2000 or not, but if it did matter I suspect you would have to bring the sequence table update (stored proc call) into the transaction.
May 19, 2009 at 8:09 pm
Kevin Rathgeber (5/19/2009)
Hi Jeff.....I just read your solution regarding the sequence tables. Very good explanation. I have one minor concern (very minor). You have indicated having the transaction starting after retrieving the sequence values and I see that is to help avoid the deadlocking that may occur while the transaction is updating the table (could take a while to get access to the sequence table if you are updating several thousand rows).
Leaving the sequence table update out of the main transaction could potentially leave gaps in your IDs should the updates of the core data fail. Correct? e.g. Say you reserve IDs 1001-2000. You then go to update the core tables and the transaction fails.....the rollback will occur on the main tables but not on the sequence table, meaning the next time you go to do an update your sequence will start at 2001. Your main tables would then not have any IDs in the 1001-2000 range.
Now with that said, in many cases it wouldn't make any difference whether you lost IDs 1001-2000 or not, but if it did matter I suspect you would have to bring the sequence table update (stored proc call) into the transaction.
Correct... Point well stated and well taken. If you absolutely need a sequence to have no gaps, then you could include the 3-operand update to the sequence table in the transaction, and the method I used (the 3-operand update) will certainly reduce the possibility of deadlocks to very near zero over the lifetime of the table.
Still, as you inferred, preserving gapless sequences is normally a mistake of some sort and I'd take a very cold hard look at whether or not that actually needs to be done. If it does, then I'd take the extra step of prevalidating the data before I made the call to get the sequence numbers and, hopefully, prevent any chance of a rollback. Actually, I write code that way pretty much all the time because programming by exception and rollbacks is really expensive in any environment. I think that things like Try/Catch have made some folks real lazy about validating data. It all falls into the category of "see what sticks" and promotes the use of RBAR for single row "Tries". 😀
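For anyone following along, here's a minimal sketch of that 3-operand update pattern. The table and column names (dbo.Sequence, NextID) are just placeholders; adjust them to your own sequence table.
DECLARE @RowsNeeded INT
DECLARE @LastID INT
SET @RowsNeeded = 1000  -- size of the ID block to reserve
-- The variable, the column, and the increment all live in one UPDATE,
-- so the block is reserved atomically and the lock is held only briefly.
UPDATE dbo.Sequence
   SET @LastID = NextID = NextID + @RowsNeeded
-- The reserved range is (@LastID - @RowsNeeded + 1) through @LastID.
SELECT @LastID - @RowsNeeded + 1 AS FirstID, @LastID AS LastID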
--Jeff Moden
Change is inevitable... Change for the better is not.
May 20, 2009 at 9:12 am
If I remember correctly, is this not how sequence generators in Oracle work?
ATBCharles Kincaid
May 20, 2009 at 11:06 am
Charles Kincaid (5/20/2009)
If I remember correctly, is this not how sequence generators in Oracle work?
Is that word allowed on this forum? 😀
May 20, 2009 at 11:33 am
Kevin Rathgeber (5/20/2009)
Charles Kincaid (5/20/2009)
If I remember correctly, is this not how sequence generators in Oracle work?
Is that word allowed on this forum? 😀
If we don't mention them, how can we trash them?
ATBCharles Kincaid
May 20, 2009 at 3:16 pm
Charles Kincaid (5/20/2009)
Kevin Rathgeber (5/20/2009)
Charles Kincaid (5/20/2009)
If I remember correctly, is this not how sequence generators in Oracle work?
Is that word allowed on this forum? 😀
If we don't mention them, how can we trash them?
Yes and exactly. 😛
--Jeff Moden
Change is inevitable... Change for the better is not.
July 28, 2009 at 9:05 am
I am trying to replace a cursor in a function that converts rows to columns as well. Based on the data I am not expecting more than 6 IDs for a row; to be on the safe side I want to say 25, a number we should never get to for the ID. I need to combine all the rows into one return value.
I got lost in 27 pages of discussion: is this a recursive procedure? Below is my SP, which is fairly simple: 1 table, 1 select, no joins. Sounds straightforward. I am not sure how a pivot table would work in my case. This being a function, I will embed it in a SELECT statement.
CREATE FUNCTION [dbo].[myfunction]
(
    @id CHAR(50)
)
RETURNS VARCHAR(3000)
AS
BEGIN
    DECLARE @ret VARCHAR(3000),
            @val VARCHAR(255)

    SET @ret = ''

    -- Walk the matching rows one at a time (the RBAR part).
    DECLARE a_curs CURSOR FOR
        SELECT Value
          FROM some_table
         WHERE id = @id

    OPEN a_curs
    FETCH NEXT FROM a_curs INTO @val

    WHILE @@FETCH_STATUS = 0
    BEGIN
        -- Append each trimmed value to the running result.
        SET @ret = @ret + LTRIM(RTRIM(@val))
        FETCH NEXT FROM a_curs INTO @val
    END

    CLOSE a_curs
    DEALLOCATE a_curs

    IF @ret = '' OR @ret IS NULL
        SET @ret = 'NOTFOUND'

    RETURN @ret
END
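For comparison, here is a set-based sketch of the same function without a cursor, using FOR XML PATH('') to concatenate the rows (SQL Server 2005 and up). It assumes the same some_table, Value, and id names as the function above; the function name is made up.
CREATE FUNCTION dbo.myfunction_setbased
(
    @id CHAR(50)
)
RETURNS VARCHAR(3000)
AS
BEGIN
    DECLARE @ret VARCHAR(3000)

    -- Concatenate every matching row in one statement; the TYPE/.value()
    -- step keeps characters like & and < from being XML-escaped.
    SELECT @ret = (SELECT LTRIM(RTRIM(Value))
                     FROM some_table
                    WHERE id = @id
                      FOR XML PATH(''), TYPE).value('.', 'VARCHAR(3000)')

    RETURN ISNULL(NULLIF(@ret, ''), 'NOTFOUND')
END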
February 18, 2010 at 7:55 am
athornicroft (2/18/2010)
Hi, there is a good thread here about eliminating cursors without the use of subqueries.
Replacing a cursor with a while loop isn't really an improvement, it's still RBAR.
---------------------------------------------------------
How best to post your question
How to post performance problems
Tally Table: What it is and how it replaces a loop
stewsterl 80804 (10/16/2009): "I guess when you stop and try to understand the solution provided you not only learn, but save yourself some headaches when you need to make any slight changes."
February 18, 2010 at 7:58 am
jcrawf02 (2/18/2010)
athornicroft (2/18/2010)
Hi, there is a good thread here about eliminating cursors without the use of subqueries.
Replacing a cursor with a while loop isn't really an improvement, it's still RBAR.
Row By Agonizing Row???
What would be a more efficient solution, then?
February 18, 2010 at 8:19 am
The simplest answer might be to replace the loop with a join to a Tally table; see Jeff Moden's article on the subject: http://www.sqlservercentral.com/articles/T-SQL/62867/
There may be a better way to do the particular task being addressed in that link, but I would leave that to the smart folks who hang around here.
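For the curious, the gist of that article is easy to sketch. A Tally table is just a permanent table of sequential integers, built roughly like this (names and row count are illustrative):
-- SELECT INTO is the only place the IDENTITY() function is allowed;
-- the cross join just supplies enough rows to take TOP from.
SELECT TOP 11000
       IDENTITY(INT, 1, 1) AS N
  INTO dbo.Tally
  FROM master.sys.all_columns ac1
 CROSS JOIN master.sys.all_columns ac2

ALTER TABLE dbo.Tally
  ADD CONSTRAINT PK_Tally_N
      PRIMARY KEY CLUSTERED (N)
Once it exists, a join to Tally replaces most counting loops: splitting a delimited string, for instance, becomes a single set-based SELECT against Tally instead of a WHILE loop.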
---------------------------------------------------------
How best to post your question
How to post performance problems
Tally Table: What it is and how it replaces a loop
stewsterl 80804 (10/16/2009): "I guess when you stop and try to understand the solution provided you not only learn, but save yourself some headaches when you need to make any slight changes."
February 18, 2010 at 8:58 am
I've written a way that uses neither cursors nor temporary tables.
I'm using dynamic SQL.
But in this case I would use a table variable (similar to the link you posted) because the number of rows should be very small (fewer than 100?), the code is cleaner and more secure. And of course, the overhead of using a table variable here is nothing compared to the time the backups themselves take...
Josep
DECLARE @Path VARCHAR(256)     -- path for backup files
DECLARE @FileDate VARCHAR(20)  -- used for file name
DECLARE @sql NVARCHAR(MAX)

SET @Path = 'C:\Temp\'
SELECT @FileDate = CONVERT(VARCHAR(20), GETDATE(), 112)

-- Build one BACKUP statement per user database. The trailing semicolon
-- and line break keep the concatenated statements from running together,
-- and QUOTENAME protects database names that need bracketing.
SELECT @sql = ISNULL(@sql, '')
            + 'BACKUP DATABASE ' + QUOTENAME(Name)
            + ' TO DISK = ''' + @Path + Name + '_' + @FileDate + '.BAK'';'
            + CHAR(10)
  FROM master.dbo.sysdatabases
 WHERE Name NOT IN ('master','model','msdb','tempdb')

PRINT @sql
EXEC sp_executesql @sql
February 18, 2010 at 9:20 am
Your dynamic SQL example seems to be a one-off or just a script you run; however, just a note as a developer and from an applications point of view: you want to avoid dynamic SQL at all costs. Say you had something as simple as a stored procedure called StoredProcA that selected data from a table called TableA. With static SQL you only have to grant the user EXECUTE permission on StoredProcA (as you would expect), but when the procedure uses dynamic SQL you also have to grant SELECT permission on TableA for the dynamic statement to work. As a developer you want to restrict users from accessing tables directly and thus only grant permissions on the stored procedures.
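A quick sketch of that permission difference, with made-up object and user names (dbo.TableA, SomeUser):
CREATE TABLE dbo.TableA (Col1 INT)
GO
CREATE PROCEDURE dbo.StoredProcA_Static AS
    -- Static SQL: ownership chaining applies, so EXECUTE alone is enough.
    SELECT Col1 FROM dbo.TableA
GO
CREATE PROCEDURE dbo.StoredProcA_Dynamic AS
    -- Dynamic SQL: the chain is broken, so the caller's own permissions
    -- on the table are checked when the statement runs.
    EXEC sp_executesql N'SELECT Col1 FROM dbo.TableA'
GO
GRANT EXECUTE ON dbo.StoredProcA_Static  TO SomeUser  -- sufficient
GRANT EXECUTE ON dbo.StoredProcA_Dynamic TO SomeUser  -- not sufficient...
GRANT SELECT  ON dbo.TableA              TO SomeUser  -- ...also needed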
February 18, 2010 at 9:26 am
I agree with Kevin's comments regarding avoidance of dynamic SQL. But, even if those issues were not a problem, I would still just leave the original cursor loop alone. I think most DBAs would find it to be more readable and easier to maintain than a dynamic SQL solution and the overhead of a cursor is negligible compared to the execution time of BACKUP DATABASE. To illustrate my point, look at the execution time of the cursor loop with the BACKUP command commented out. I'm betting those milliseconds are insignificant compared to the execution time required to backup multiple databases.
February 18, 2010 at 9:32 am
Kevin and Andy,
I wouldn't like to be misunderstood: I absolutely agree with you. I only use dynamic SQL when I cannot avoid it, as a last option. It has many cons, as you've said.
I wrote it as "another way to do it", but I also recommended using the loop script with a table variable instead of dynamic SQL, as sketched below.
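Something along these lines; the path and the system-database exclusion list are illustrative, and BACKUP DATABASE happens to accept variables directly, so no dynamic SQL is needed:
DECLARE @Databases TABLE (RowId INT IDENTITY(1,1), Name SYSNAME)
DECLARE @RowId INT, @Name SYSNAME
DECLARE @Path VARCHAR(256), @FileDate VARCHAR(20), @FileName VARCHAR(256)

SET @Path = 'C:\Temp\'
SET @FileDate = CONVERT(VARCHAR(20), GETDATE(), 112)

-- Capture the list of user databases once, in a table variable.
INSERT INTO @Databases (Name)
SELECT Name
  FROM master.dbo.sysdatabases
 WHERE Name NOT IN ('master','model','msdb','tempdb')

SET @RowId = 1
WHILE EXISTS (SELECT 1 FROM @Databases WHERE RowId = @RowId)
BEGIN
    SELECT @Name = Name FROM @Databases WHERE RowId = @RowId
    SET @FileName = @Path + @Name + '_' + @FileDate + '.BAK'

    -- Database name and file name can both be variables here.
    BACKUP DATABASE @Name TO DISK = @FileName

    SET @RowId = @RowId + 1
END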
Regards,
Josep