October 29, 2019 at 11:13 am
In my current role, scripts are executed by the Release Team, so it's difficult to hand them something that requires input, e.g. changing the @doCommit value to commit the update. However, getting an execution report back from them is easy. For this reason, I tend to use something similar to:
SET NOCOUNT OFF;

DECLARE @SelectCount INT,
        @UpdateCount INT,
        @AccountCodeToUpdate NVARCHAR(2);

SET @AccountCodeToUpdate = '4c';

-- Begin Script Task: report how many rows the predicate matches
SELECT @SelectCount = COUNT(1)
FROM [dbo].[Account]
WHERE AccountCode = @AccountCodeToUpdate;

SELECT @SelectCount AS [SelectCount];

BEGIN TRAN [UpdateTran];

BEGIN TRY
    UPDATE A
    SET A.Active = 0
    FROM [dbo].[Account] A
    WHERE A.AccountCode = @AccountCodeToUpdate;

    SET @UpdateCount = @@ROWCOUNT;
    SELECT @UpdateCount AS [UpdateCount];

    -- Commit only when the update touched exactly the rows counted above
    IF @SelectCount = @UpdateCount
    BEGIN
        SELECT 'COUNT MATCH - TRAN COMMITTED';
        COMMIT TRAN [UpdateTran];
    END
    ELSE
    BEGIN
        SELECT 'DIFF COUNT - TRAN ROLLBACK';
        ROLLBACK TRAN [UpdateTran];
    END
END TRY
BEGIN CATCH
    SELECT 'ERROR CATCH - TRAN ROLLBACK';
    ROLLBACK TRAN [UpdateTran];
END CATCH
October 29, 2019 at 11:17 am
Minor tweak, but if it's SQL Server you could possibly replace
UPDATE account SET active_yn = 0 WHERE account_code = '4c'
SELECT 'AFTER' AS [AFTER], * FROM dbo.account WHERE account_code = '4c'
with
UPDATE account SET active_yn = 0
OUTPUT 'AFTER' AS [AFTER], INSERTED.*
WHERE account_code = '4c'
(Note that in an UPDATE the OUTPUT clause has to appear before the WHERE clause.)
October 29, 2019 at 11:21 am
In my current role, scripts are executed by the Release Team, so it's difficult to hand them something that requires input, e.g. changing the @doCommit value to commit the update. However, getting an execution report back from them is easy. For this reason, I tend to use something similar to: .....
I like this - I might just add an "OUTPUT deleted.*" into the update statement
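A minimal sketch of what that tweak could look like against the script above (table and column names taken from the earlier example); OUTPUT deleted.* echoes each row as it looked before the change:

UPDATE A
SET A.Active = 0
OUTPUT deleted.*  -- each row's pre-update image goes back in the execution report
FROM [dbo].[Account] A
WHERE A.AccountCode = '4c';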
MVDBA
October 29, 2019 at 11:21 am
Like Freddie, I have developers save the data to be modified to a utility database on the same server. This has a few benefits. The SELECT INTO query can be used to confirm the modification logic, rollbacks are relatively easy and there's a saved history of data modifications.
Also, you can create a naming convention for the new tables in the utility database, like TableName_TicketNumber, for high traffic tables that get more frequent modifications.
I don't have them save into the same database because then you end up with a database with all sorts of cruft.
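A sketch of that workflow, assuming a utility database named [UtilityDB], ticket number 4321, and that dbo.Account has an account_id key (the database, ticket, and key names are illustrative, not from the post):

-- Save the rows to be modified; the WHERE clause doubles as a check
-- of the modification logic.
SELECT *
INTO UtilityDB.dbo.Account_T4321
FROM dbo.Account
WHERE account_code = '4c';

-- Run the change using the same predicate.
UPDATE dbo.Account
SET active_yn = 0
WHERE account_code = '4c';

-- If a rollback is needed later, restore from the saved copy.
UPDATE a
SET a.active_yn = b.active_yn
FROM dbo.Account a
JOIN UtilityDB.dbo.Account_T4321 b
    ON b.account_id = a.account_id;  -- account_id assumed to be the key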
October 29, 2019 at 11:38 am
I've used a similar method many times. All it takes is trashing a database once to learn the value of writing the WHERE clause in a SELECT statement before updating or deleting. If it is a big change, you can also back up the table with a SELECT ... INTO <tblname_backup>. Just don't forget to drop the table once the data is verified.
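For instance, a sketch of that pattern (the backup table name follows the <tblname_backup> convention above and is illustrative):

-- Back up the table before the change; SELECT ... INTO creates the
-- copy in one step.
SELECT *
INTO dbo.Account_backup
FROM dbo.Account;

-- ...make and verify the change...

-- Once the data is verified, drop the backup copy.
DROP TABLE dbo.Account_backup;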
October 29, 2019 at 12:04 pm
Depending on the table size and the amount of data changed/deleted/updated, I use OUTPUT inserted / OUTPUT deleted ... INTO a table in another database. When I have reviewed the changes or deletions, I remove the table that I created with the backup data.
Most of our changes are made on the test system first, then in BA, and last on the production server, but I am aware that this is not always possible. Changes also get at least another pair of eyes on them while they are being made.
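A sketch of that pattern, assuming an archive database named [Archive] and the Account columns used elsewhere in this thread (account_id is an assumption); OUTPUT ... INTO needs the target table to exist first:

-- Create an empty table with the right shape in the other database.
SELECT account_id, account_code, active_yn
INTO Archive.dbo.Account_20191029
FROM dbo.Account
WHERE 1 = 0;

-- Capture the pre-change rows while making the change.
UPDATE dbo.Account
SET active_yn = 0
OUTPUT deleted.account_id, deleted.account_code, deleted.active_yn
INTO Archive.dbo.Account_20191029 (account_id, account_code, active_yn)
WHERE account_code = '4c';

-- Review Archive.dbo.Account_20191029, then drop it when done.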
October 29, 2019 at 12:10 pm
Just my 2 cents...
I've always done updates the way you've posted. As some have said, it's saved my keester more than once.
I also hammer (in a mentor/nice way) the method onto the Devs and they get it. For those that worry about having both Rollback and Commit being commented out, I'll take the chance there for several reasons...
The error was discovered the next morning... after the reporting server (which has copies of all the production tables) was updated. They went to do a restore because the damage was propagated by the nightly runs... yeah... that's when they also found out that backups had been silently failing for about 3 weeks.
It took 40 people more than 10 days to get us back into business by reloading 3 weeks of files in the correct order, doing the processing on them, making a shedload of manual entries from source documents, etc, etc, and it was never 100% correct for about 6 months.
A lot of things changed at that company because of that episode, including them suddenly following the full development and deployment life cycle that I'd been trying to pound into their heads for a couple of years. Most importantly, hard-coded rowcounts for updates along with manual Rollback/Commits were always required, because the failure we discovered (which was documented in a CONNECT item) clearly demonstrated that we couldn't rely on SQL Server for such things.
And, I'll gladly "take the hit" when a Developer sometimes forgets to do a Rollback/Commit on the Dev box. That causes a whole lot less damage (none) than an UPDATE going awry, even in Dev. I'll have some fun with a Developer that does sometimes forget, but it's always done in a friendly manner and I NEVER "come down hard on" a Developer that has done such a thing.
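One way to combine a hard-coded rowcount with the manual Rollback/Commit decision (a sketch using the thread's running example; the expected count of 42 is illustrative):

DECLARE @Expected INT = 42; -- the hard-coded, agreed-on rowcount

BEGIN TRAN;

UPDATE dbo.Account
SET active_yn = 0
WHERE account_code = '4c';

-- Bail out automatically if the count is wrong; otherwise leave the
-- transaction open for the manual decision below.
IF @@ROWCOUNT <> @Expected
BEGIN
    RAISERROR('Rowcount did not match the expected value - rolling back.', 16, 1);
    ROLLBACK TRAN;
END

--rollback tran
--commit tran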
Shifting gears a bit, we're starting to join the "Big boys' club". We have several databases teetering on the 1TB mark, two teetering on the 2TB mark, and several tables in the 100 to 300 GB range. I've made it so that some of these larger tables live in their own databases and are referenced from the main database by synonyms. Some of those tables have been split into multiple "sub-tables" using one form of partitioning or another, and some have been split across separate databases using partitioned views. The whole reason for that is to be able to very quickly do "get back in business" partial restores of one form or another. I've also been able to make some of the legacy partitions (coming up on 120 monthly partitions on one table that we have to "keep the data forever" on) READ ONLY, which also has the benefit of greatly decreasing the time and space it takes to do nightly full backups.
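The synonym piece of that layout is quick to set up; a sketch, assuming the big table has been moved to its own database named [BigTableDB] (both names are illustrative):

-- In the main database, point a synonym at the relocated table so
-- existing code keeps working unchanged.
CREATE SYNONYM dbo.TransactionHistory
FOR BigTableDB.dbo.TransactionHistory;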
--Jeff Moden
Change is inevitable... Change for the better is not.
October 29, 2019 at 12:12 pm
We back up the data we are going to change to a database used for that purpose only. We back up the key and any field we are going to modify, and we name the backup table for the ticket we are working from. We can use that table for rolling the data back. We also have a SQL Agent job that runs weekly and drops tables in that database that were created more than 6 months ago.
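A sketch of what that weekly cleanup job could run, assuming the backup database is named [DataFix] (the name is illustrative):

DECLARE @sql NVARCHAR(MAX) = N'';

-- Build one DROP TABLE statement per table created over 6 months ago.
SELECT @sql += N'DROP TABLE DataFix.' + QUOTENAME(s.name) + N'.'
             + QUOTENAME(t.name) + N';' + CHAR(10)
FROM DataFix.sys.tables t
JOIN DataFix.sys.schemas s
    ON s.schema_id = t.schema_id
WHERE t.create_date < DATEADD(MONTH, -6, GETDATE());

EXEC sys.sp_executesql @sql;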
October 29, 2019 at 12:42 pm
Hello,
It is a good idea to write out the updated/deleted rows in XML format (or another common format) to an audit table so the changes can be tracked at a later date. This works fine for smaller changes.
Typically, when I run a script, I SET IMPLICIT_TRANSACTIONS ON as the first thing in the script. After the DML, the changes can be committed or rolled back depending on the number of records updated. This works well where there is separation of duties and a developer cannot run updates in production.
For a major change, I try to take a backup of the table, or back up a subset based on the predicate used for the update.
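A sketch of the implicit-transaction approach described above, reusing the thread's example update:

SET IMPLICIT_TRANSACTIONS ON;

-- The UPDATE silently opens a transaction that stays open until an
-- explicit COMMIT or ROLLBACK.
UPDATE dbo.Account
SET active_yn = 0
WHERE account_code = '4c';

SELECT @@ROWCOUNT AS RowsUpdated, @@TRANCOUNT AS OpenTransactions;

-- After reviewing the counts, finish with exactly one of these:
-- COMMIT TRAN;
-- ROLLBACK TRAN;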
October 29, 2019 at 12:46 pm
Before making any data change, I back up the table. Always, no matter how "trivial" the change. I keep these table backups for a period of time before dropping them.
I also like Temporal Tables, but I'm not sure how well that solution would scale when needing to handle thousands of tables.
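For reference, a minimal sketch of turning on system versioning for one table (SQL Server 2016+; the constraint and history table names are illustrative):

ALTER TABLE dbo.Account
ADD ValidFrom DATETIME2 GENERATED ALWAYS AS ROW START HIDDEN
        CONSTRAINT DF_Account_ValidFrom DEFAULT SYSUTCDATETIME(),
    ValidTo DATETIME2 GENERATED ALWAYS AS ROW END HIDDEN
        CONSTRAINT DF_Account_ValidTo
            DEFAULT CONVERT(DATETIME2, '9999-12-31 23:59:59.9999999'),
    PERIOD FOR SYSTEM_TIME (ValidFrom, ValidTo);

-- Every subsequent UPDATE/DELETE automatically writes the old row
-- version to the history table.
ALTER TABLE dbo.Account
SET (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.Account_History));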
October 29, 2019 at 12:48 pm
Where I work, I encourage people to never write a block of code which has an OPEN TRAN without a ROLLBACK or COMMIT. So instead of this:
SELECT * FROM dbo.account WHERE account_code = '4c'
SELECT @@ROWCOUNT
BEGIN TRAN
UPDATE account SET active_yn = 0 WHERE account_code = '4c'
SELECT @@ROWCOUNT
--rollback tran
--commit tran
I would have this:
SELECT * FROM dbo.account WHERE account_code = '4c'
SELECT @@ROWCOUNT
BEGIN TRAN
UPDATE account SET active_yn = 0 WHERE account_code = '4c'
SELECT @@ROWCOUNT
rollback tran --change to commit if you really mean it
--commit tran
The reason is that, on more than one occasion, script executors have forgotten to uncomment either option (the rollback or the commit) and left a transaction hanging out there chewing up the version store and the tlog, and in the worst cases blocking. Better to just roll back by default.
October 29, 2019 at 1:19 pm
Something I like to teach our younger DBAs and Developers is that UPDATE and DELETE statements are simply SELECT statements with an added feature. My favorite use of this idea is to change my SELECT statement into the UPDATE/DELETE instead of writing a whole new copy of that query below it. For example:
--Start with this
SELECT *
FROM dbo.Account a WHERE account_code='4c';
--***********************************************
--Run the above statement and if it returns only the rows
--you want to change then add the UPDATE/DELETE statement
--along with the TRAN pieces for added security
--Edit above statement to become this
BEGIN TRAN
UPDATE a
SET a.active_yn = 0
--SELECT *
FROM dbo.Account a WHERE account_code = '4c';
--commit tran
--rollback tran