January 25, 2011 at 4:36 am
A very interesting article, which has stirred up some great debate. I now use BEGIN TRANSACTION whenever running updates, even on test.
What would be great is an easy way of backing up (and restoring) just a table... maybe in Denali?
January 25, 2011 at 4:40 am
Rob Fisk (1/25/2011)
Rowcount can be greater than the number of rows listed in the statistics if the statistics have not yet been updated for recently added rows.
sys.dm_db_partition_stats is not based on index statistics. It's the storage engine's information on the total rows in the table and should be correct in the vast majority of cases.
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
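Gail's point about sys.dm_db_partition_stats being storage-engine metadata rather than index statistics can be checked directly. A minimal sketch, assuming a hypothetical table dbo.Payments (sum only the heap or clustered-index partitions so rows aren't counted once per index):

```sql
-- Storage engine's row count for the table (heap = index_id 0,
-- clustered index = index_id 1; a table has one or the other).
SELECT SUM(ps.row_count) AS storage_engine_rows
FROM sys.dm_db_partition_stats AS ps
WHERE ps.object_id = OBJECT_ID('dbo.Payments')
  AND ps.index_id IN (0, 1);

-- Compare against an actual scan of the table.
SELECT COUNT(*) AS counted_rows
FROM dbo.Payments;
```

In the vast majority of cases the two numbers agree, which is what makes the DMV usable for the trigger technique the article describes.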
January 25, 2011 at 4:43 am
jts_2003 (1/25/2011)
What would be great is an easy way of backing up (and restoring) just a table... maybe in Denali?
No, and I doubt it will ever be a native feature, since it allows the database to be in a transactionally inconsistent state. Lots of the third-party backup tools can do that, and others allow the backups to be 'mounted' as if they were actual databases (Red Gate's HyperBac, for example).
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
January 25, 2011 at 4:49 am
GilaMonster (1/25/2011)
jts_2003 (1/25/2011)
What would be great is an easy way of backing up (and restoring) just a table... maybe in Denali?
No, and I doubt it will ever be a native feature. It allows the database to be in a transactionally inconsistent state. Lots of the 3rd party backup tools can do that, and others allow the backups to be 'mounted' as if they were actual databases (RedGate's Hyperbac for eg)
I keep a DB called Guff I use for importing data I need to compare to data in my other DBs (before I arrived the old way was just to import them to the production DBs so there were tables like stuffJan, GavsTable3 all over the shop).
If I am about to muck about with lots of updates to a table I often just export it to Guff beforehand so that I can run data comparisons and so on afterwards.
_______________________________________________________
Change is inevitable... Except from a vending machine.
January 25, 2011 at 5:20 am
jts_2003 (1/25/2011)
What would be great is an easy way of backing up (and restoring) just a table... maybe in Denali?
I'm pretty sure this feature was available in 6.5; it was removed in 7.0 and later versions.
One of the problems for table restores is in handling rows with foreign key relationships.
January 25, 2011 at 5:21 am
Oracle's query tools (Oracle SQL Developer and SQLPlus) by default require you to explicitly commit regardless of whether you explicitly began a transaction.
So, if you perform an update or delete operation, you hold a lock on the affected table(s) and can review the result count before the change becomes irreversible with an explicit COMMIT. It would be nice if SSMS had a similar option (I guess it would have to be non-default, as it would cause confusion for many people) where each batch effectively submitted a BEGIN TRANSACTION, maybe with some sort of visual indicator of your transaction count in each query window.
Having said that, there'll always be new ways to destroy data accidentally if people are given access.
January 25, 2011 at 5:33 am
HowardW (1/25/2011)
Would be nice if SSMS had a similar option (I guess it would have to be a non-default one as it would cause confusion to so many people) where each batch effectively submitted a BEGIN TRANSACTION - maybe with some sort of visual indicator of your transaction count in each query window.
Tools -> Options. Expand Query Execution -> ANSI. Check SET IMPLICIT_TRANSACTIONS.
Voilà, Oracle-style behaviour (first change starts a transaction, explicit commit/rollback required to end it)
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
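The same behaviour can be switched on per session in T-SQL, without touching the SSMS option. A quick sketch, assuming a hypothetical table dbo.Payments:

```sql
-- Session-level equivalent of the SSMS option Gail describes.
SET IMPLICIT_TRANSACTIONS ON;

UPDATE dbo.Payments            -- hypothetical table
SET Amount = Amount * 1.1;     -- this silently opens a transaction

SELECT @@TRANCOUNT;            -- shows 1: a transaction is now open

ROLLBACK TRANSACTION;          -- nothing is permanent until an explicit
                               -- COMMIT; ROLLBACK undoes the update
```

The catch is the same as in Oracle: if you forget to commit, your open transaction keeps holding locks, so this mode demands discipline.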
January 25, 2011 at 5:46 am
Ah hah - thanks Gail!
January 25, 2011 at 6:39 am
HowardW (1/25/2011)
Oracle's query tools (Oracle SQL Developer and SQLPlus) by default require you to explicitly commit regardless of whether you explicitly began a transaction.
Well, SQL Plus is a little awkward here: if you use EXIT with no other options, SQL Plus commits by default as well. I had to learn this the hard way 😉
See http://download.oracle.com/docs/cd/B10501_01/server.920/a90842/ch13.htm for details.
But what I don't like is the idea of using triggers to guard against programmer errors in PROD. They may be helpful during testing or development, but again, not in PROD.
And as others mentioned, accidental deletes can be avoided or fixed by transactions (while the transaction is still active) or via database backups.
Best Regards,
Chris Büttner
January 25, 2011 at 7:04 am
GilaMonster (1/25/2011)
luciano furlan (1/25/2011)
I fear every day that I will forget the where clause eventually and destroy an entire table. And guess what: SQL doesn't have an Undo button!
If you're doing anything on a production server, the first command in your query window and the first you run should be BEGIN TRANSACTION. That way if you do accidentally leave out (or not select) the where clause, you can roll it back.
I agree. The solution does not take into account cases where the number of legitimate updates equals the number of rows in a table.
I applaud the author for showing how you can create a bunch of triggers automatically. The code can be used when you want to create a basic trigger on a lot of tables, and it saves you a lot of time.
Otherwise, though, the logic is flawed.
January 25, 2011 at 7:10 am
You might not want this to be installed when you are first starting to use the table. If the table has only one row, even with a valid where clause, your trigger will fire.
January 25, 2011 at 7:37 am
hilld (1/25/2011)
You might not want this to be installed when you are first starting to use the table. If the table has only one row, even with a valid where clause, your trigger will fire.
Or a simple upgrade: add AND @@ROWCOUNT > 1 to the check.
_______________________________________________________
Change is inevitable... Except from a vending machine.
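The one-row tweak above could be folded into a trigger along the lines the article describes. A minimal sketch, assuming a hypothetical table dbo.Payments (table name, trigger name, and error text are placeholders, and @@ROWCOUNT is captured as the first action inside the trigger, while it still reflects the triggering statement):

```sql
CREATE TRIGGER trg_Payments_BlockFullUpdate
ON dbo.Payments
AFTER UPDATE
AS
BEGIN
    -- Capture the triggering statement's row count immediately.
    DECLARE @affected int;
    SET @affected = @@ROWCOUNT;

    -- Storage engine's current row count for the table.
    DECLARE @total int;
    SELECT @total = SUM(row_count)
    FROM sys.dm_db_partition_stats
    WHERE object_id = OBJECT_ID('dbo.Payments')
      AND index_id IN (0, 1);

    -- Block only when every row was touched AND the table
    -- has more than one row (hilld's single-row case).
    IF @affected >= @total AND @affected > 1
    BEGIN
        ROLLBACK TRANSACTION;
        RAISERROR('Update touched every row in dbo.Payments; rolled back.', 16, 1);
    END
END;
```

Note that ROLLBACK inside a trigger rolls back the whole transaction and aborts the batch, which is exactly the safety net being discussed.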
January 25, 2011 at 7:47 am
This could be a good idea, but as written it only works on tables with no indexes or with just a clustered index, since only those scenarios have a single row per table in the DMV.
Otherwise the summed row count is multiplied by the number of indexes: even a heap table with one non-clustered index would report twice the actual number of rows (or statistically close to it).
More accurate would be the following. (The clustered index has index_id = 1 and a heap has index_id = 0; they're mutually exclusive and tend to be the most accurate row counts when the indexes don't all report the same.)
if @count >= (select top 1 row_count
              from sys.dm_db_partition_stats
              where object_id = object_id('tablename')
              order by index_id) ...
Then again, what if the query is updating over a certain percentage of rows? 99%? 75%? Even 50%? Is that acceptable? The approach needs some tweaking.
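The percentage idea could look something like this inside the trigger body. A hedged fragment, not a complete trigger; dbo.Payments and the 50% cutoff are placeholders, and @affected is assumed to have been captured from @@ROWCOUNT at the top of the trigger:

```sql
-- Inside an AFTER UPDATE trigger, after capturing the row count:
-- DECLARE @affected int; SET @affected = @@ROWCOUNT;

DECLARE @total int;
SELECT TOP (1) @total = row_count
FROM sys.dm_db_partition_stats
WHERE object_id = OBJECT_ID('dbo.Payments')
ORDER BY index_id;   -- heap (0) or clustered index (1) first

-- Integer arithmetic: block when 50% or more of the rows were touched.
IF @total > 1 AND @affected * 100 >= @total * 50
BEGIN
    ROLLBACK TRANSACTION;
    RAISERROR('Update affected 50%% or more of dbo.Payments; rolled back.', 16, 1);
END
```

Whatever threshold is chosen, it is a policy decision per table; 50% is only an illustration.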
January 25, 2011 at 7:55 am
GilaMonster (1/25/2011)
luciano furlan (1/25/2011)
I fear every day that I will forget the where clause eventually and destroy an entire table. And guess what: SQL doesn't have an Undo button!
If you're doing anything on a production server, the first command in your query window and the first you run should be BEGIN TRANSACTION. That way if you do accidentally leave out (or not select) the where clause, you can roll it back.
That's why we don't allow anyone to run untested scripts on a production server. DBAs must use a special ID, and may not just type the query into the query window.
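The BEGIN TRANSACTION habit quoted above is simple to sketch. A minimal example, assuming a hypothetical table dbo.Payments and an illustrative WHERE clause:

```sql
BEGIN TRANSACTION;

UPDATE dbo.Payments             -- hypothetical table
SET Status = 'Cleared'
WHERE PaymentDate < '20110101';

-- Check the 'rows affected' count before making it permanent:
-- COMMIT TRANSACTION;    -- keep the change if the count looks right
-- ROLLBACK TRANSACTION;  -- undo it if the WHERE was wrong or missing
```

The point is that an accidentally WHERE-less update stays reversible until the explicit COMMIT.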
January 25, 2011 at 8:16 am
Luciano Furlan brings up a point I noticed in reading the article, but from a slightly different direction. Luciano sees the genuine need to occasionally update all rows; his 'WHERE 1=1' is a deliberate faux-WHERE that results in @@ROWCOUNT being equal to ALL ROWS, so his query would be thrown out as if it had no WHERE.

It is a realistic scenario in the real world, based on the example DB, that in some months all customers may pay by check, or all customers will convert to direct deposit, or there is a banking glitch and all customers will pay in cash. In any of these cases, one of the requisite WHERE clauses will end up hitting all payment records, and then the updates will fail because of the trigger.

I have a large database application that gathers vast amounts of weather data, and occasionally one or the other of my clean-up routines will empty out a table because all of the records have managed to expire. It would be nice to add some integrity checks into the project that would actually prevent WHERE-less updates, but only when they 1) have no WHERE, or 2) have a tautological WHERE, such as 'WHERE 1=1'.
Viewing 15 posts - 16 through 30 (of 94 total)