November 23, 2010 at 7:35 am
Also, I agree with you when you say that the best way to protect yourself against malevolent scripts is to understand what the script is doing before you run it. It takes time to figure out a script like this, and personally I would do it only if I had the time and interest, and if it would really help me if/when I run into an issue (OK, issues I can predict). (I can think of at least a hundred situations offhand that would be a better use of my learning time.)
November 23, 2010 at 8:24 am
Fat Wallet
From the perspective of a professional DBA and dev, this is useless. First, it is troublesome. Second, it cannot guarantee pulling data correctly. Third, it does not give users enough information.
Usually we need a tool to check online logs or backup logs to find changes made by users to specific tables. Currently, ApexSQL Log is the right tool for that. It costs around $1,300 and provides detailed information on all insert, delete, and update operations, including schema changes.
I would hope that you climb down off your perch to the level of the many other professional DBAs who do see a use for this. As for Apex's system, I believe you would find that it is not truly a log reader, as it places triggers on tables to trap transactions and saves them to a database, along with agents on the servers. RedGate had a system that worked reasonably well, reading the log files, but they stopped supporting it.
I would not use the information provided to try to undo a million-row transaction, but it can come in handy for troubleshooting a problem or seeing if someone did something. But making a global statement such as yours is rather pompous.
Steve Jimmo
Sr DBA
“If we ever forget that we are One Nation Under God, then we will be a Nation gone under.” - Ronald Reagan
November 23, 2010 at 8:27 am
Seeing if someone did something << Just to clarify what MS CSS and several MVPs have said: the transaction log is not an auditing tool and was never meant to show who did what. It is only meant to log activity for use in recovery. There is nothing wrong at all with learning how it works in as much detail as one desires, but this is what it typically should not be used for, as much as possible.
November 23, 2010 at 10:37 am
There are a whole lot of knee-jerk reactions here about "but it's not supported!", but since it works on SQL 2008 (and someone reported it works all the way through Denali), and your server is going to be around a while, I don't see any advantage in poo-pooing the effort put into the script.
How about this typical example, which gets about 10 posts a week, of "how can I get my data back":
Premise:
The database is in FULL recovery, but no one has set up a backup strategy. No backups are available anywhere.
Someone updates the INVOICE table without a WHERE clause...say setting INVOICEAMT to $100...or someone runs a DELETE against the table without a WHERE clause.
The upcoming articles on how to examine UPDATEs and DELETEs (where this article covered INSERTs) would be the difference between being able to handle the situation and having to look for another position somewhere as the company closes down due to data corruption, right?
Now, if you can point out the recommended way to handle that specific situation without using undocumented features, and without changing the parameters of the issue (don't say "well, there shoulda been backups"), I'm listening.
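For what it's worth, here is a minimal sketch of the kind of query the article's approach builds on (fn_dblog is undocumented and its output can change between versions; dbo.INVOICE is just the placeholder table from the premise above):
-- Find log records that touched the (hypothetical) dbo.INVOICE table.
SELECT [Current LSN],
       [Transaction ID],
       Operation,
       AllocUnitName
FROM fn_dblog(NULL, NULL)            -- NULL, NULL scans the whole active log
WHERE AllocUnitName LIKE 'dbo.INVOICE%'
  AND Operation IN ('LOP_MODIFY_ROW', 'LOP_DELETE_ROWS');
-- The matching LOP_BEGIN_XACT row for the same [Transaction ID] carries the begin time.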
Lowell
November 23, 2010 at 10:46 am
If there is no backup strategy in place and the database is in FULL recovery mode, the transaction log will likely be unreadable anyhow, as it will be massive in size from never having been backed up.
If an administrator is capable enough to understand this script, there is no excuse not to have appropriate backups in place.
This script also assumes the database is in FULL recovery and not BULK_LOGGED, and I believe it would need to have at least one transaction log backup taken in FULL recovery in order to actually *be* in FULL recovery.
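For reference, a quick way to check both points is the documented sys.databases catalog view (the database name below is a placeholder):
SELECT name,
       recovery_model_desc,
       log_reuse_wait_desc   -- shows why the log cannot currently be truncated
FROM sys.databases
WHERE name = N'YourDatabase';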
November 23, 2010 at 11:02 am
Phil Brammer (11/23/2010)
If there is no backup strategy in place and the database is in FULL recovery mode, the transaction log will likely be unreadable anyhow, as it will be massive in size from never having been backed up. If an administrator is capable enough to understand this script, there is no excuse not to have appropriate backups in place.
This script also assumes the database is in FULL recovery and not BULK_LOGGED, and I believe it would need to have at least one transaction log backup taken in FULL recovery in order to actually *be* in FULL recovery.
Naw, if you run the script from the article, you create a database on the fly and read the info from the log....no backup was created, but you're reading the log anyway...because the model database is set to FULL by default.
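A rough way to see that for yourself (throwaway database name, on a default install where model is in FULL recovery):
CREATE DATABASE LogDemo;
GO
USE LogDemo;
GO
SELECT recovery_model_desc FROM sys.databases WHERE name = N'LogDemo';  -- FULL, inherited from model
SELECT COUNT(*) AS LogRecords FROM fn_dblog(NULL, NULL);                -- rows come back even though no backup has ever been taken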
The same situation I outlined above could occur if the existing backup strategy stopped working due to a password change, the SQL Agent service not starting, or other situations....in that case, while the script may be a last resort, it shouldn't be excluded from someone's toolbox of scripts just because it uses "unsupported features".
In your outline, if both a full backup and at least one tran log backup were in place, you could take a tail-log backup and hopefully do a point-in-time restore...which is undoubtedly the better way to go.
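Something like this, very roughly (database name, file paths, and the STOPAT time are all placeholders, and any intermediate log backups would need to be restored in sequence as well):
-- 1. Back up the tail of the log before touching anything.
BACKUP LOG [Sales] TO DISK = N'D:\Backups\Sales_tail.trn'
WITH NO_TRUNCATE, NORECOVERY;
-- 2. Restore the last full backup without recovering.
RESTORE DATABASE [Sales] FROM DISK = N'D:\Backups\Sales_full.bak'
WITH NORECOVERY, REPLACE;
-- 3. Roll the log forward to just before the bad statement.
RESTORE LOG [Sales] FROM DISK = N'D:\Backups\Sales_tail.trn'
WITH STOPAT = '2010-11-23T10:30:00', RECOVERY;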
Lowell
November 23, 2010 at 11:26 am
Well, keep it if you think you need it, that is all. Backup failures should never be taken trivially, at least not more than one; a responsible alert system has to be in place, and as a DBA the first thing anyone should do is check whether the previous backup ran fine, whether it depends on SQL Agent or any other means (a quick sketch of such a check is below). Granted, some people have too many servers to handle, cannot afford alerting (or that failed too), backup tapes get lost, and the offsite location catches fire...on and on. To me there are just two simple questions, neither of which has anything to do with the unsupported nature of the command.
One: if I am in a place that has a high chance of such stuff happening, there is a deeper problem than just learning how to use this script. Two: if that is not the case, is the time and effort I would spend understanding something like this worth the probability of the situation happening and my using it? That is every DBA's call. I don't think it is worth it, and a lot of DBAs I know would probably feel the same.
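As an aside, that kind of check can be as simple as a query against the msdb backup history (a sketch, assuming the standard backupset table is being populated):
-- Most recent full ('D') and log ('L') backup per database, from msdb history.
SELECT d.name,
       MAX(CASE WHEN b.type = 'D' THEN b.backup_finish_date END) AS LastFullBackup,
       MAX(CASE WHEN b.type = 'L' THEN b.backup_finish_date END) AS LastLogBackup
FROM sys.databases AS d
LEFT JOIN msdb.dbo.backupset AS b ON b.database_name = d.name
GROUP BY d.name
ORDER BY d.name;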
December 3, 2010 at 8:56 am
Nicely done article.
I've spent some time on transaction log reading and found author Kevvie Fowler's book "SQL Server Forensic Analysis" to be great. He explains it in great detail, but it's not a simple task.
Looking into the transaction log files is not something a DBA would do every day, but it's good to have an idea of how to do it. It is mainly done when performing forensics on a SQL Server.
If you are looking into the transaction logs for diagnostic reasons, I believe a proper DB server analysis tool like Quest's Spotlight for SQL Server or Redgate's SQL Monitor would be better.
Thanks,
Rudy
December 17, 2010 at 2:13 pm
fat wallet (11/22/2010)
From the perspective of a professional DBA and dev, this is useless. First, it is troublesome. Second, it cannot guarantee pulling data correctly. Third, it does not give users enough information.
Usually we need a tool to check online logs or backup logs to find changes made by users to specific tables. Currently, ApexSQL Log is the right tool for that. It costs around $1,300 and provides detailed information on all insert, delete, and update operations, including schema changes. For example, for update operations you will see each updated record's old and new data, together with the user, time, and so on. It is good enough for monitoring OLTP changes and is very convenient to use.
fat wallet,
"Second, it cannot guarantee pully data correctly" this statement has me intrigued. If you know something about this then please enlighten the rest of us, we'd love to hear what you know on the subject.
Also how long have you worked for ApexSQL? Are you in their inside or outside sales department?
Just questions we'd all like the answers too.
December 18, 2010 at 11:42 pm
Lowell (11/23/2010)
There are a whole lot of knee-jerk reactions here about "but it's not supported!", but since it works on SQL 2008 (and someone reported it works all the way through Denali), and your server is going to be around a while, I don't see any advantage in poo-pooing the effort put into the script. How about this typical example, which gets about 10 posts a week, of "how can I get my data back":
Premise:
The database is in FULL recovery, but no one has set up a backup strategy. No backups are available anywhere.
Someone updates the INVOICE table without a WHERE clause...say setting INVOICEAMT to $100...or someone runs a DELETE against the table without a WHERE clause.
The upcoming articles on how to examine UPDATEs and DELETEs (where this article covered INSERTs) would be the difference between being able to handle the situation and having to look for another position somewhere as the company closes down due to data corruption, right?
Now, if you can point out the recommended way to handle that specific situation without using undocumented features, and without changing the parameters of the issue (don't say "well, there shoulda been backups"), I'm listening.
Any DBA caught in this specific situation should put his head between his legs and pucker his lips to kiss his rear end goodbye. This is one of those avoidable situations they created unemployment insurance for.
May 3, 2011 at 4:15 pm
Hi Kraig,
I find your article very intriguing and useful.
We have a replication process that, at some point during the week, either does not insert a record or inserts it only partially, and we would like to see what's going on.
Do you have a piece that strictly deals with reading the errors rather than what was inserted?
Or perhaps you can point me in the right direction by modifying your existing code (that is, assuming I have understood it completely ;-)), or if you happen to have that piece already..
Either way, I appreciate your assistance, as I am stuck between a rock and a hard place.
Thanks in advance, and please keep posting the good stuff.. loving it..
Cheers,
John Esraelo
November 9, 2012 at 5:14 am
Nice topic, one I had never seen before...very rich in terms of subject matter.
November 9, 2012 at 11:36 am
Very nice article! Very informative and well written.
Would I use that trick in production? Probably not, but that doesn't take away from a nice piece of work.
-- Gianluca Sartori
December 24, 2012 at 12:15 pm
Nice article.
July 17, 2019 at 6:49 am
Hi Kraig, nice article. While testing, I found it working perfectly on one of my databases. When I tried it on another database, which was restored from a backup file, it fails. I used the following command to restore the DB:
RESTORE DATABASE [Source3] FROM DISK = N'D:\SQLServerData\Backup\Source3.bak'
WITH FILE = 1,
MOVE N'Source3' TO N'D:\SQLServerData\Data\Source3Data.mdf',
MOVE N'Source3_log' TO N'D:\SQLServerData\Logs\Source2log.ldf',
NOUNLOAD, STATS = 5
After analyzing, I observed that for an insert into a table with a single primary key column, the [RowLog Contents 0] value is different:
0x21566408B2244242924389A5AA7AAA254342924389A9AAAAAA243342924389A5AAAAAA344343924389A5AAAAAA4D56827787546578741080AA8D00BF7F1999C126C094E4C21A40858249F00ADAACFA5C6D451640F041E83F0B35D76F6661E83F0B80AA8D00BF7F1968656C6F380159010000C12725809E8258C42580FDADC6C00E6DE7FBA9F1823E409643E83F0B35D76F6661E83F0B80AA8D00BF7F19534F4D4543484152C14BC0C483E8C47DB28493E001EEA21503172209403442E83F0B35D76F6661E83F0B80AA8D00BF7F1943484152C035C081B8812CC2578080FA9E84800F9A99999999990D403442E83F0B35D76F6661E83F0B80AA8D00BF7F1944444453011B0010802B00460056809D00AD00BD80D800F300038115015C016C017C819201AD01BD8123026A027A028A829902A602B682C602F602060306080000B9FA790000008901000001000A0054686520666F6C6C6F77696E67206578616D706C652073686F777354686520666F6C6C6F77696E67206578616D706C652073686F77730000BAFA790000008901000001000C00DFFF01B004F00A500072006F007000650072007400690065007300EF000001F801F0044E0061006D006500EF000002F802110959006F007500720020006E0061006D006500F7F731434798B2F1DA49AF54DF5441229A210000BBFA790000008901000001000E0054686520666F6C6C6F77696E67206578616D706C652073686F777354686520666F6C6C6F77696E67206578616D706C652073686F77730000BCFA790000008901000001001000A701401F08D0003454657374617364617376DFFF01B004F00A500072006F007000650072007400690065007300EF000001F801F0044E0061006D006500EF000002F802110959006F007500720020006E0061006D006500F7F7E6777C50F416DE47A51D49D2D925D94F0000BDFA7900000089010000010012004368617261637465722065787072657373696F6E73204368617261637465722065787072657373696F6E732074686174100000BEFA790000008901000001001400E701401F08D000343C00500072006F0070006500720074006900650073003E003C004E0061006D0065003E0059006F007500720020006E0061006D0065003C002F004E0061006D0065003E003C002F00500072006F0070006500720074006900650073003E00DFFF01B004F00A500072006F007000650072007400690065007300EF000001F801F0044E0061006D006500EF000002F802110959006F007500720020006E0061006D006500F7F72B91E81531DC894F91ACDEF9331D9D960000BFFA7900000089010000010015006173646173646173646173647361644D4943524F53455256494345530000C0FA790000008901000001001600A701401F08D000345241424249544D51DFFF01B004F006420050003000300030003100EF000001F801110A54006500780074002000560061006C0075006500F7E3D5A8134D718D45862A67BF2218E2B1
As you can see, it doesn't start with 0x30, so @noOfCols, @nullBitMapLength, @nullByteMap, @nullBitMap, @noVarCols, @columnOffsetArray, and @varColPointer aren't found, and hence it fails.
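For what it's worth, a quick way to spot those rows before handing them to the parsing logic is to look at the leading status byte of each row image (a sketch; the table name is a placeholder):
-- Leading status byte of each insert's row image; 0x30 is the format the article's script expects.
SELECT [Current LSN],
       [Transaction ID],
       SUBSTRING([RowLog Contents 0], 1, 1) AS FirstStatusByte
FROM fn_dblog(NULL, NULL)
WHERE Operation = 'LOP_INSERT_ROWS'
  AND AllocUnitName LIKE 'dbo.YourTable%';   -- hypothetical table name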
Can you please suggest any workarounds?