April 28, 2016 at 8:11 pm
Apologies. I'm not going to get to this tonight or tomorrow. I'm in the middle of a server migration and I need to sanitize the code before I post it here. We'll see what the weekend brings.
For now, I can only comment to beware any trigger, generic or otherwise, that needs to make a copy of the two logical tables in the trigger.
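(To be clear, "the two logical tables" here are INSERTED and DELETED. A rough sketch of the pattern to be wary of, with made-up table names, looks something like this:)
[code="sql"]
CREATE TRIGGER trg_SomeTable_Audit
ON dbo.SomeTable          -- hypothetical table name
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;

    -- The pattern to beware of: materializing full copies of both logical
    -- tables before doing any comparison work. On wide tables, this copy
    -- alone can dwarf the cost of the original UPDATE.
    SELECT * INTO #Inserted FROM INSERTED;
    SELECT * INTO #Deleted  FROM DELETED;

    -- ...dynamic, column-by-column comparison of #Inserted vs. #Deleted
    -- and the writes to an audit table would follow here...
END;
[/code]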
--Jeff Moden
Change is inevitable... Change for the better is not.
April 29, 2016 at 12:38 pm
Jeff Moden (4/28/2016)
Apologies. I'm not going to get to this tonight or tomorrow. I'm in the middle of a server migration and I need to sanitize the code before I post it here. We'll see what the weekend brings.
For now, I can only comment to beware any trigger, generic or otherwise, that needs to make a copy of the two logical tables in the trigger.
While the cat's away, the mice will play. 😉
"Do not seek to follow in the footsteps of the wise. Instead, seek what they sought." - Matsuo Basho
April 30, 2016 at 10:57 am
LOL...
<insert TERRIBLE advice here> ... and don't forget to ascribe it to someone who actually knows what he's talking about.
Sorry, couldn't resist!
The question reminded me of some "documentation" of a database (disaster) that someone created at a place I used to work. You know you're going to have an interesting day when someone asserts something like this:
Normalization: Do we really need it?
No, it just makes the database harder to query
Glad I wasn't drinking coffee at the time... or I would have done my sinuses some serious damage.
May 2, 2016 at 10:16 am
pietlinden (4/30/2016)
LOL...
<insert TERRIBLE advice here> ... and don't forget to ascribe it to someone who actually knows what he's talking about.
Sorry, couldn't resist!
The question reminded me of some "documentation" of a database (disaster) that someone created at a place I used to work. You know you're going to have an interesting day when someone asserts something like this:
Normalization: Do we really need it?
No, it just makes the database harder to query
Glad I wasn't drinking coffee at the time... or I would have done my sinuses some serious damage.
I'm sure the person who "architected" that database has moved on to share their wisdom with another employer.
"Do not seek to follow in the footsteps of the wise. Instead, seek what they sought." - Matsuo Basho
May 2, 2016 at 10:36 am
Yeah, I would hope so. The "documentation" was absolutely hilarious - I photocopied that page and hung it on my wall. I think I wrote "JOB SECURITY" across the top. But fixing something like that was not fun. Wonder if they understood that if the database were designed correctly, I could have done all the work in one day instead of seven months?
May 4, 2016 at 8:25 am
It's been over a week, I believe, since I created this thread. I want to thank you all for your feedback. I still don't have the requirements that some of you have asked about.
Yesterday was my big chance to champion a database solution to this problem. I do favor using triggers to capture actions on the database and write the auditing information to some table(s). I even brought up what I think Jeff said about developers at a business he was/is at who wrote auditing code in the app, with the result that a single update took almost 4 minutes. I'm afraid I've lost the argument. We're going to use C#, probably in the front-end application, to do the auditing.
Well, this thread generated some great ideas. I hope others will find it useful.
Kindest Regards, Rod
Connect with me on LinkedIn.
May 4, 2016 at 9:17 am
Rod at work (5/4/2016)
We're going to use C#, probably in the front-end application, to do the auditing.
Start studying the latest advances in hardware.
They usually do not admit wrong decisions easily, so the blame will be placed on "old" hardware, and you're gonna go through a lot of upgrades.
Be prepared.
_____________
Code for TallyGenerator
May 4, 2016 at 9:54 am
Wow. 4 minutes for an update with an audit? I notice zero delay with Pop Rivett's audit trigger. The big problem with auditing through the front-end app is that if people access the database through some other avenue, those changes will not be captured. You really need to audit at the SQL Server level for the auditing to be consistent. Of course, if they don't need the audit to be consistent or reliable, and they don't mind the app increasing network traffic by adding the audit to the stream, then fine.
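To make the point concrete, here's a bare-bones sketch of server-side auditing (this is not Pop Rivett's code; the Customer table, columns, and audit table are all invented). Because it lives in the database, it catches changes no matter which client made them:
[code="sql"]
-- Hypothetical audit table; names are illustrative only.
CREATE TABLE dbo.CustomerAudit
(
    AuditID    INT IDENTITY(1,1) PRIMARY KEY,
    CustomerID INT            NOT NULL,
    OldName    NVARCHAR(100)  NULL,
    NewName    NVARCHAR(100)  NULL,
    ChangedBy  SYSNAME        NOT NULL DEFAULT SUSER_SNAME(),
    ChangedAt  DATETIME2(3)   NOT NULL DEFAULT SYSDATETIME()
);
GO
CREATE TRIGGER trg_Customer_Audit
ON dbo.Customer
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;

    -- Fires for every UPDATE against dbo.Customer, whether it came from
    -- the front-end app, SSMS, or an ad hoc script.
    INSERT dbo.CustomerAudit (CustomerID, OldName, NewName)
    SELECT d.CustomerID, d.Name, i.Name
    FROM   DELETED  d
    JOIN   INSERTED i ON i.CustomerID = d.CustomerID
    WHERE  EXISTS (SELECT d.Name EXCEPT SELECT i.Name);  -- NULL-safe "did it change?" test
END;
[/code]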
-----
[font="Arial"]Knowledge is of two kinds. We know a subject ourselves or we know where we can find information upon it. --Samuel Johnson[/font]
May 4, 2016 at 5:07 pm
Wayne West (5/4/2016)
Wow. 4 minutes for an update with an audit? I notice zero delay with Pop Rivett's audit trigger.
That's the problem with a lot of folks' audit trigger code. No one tests with really wide rows. The tables I had to work with have between 100 and 137 columns, with many too-wide character-based columns that shred the normally accepted max row width of 8,060 bytes. Any generic trigger that has to copy the INSERTED and DELETED tables to be able to compare them dynamically (or whatever) will not do so well with such ridiculously wide-row tables. We had the additional requirement of having to carry certain data forward because the stupid ORM pre-filtered some of the data as "duplicate data" when it actually wasn't.
To be fair, I've not tested Pop's generic trigger code (although I've made similar in the past) in such a situation so I can't say that it'll run slow in such situations. I can only speculate based on past experience when such code makes copies of the INSERTED and DELETED tables for comparisons. It would be my absolute pleasure to be proven wrong and I will, someday, get around to doing such a test because generic triggers would be the bee's knees if they could be made to perform well.
--Jeff Moden
Change is inevitable... Change for the better is not.
May 5, 2016 at 2:24 am
Many of the generic triggers that I have seen, including Pop Rivett's, create an SQL statement for each column in the table so with a wide table of say 100 columns it would do 100 separate inserts into the audit table. This cannot be great for performance as an insert of 1 row into the table would fire 100 inserts into the audit table.
Is it better to pivot the data and then perform a single insert into the audit table? An insert of 1 row into the table would fire 1 insert into the audit table of 100 rows.
Would pivoting the data for auditing work well for batch processing of 1 million rows?
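To make the question concrete, here's a rough, untested sketch of the single-insert idea (all table and column names are invented): CROSS APPLY (VALUES ...) unpivots the columns so one set-based statement writes one audit row per changed column, assuming an EAV-style audit table dbo.AuditEAV (TableName, KeyValue, ColumnName, OldValue, NewValue, ChangedAt).
[code="sql"]
CREATE TRIGGER trg_WideTable_Audit
ON dbo.WideTable           -- hypothetical wide table with an ID key
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;

    -- One statement produces one audit row per changed column.
    -- A 1-row UPDATE with 100 changed columns = 1 INSERT of up to 100 rows;
    -- a 1,000,000-row UPDATE still runs as a single set-based statement.
    INSERT dbo.AuditEAV (TableName, KeyValue, ColumnName, OldValue, NewValue, ChangedAt)
    SELECT N'dbo.WideTable', d.ID, v.ColumnName, v.OldValue, v.NewValue, SYSDATETIME()
    FROM   DELETED  d
    JOIN   INSERTED i ON i.ID = d.ID
    CROSS APPLY (VALUES
        (N'Col1', CONVERT(NVARCHAR(4000), d.Col1), CONVERT(NVARCHAR(4000), i.Col1)),
        (N'Col2', CONVERT(NVARCHAR(4000), d.Col2), CONVERT(NVARCHAR(4000), i.Col2)),
        (N'Col3', CONVERT(NVARCHAR(4000), d.Col3), CONVERT(NVARCHAR(4000), i.Col3))
        -- ...repeat for the remaining columns...
    ) v (ColumnName, OldValue, NewValue)
    WHERE  EXISTS (SELECT v.OldValue EXCEPT SELECT v.NewValue);  -- only columns that actually changed
END;
[/code]
Whether that holds up for a batch update of 1 million rows is exactly what would need testing, but at least it stays a single set-based insert rather than 100 separate statements.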
Jez
May 5, 2016 at 7:40 am
Jez-448386 (5/5/2016)
Many of the generic triggers that I have seen, including Pop Rivett's, create an SQL statement for each column in the table so with a wide table of say 100 columns it would do 100 separate inserts into the audit table. This cannot be great for performance as an insert of 1 row into the table would fire 100 inserts into the audit table.
Is it better to pivot the data and then perform a single insert into the audit table? An insert of 1 row into the table would fire 1 insert into the audit table of 100 rows.
Would pivoting the data for auditing work well for batch processing of 1 million rows?
Jez
Most of the ones I've seen don't do what you describe because they're also smart enough to NOT audit INSERTs, only changes. Even at that, they still don't do so well because of the materialization of the INSERTED and DELETED tables that generic triggers are normally forced to do.
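(For the record, the usual way those triggers skip pure INSERTs is a simple early-exit guard at the top of a generic INSERT/UPDATE/DELETE trigger, something like this sketch:)
[code="sql"]
-- If DELETED is empty, the firing statement was a pure INSERT: nothing to
-- compare against, so skip the audit work entirely.
IF NOT EXISTS (SELECT 1 FROM DELETED)
    RETURN;
[/code]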
--Jeff Moden
Change is inevitable... Change for the better is not.
May 5, 2016 at 8:35 am
Good points. My biggest table has 66 columns (all the other active tables have a very small column count), but even when I was inserting 600 records through a paste operation, generating 40,000 audit records, I didn't notice a perceptible delay.
I've gone back and forth on the concept of auditing inserts. Obviously, if you have updates audited, then the first change to a record will show the changed field's original value, so it's easy to argue that auditing inserts isn't needed. And I have a DateAdded field on my core records, so I know when a row was added. What I don't know, at the core record level, is who inserted the row. (Celko would be all over my butt for using rows and records interchangeably like this!) Overall, I'm in favor of auditing the insert, but I'm sure I'd change my tune in a system where you're inserting a bajillion new rows a day.
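One cheap alternative, if the only missing piece is who inserted the row, is a default-constrained column rather than full insert auditing. A sketch (the table and column names are hypothetical):
[code="sql"]
-- Capture who inserted each row without auditing whole inserted rows.
ALTER TABLE dbo.CoreTable
    ADD AddedBy SYSNAME NOT NULL
        CONSTRAINT DF_CoreTable_AddedBy DEFAULT SUSER_SNAME();
[/code]
Of course, if the application connects with a single shared login, SUSER_SNAME() only tells you the app did the insert, so treat this strictly as a sketch of the idea.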
-----
[font="Arial"]Knowledge is of two kinds. We know a subject ourselves or we know where we can find information upon it. --Samuel Johnson[/font]
May 5, 2016 at 1:21 pm
Wayne West (5/5/2016)
Good points. My biggest table has 66 columns (all the other active tables have a very small column count), but even when I was inserting 600 records through a paste operation, generating 40,000 audit records, I didn't notice a perceptible delay.
I've gone back and forth on the concept of auditing inserts. Obviously if you have updates audited then the first change to a record will show the changed field's original value, so it's easy to argue that it's not needed. And I have a DateAdded field on my core records, so I know that. I do not know, at the core record level, who inserted the row. (Celko would be all over my butt for using rows and records like this!) Overall, I'm in favor of auditing the insert, but I'm sure I'd change my tune in a system where you're inserting a bajillion new rows a day.
Remember that auditing inserts for WHOLE ROW auditing instantly doubles the disk storage, backup, and restore requirements across all fronts unless the audit table is stored in a separate database. If it's BY COLUMN auditing, then you instantly increase all those same requirements by a factor of 6 or 8 because of the EAV nature of BY COLUMN audit tables. Either way, such tables frequently become larger than all the rest of the tables combined.
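(To see where a 6x-8x figure comes from, consider what a BY COLUMN audit row has to carry for every single changed column. A hypothetical EAV layout:)
[code="sql"]
-- Hypothetical BY COLUMN (EAV) audit table. Every changed column becomes a
-- full row of metadata: table name, key, column name, both values, who, and when.
-- Auditing a 10-column insert as "changes" would write 10 of these rows, each
-- carrying far more bytes than the original column value it records.
CREATE TABLE dbo.AuditEAV
(
    AuditID    BIGINT IDENTITY(1,1) PRIMARY KEY,
    TableName  SYSNAME         NOT NULL,
    KeyValue   NVARCHAR(128)   NOT NULL,
    ColumnName SYSNAME         NOT NULL,
    OldValue   NVARCHAR(4000)  NULL,
    NewValue   NVARCHAR(4000)  NULL,
    ChangedBy  SYSNAME         NOT NULL DEFAULT SUSER_SNAME(),
    ChangedAt  DATETIME2(3)    NOT NULL DEFAULT SYSDATETIME()
);
[/code]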
--Jeff Moden
Change is inevitable... Change for the better is not.