November 13, 2008 at 12:29 pm
Hi All,
I've got a question that I can't seem to find an answer to anywhere - it leads me to believe that what I want to do can't be done in SQL Server.
While the UPDATE() and COLUMNS_UPDATED() functions can be used within a trigger to check which columns were updated by a query, there seems to be no way to ascertain which columns were populated in an INSERT statement when only a subset of columns was included in the query. This is because a call to either function gives the same result - it tells you that all of the table's columns were updated, due to the implicit NULLs added by the engine.
Does anyone know a way of doing this?
Thanks in advance,
Grant
November 13, 2008 at 1:28 pm
You could always check the columns in the inserted table within the trigger (a special table created by SQL Server that holds copies of the affected rows during insert - and update - statements) and see which values are not NULL...
However, I'm assuming that if your columns have default values and you don't specify them in the insert query, you're stuck with the same situation - unless you add checks against the default values of the columns...
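For illustration only, here's a minimal sketch of that idea - the table and column names (dbo.SourcePrice, PriceID, ExtendedAttr) are made up, not anything from your system:
CREATE TRIGGER trg_CheckInsertedNulls
ON dbo.SourcePrice
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;
    -- Rows where the extended attribute arrived as NULL. Note this cannot
    -- distinguish an explicit NULL from the implicit NULL the engine fills in
    -- when the column is simply omitted from the INSERT column list.
    SELECT i.PriceID
    FROM inserted AS i
    WHERE i.ExtendedAttr IS NULL;
END
GO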
What exactly are you planning to do by ascertaining which columns were populated by an insert statement? Knowing that might help in providing a solution.
November 14, 2008 at 7:57 am
Wow... I just tried it...
CREATE TABLE dbo.TriggerTest (ColA INT, ColB INT)
GO
CREATE TRIGGER MyTrigger
ON dbo.TriggerTest
AFTER INSERT, UPDATE
AS
SELECT COLUMNS_UPDATED()
GO
INSERT INTO dbo.TriggerTest (ColA) VALUES (1)
INSERT INTO dbo.TriggerTest (ColB) VALUES (2)
GO
I don't believe I've ever written an INSERT trigger before... only for UPDATE and DELETE... it returns the same value for COLUMNS_UPDATED in both 2k and 2k5. I know it doesn't help, but I'm pretty well disappointed in that. Guess it's good for folks that write audit triggers but, wow.
Off the top of my head, I don't see any way to check other than what Winash suggested.
--Jeff Moden
Change is inevitable... Change for the better is not.
November 14, 2008 at 8:20 am
I also checked the UPDATE() function (you never know)... the one that checks per column... same disappointment...
DROP TABLE dbo.TriggerTest
GO
CREATE TABLE dbo.TriggerTest (ColA INT, ColB INT)
GO
CREATE TRIGGER MyTrigger
ON dbo.TriggerTest
AFTER INSERT, UPDATE
AS
SELECT CASE WHEN UPDATE(ColA) THEN 'Updated' END AS ColA,
CASE WHEN UPDATE(ColB) THEN 'Updated' END AS ColB
GO
INSERT INTO dbo.TriggerTest (ColA) VALUES (1)
INSERT INTO dbo.TriggerTest (ColB) VALUES (2)
GO
Heh... the reason I've never written a "columnar" INSERT trigger is that I never audit INSERTs... they are always available in the original table, and I pretty much use constraints or other methods to validate data. I have written "replication" triggers, but those work on whole rows anyway, and all I ever looked at on those was the PK column(s), which, of course, indicate an update. Right or wrong, that's the reason.
Again, I know it doesn't help you, but thanks for posting this problem... I had no idea.
--Jeff Moden
Change is inevitable... Change for the better is not.
November 14, 2008 at 8:43 am
On INSERT, of course, COLUMNS_UPDATED returns 3... ALL columns (column 1 and column 2) are affected on INSERT, even if a NULL gets inserted...
but if you UPDATE, using Jeff's same example,
UPDATE TriggerTest SET ColA = 4
UPDATE TriggerTest SET ColB = 5
you'll get 1 or 2 respectively for the update.
It's on UPDATE, via the UPDATE() and COLUMNS_UPDATED() functions, that you can determine which columns have changed, and can audit accordingly.
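To make that concrete, here's a quick sketch against Jeff's test table - the trigger name is made up, and the bit values assume ColA and ColB are column ordinals 1 and 2:
-- Sketch only: decode the COLUMNS_UPDATED() bitmask per column.
-- Bit 1 corresponds to ColA (ordinal 1), bit 2 to ColB (ordinal 2).
CREATE TRIGGER MyBitmaskTrigger
ON dbo.TriggerTest
AFTER INSERT, UPDATE
AS
SELECT CASE WHEN (COLUMNS_UPDATED() & 1) > 0 THEN 'Changed' END AS ColA,
       CASE WHEN (COLUMNS_UPDATED() & 2) > 0 THEN 'Changed' END AS ColB
GO
Run the two UPDATE statements above and each one reports only the column it actually changed.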
Lowell
November 14, 2008 at 11:42 am
Thanks for all the replies folks. I'll explain what I'm trying to do - I know it's an unusual request, but there's a good reason for it.
I'm writing a pricing engine - it takes source prices (a price can comprise many components - Close, Bid, Ask etc) for securities (stocks, bonds etc) from different brokers, then runs them through a series of calculations to produce a final price. For example, we may get prices for a single bond from 10 different brokers, each quoting a different price. Each price is inserted into a table, then a calculation is run over the prices to produce an "Official Price". Also received with each price are a number of "extended attributes" which don't go through the calculation engine; they simply need to be written to the Official Price. Each time a new source price comes in, the extended attributes should be written to the Official Price, overwriting the existing values.
However, here's the rub - not every source price will contain every extended attribute. And that's the problem - I can't tell within my trigger whether the NULLs in the extended attribute columns are explicit NULLs, or implicit NULLs written by the database engine because the INSERT statement didn't provide a value for every field. Explicit NULLs should be propagated to the Official Price; implicit ones should not, as they may overwrite actual data.
Funnily enough the behaviour of the same function in Sybase does what I want....that is, only returns TRUE where the column was inserted with an explicit value. Oh well.
What I'll end up doing is sending through a list of the columns that are being updated in each load stored proc - that way the pricing engine knows exactly which columns have been updated.
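Something along these lines, perhaps - all of the proc, table and parameter names below are invented purely to illustrate the idea, not the real schema:
CREATE PROCEDURE dbo.LoadSourcePrice
    @SecurityID INT,
    @Bid NUMERIC(28,14) = NULL,
    @Ask NUMERIC(28,14) = NULL,
    @SuppliedColumns VARCHAR(4000) -- e.g. 'Bid,Ask': the columns the source actually sent
AS
BEGIN
    SET NOCOUNT ON;
    -- Store the price along with the explicit list of supplied columns, so the
    -- pricing engine never has to guess from NULLs which attributes arrived.
    INSERT INTO dbo.SourcePriceLoad (SecurityID, Bid, Ask, SuppliedColumns)
    VALUES (@SecurityID, @Bid, @Ask, @SuppliedColumns);
END
GO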
November 14, 2008 at 11:55 am
I believe I'd put a default of ' ' on the nullable column... that way, if an explicit NULL is inserted on the column, it will be NULL... if the column had no insert, it will be blank instead of NULL. That would make it pretty easy...
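Just to sketch the idea for a character column (the table and column names are made up, and I'm using a zero-length string as the marker - a space would work the same way):
CREATE TABLE dbo.AttrTest (ID INT, Attr VARCHAR(20) NULL DEFAULT '')
GO
INSERT INTO dbo.AttrTest (ID, Attr) VALUES (1, NULL) -- explicit NULL: Attr stays NULL
INSERT INTO dbo.AttrTest (ID) VALUES (2)             -- column omitted: the default '' kicks in
GO
SELECT ID,
       CASE WHEN Attr IS NULL THEN 'explicit NULL'
            WHEN Attr = ''    THEN 'not supplied'
            ELSE 'value supplied'
       END AS Supplied
FROM dbo.AttrTest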
--Jeff Moden
Change is inevitable... Change for the better is not.
November 16, 2008 at 9:53 am
Hi Jeff,
Yes, I'd considered something similar to your suggestion - it works perfectly for char data; unfortunately, being a pricing engine, most of the fields are numeric. One idea was to put a value such as -9999999999 in as the default value, but of course this blows out database size. Another idea would be to store the numerics as varchar and then convert to numeric - that way I could still use the zero-length string - but I'm not so hot on that method. There really doesn't seem to be any straightforward solution to this issue...
November 16, 2008 at 10:11 am
I agree on both counts... storing numerics as varchar just doesn't sit well with me either. Neither does storing a large negative number as a default. You could store "0" as the default, but that's a bit contrary to what I use NULL's for... to me, NULL's mean "I dunno the value" and storing 0 for unknown values doesn't sit so hot with me either.
I'm afraid you're correct... doesn't seem to be a straightforward way to do this except to get MS to make it an option.
--Jeff Moden
Change is inevitable... Change for the better is not.
November 17, 2008 at 6:18 am
OK, I'm going to try the following: create every extended field as a sql_variant, so I can then assign a zero-length string to any column. Not great for space (16 bytes of meta-data per value) but I don't see an alternative.
As I've never used sql_variant before, can anyone give me a heads-up on the pros/cons of using this data type (other than the aforementioned non-optimal usage of space)?
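For what it's worth, here's roughly how I'm picturing it - the table and column names are placeholders only, and the base type of each value can be checked with SQL_VARIANT_PROPERTY:
CREATE TABLE dbo.ExtAttrTest
(
    PriceID INT,
    Attr1 SQL_VARIANT NULL DEFAULT ''
)
GO
INSERT INTO dbo.ExtAttrTest (PriceID, Attr1) VALUES (1, CAST(1.25 AS NUMERIC(28,14)))
INSERT INTO dbo.ExtAttrTest (PriceID, Attr1) VALUES (2, NULL) -- explicit NULL
INSERT INTO dbo.ExtAttrTest (PriceID) VALUES (3)              -- omitted: defaults to ''
GO
-- '' marks "not supplied", NULL means an explicit NULL, anything else is a real value.
SELECT PriceID, Attr1,
       SQL_VARIANT_PROPERTY(Attr1, 'BaseType') AS BaseType
FROM dbo.ExtAttrTest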
Thanks in advance,
Grant
November 17, 2008 at 9:38 am
1) Indexing, should you find it necessary to index one of the extended fields.
2) Not using the appropriate data type for each column in the table.
November 17, 2008 at 5:55 pm
And, from BOL...
"ODBC does not fully support sql_variant. Therefore, queries of sql_variant columns are returned as binary data when you use Microsoft OLE DB Provider for ODBC (MSDASQL). For example, a sql_variant column that contains the character string data 'PS2091' is returned as 0x505332303931.
"
--Jeff Moden
Change is inevitable... Change for the better is not.
November 18, 2008 at 2:13 am
Hi Lynn and Jeff,
Thanks for your valuable input. I evaluated each concern and concluded the benefits outweigh the costs:
1) Indexing a sql_variant - per BOL:
sql_variant columns can be used in indexes and unique keys, as long as the length of the data in the key columns does not exceed 900 bytes.
It will never be the case that the values in any of these columns will exceed 900 bytes, as the underlying data type is numeric(28,14).
2) ODBC - the table in question is never going to be queried by an external app - final price records will have the extended attributes cast back to numeric then propagated to the portfolio management system for use in P&L calculation.
3) Columns in the table not represented with the correct data type. I agree this is not ideal, but as the columns will only ever contain one data type each, and all values will be hard-cast in comparison or output (see the sketch below), it's a manageable issue, I think. And at the moment, I don't have any alternatives.
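As a quick sketch of what I mean by hard-casting (again using the placeholder names from my earlier example), the extended attributes get cast back to numeric(28,14) on the way out:
-- Only values whose base type is numeric are cast and propagated; the ''
-- "not supplied" marker and explicit NULLs fall through to NULL here.
SELECT PriceID,
       CASE WHEN CAST(SQL_VARIANT_PROPERTY(Attr1, 'BaseType') AS SYSNAME) = N'numeric'
            THEN CAST(Attr1 AS NUMERIC(28,14))
       END AS Attr1
FROM dbo.ExtAttrTest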
Additionally, I re-created my price table (27 million rows) and found that the overhead of sql_variant in terms of space was much smaller than I expected - converting 20 columns from numeric(28,14) (using vardecimal storage) to sql_variant increased my table size from 4.77 GB to 6.1 GB. I was expecting an increase of up to 10 GB. Of course, I'll be performing tests to make sure that this incremental size growth holds during updates and inserts.
Thanks again for your help,
Grant
November 18, 2008 at 5:24 am
Grant Scanlen (11/18/2008)
It will never be the case that the values in any of these columns will exceed 900 bytes, as the underlying data type is numeric(28,14).
I guess sql_variant will allow you to meet the needs of your trigger, but it sure does seem a shame to do this.
--Jeff Moden
Change is inevitable... Change for the better is not.
November 18, 2008 at 5:40 am
I agree with you Jeff....I just don't see another way of achieving what I need to do. At least not without putting a 3GL business logic layer on top of the DB, which I've studiously been trying to avoid.