May 22, 2011 at 9:52 am
Comments posted to this topic are about the item Detecting Changes to a Table
May 23, 2011 at 3:47 am
In this MSDN article there is information about what I understand to be a "native" way of doing change tracking, in the context of building applications for Sync Framework on SQL Server 2008. Is it based on the CHECKSUM(), BINARY_CHECKSUM(), and CHECKSUM_AGG() functions mentioned in the article, or is it a third way?
How to: Use SQL Server Change Tracking http://msdn.microsoft.com/en-us/library/cc305322.aspx
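For reference, the checksum-based detection the article discusses can be sketched roughly as follows. This is a minimal illustration, assuming a hypothetical table dbo.InvHeader; note that checksums can collide, so a matching value does not strictly prove the data is unchanged.

```sql
-- Sketch of the CHECKSUM_AGG approach; dbo.InvHeader is a hypothetical table.
-- The aggregated checksum is captured, persisted somewhere (e.g. a control
-- table), and compared later to detect whether any row has changed.
DECLARE @previous INT, @current INT;

-- Baseline snapshot.
SELECT @previous = CHECKSUM_AGG(BINARY_CHECKSUM(*)) FROM dbo.InvHeader;

-- ... time passes, data may be modified ...

SELECT @current = CHECKSUM_AGG(BINARY_CHECKSUM(*)) FROM dbo.InvHeader;

IF @current <> @previous
    PRINT 'Table contents have (probably) changed.';
```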
May 23, 2011 at 4:55 am
Hi jongy,
I'm afraid I am not very familiar with change tracking.
I also skim-read the article you linked, but can see no mention of the CHECKSUM functions discussed in my article.
Regards,
Lawrence
May 23, 2011 at 4:59 am
Lawrence,
But do you then agree that the MSDN article outlines a third method for change tracking, additional to the ones discussed in the SQL Central article, or am I misunderstanding something here?
/jongy
May 23, 2011 at 5:09 am
Agreed. I believe that the change tracking functionality is designed primarily to act at a lower level of granularity, so that individual row changes to a table can be audited, but I imagine you could also use it to provide an aggregated, summary "table level" view to judge if any changes have been performed across the whole table.
Thanks for pointing this out.
Regards,
Lawrence
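To make the comparison concrete, here is a minimal sketch of the SQL Server 2008 Change Tracking feature from the linked MSDN article, used to detect table-level changes. The database name, table name, and primary key column are assumptions for illustration; Change Tracking also requires the table to have a primary key.

```sql
-- Enable Change Tracking at the database and table level
-- (MyDb, dbo.InvHeader, and InvHeaderID are hypothetical names).
ALTER DATABASE MyDb
    SET CHANGE_TRACKING = ON (CHANGE_RETENTION = 2 DAYS, AUTO_CLEANUP = ON);

ALTER TABLE dbo.InvHeader
    ENABLE CHANGE_TRACKING;

-- Capture the version at the time of the last check...
DECLARE @last_version BIGINT = CHANGE_TRACKING_CURRENT_VERSION();

-- ...then later, list rows changed since that version.
-- An empty result means no tracked changes occurred in the interval.
SELECT ct.SYS_CHANGE_OPERATION, ct.InvHeaderID
FROM CHANGETABLE(CHANGES dbo.InvHeader, @last_version) AS ct;
```

Unlike the checksum approach, this reports which rows changed and how (insert, update, or delete), at the cost of enabling the feature and retaining change metadata.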
May 23, 2011 at 6:29 am
What if you add a column UPDATED_ON of type datetime with default to GETDATE() ?
I suppose that it would make it work.
May 23, 2011 at 6:30 am
Hi fmendes,
That would cover inserted rows only, but not cater for updates on the row, nor row deletions.
Regards,
Lawrence.
May 23, 2011 at 7:45 am
SQL Server maintains statistics, including counts and timestamps, whenever table indexes are updated. This metadata can be queried from an interesting dynamic management view called sys.dm_db_index_usage_stats. For some situations this would suit the purpose of detecting table changes.
For example:
select object_name(s.object_id) as table_name, i.name as index_name,
last_user_update, user_updates
from sys.dm_db_index_usage_stats as s
join sys.indexes i on i.object_id = s.object_id and i.index_id = s.index_id
where object_name(s.object_id) = 'InvHeader';
table_name index_name last_user_update user_updates
---------- ----------------- ----------------------- ------------
InvHeader pk_invheader 2011-05-20 15:50:07.210 3713
InvHeader uix_invheader 2011-05-19 19:15:01.370 371
There are other columns in this view that return the number of seeks, scans, etc., so it can also be leveraged to determine how often indexes or tables are being accessed.
"Do not seek to follow in the footsteps of the wise. Instead, seek what they sought." - Matsuo Basho
May 23, 2011 at 8:06 am
Hi Eric,
Many thanks for your post.
It is true that DMVs offer lots of useful information, some of which could be applied for requirements discussed in my article.
However, DMVs typically require elevated user permissions, such as VIEW SERVER STATE.
Regards,
Lawrence
May 23, 2011 at 8:13 am
Thanks! You're correct.
I should have thought of timestamp/rowversion instead of datetime.
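A sketch of that rowversion idea, with hypothetical table and column names: SQL Server bumps a rowversion column automatically on every INSERT and UPDATE, so the maximum value changes whenever any row is inserted or modified.

```sql
-- Hypothetical table with a rowversion column; SQL Server maintains
-- the rv column automatically on every INSERT and UPDATE.
CREATE TABLE dbo.InvHeader
(
    InvHeaderID INT IDENTITY PRIMARY KEY,
    Amount      MONEY,
    rv          ROWVERSION
);

-- Compare these values against those captured at the previous check.
-- Note: a DELETE removes its rowversion, so pairing MAX(rv) with
-- COUNT(*) gives better coverage of deletions than MAX(rv) alone.
SELECT MAX(CAST(rv AS BIGINT)) AS max_version, COUNT(*) AS row_count
FROM dbo.InvHeader;
```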
May 23, 2011 at 10:38 am
Lawrence,
Thanks for taking the time to write this.
I do not totally agree, however. While in theory you are correct, best practice is of course to have an updated-datetime column, and probably also an updated-by column, on tables. Your stored procs or triggers should always update these columns, which should always give you a different checksum.
So while theoretically you are right, in common "best practice" reality a checksum is a viable option to track table changes.
H.
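The audit-column convention described above can be sketched as follows. All names are assumptions for illustration, including the InvHeaderID primary key; the trigger keeps updated_on and updated_by current on every UPDATE, so any change also alters the row's checksum. Deletions still need separate handling, e.g. an audit table.

```sql
-- Hypothetical audit columns; defaults cover INSERTs.
ALTER TABLE dbo.InvHeader
    ADD updated_on DATETIME NOT NULL DEFAULT GETDATE(),
        updated_by SYSNAME  NOT NULL DEFAULT SUSER_SNAME();
GO

-- Trigger keeps the audit columns current on UPDATEs.
CREATE TRIGGER trg_InvHeader_Audit
ON dbo.InvHeader
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    UPDATE ih
    SET updated_on = GETDATE(),
        updated_by = SUSER_SNAME()
    FROM dbo.InvHeader AS ih
    JOIN inserted AS i ON i.InvHeaderID = ih.InvHeaderID;  -- hypothetical PK
END;
```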
May 23, 2011 at 10:48 am
I use system tables to see if the table has been updated:
SELECT @expiration_dt = [modify_date]
FROM [mydb].[sys].[tables]
WHERE [name] = 'mytable'
If I detect @expiration_dt to be newer than my stored data (which obviously is datetime'd), then I rerun my code.
May 23, 2011 at 10:49 am
Thanks HansB,
It's a very good point you raise. Of course you are correct. However, I think it's still worthwhile highlighting the shortcomings of the CHECKSUM functions to further encourage the "best practice" approach to be followed. 😉
Many thanks,
Lawrence
May 23, 2011 at 10:57 am
Hi virtualjosh,
I'd be very careful using the sys.tables.modify_date column.
In my experience, it is not kept up to date in real time with data changes.
For example, try the following:
CREATE TABLE test1 (i INT, vc1 VARCHAR(10))
SELECT modify_date FROM sys.tables WHERE name='test1'
INSERT test1 VALUES (1, 'row1')
SELECT modify_date FROM sys.tables WHERE name='test1'
The values returned are the same; the INSERT did not change modify_date.
Regards,
Lawrence
May 23, 2011 at 11:07 am
virtualjosh (5/23/2011)
I use system tables to see if the table has been updated:
SELECT @expiration_dt = [modify_date]
FROM [mydb].[sys].[tables]
WHERE [name] = 'mytable'
If I detect @expiration_dt to be newer than my stored data (which obviously is datetime'd), then I rerun my code.
The modify_date column on the sys.tables and sys.objects catalog views contains the date/time the schema for an object was last altered (for example, when you add a new column). It doesn't contain the date/time of the last insert/update/delete.
"Do not seek to follow in the footsteps of the wise. Instead, seek what they sought." - Matsuo Basho