March 20, 2016 at 10:40 pm
Comments posted to this topic are about the item Do I Want to Design My Own Transaction Log?
March 21, 2016 at 3:06 am
Maybe you could solve the ID problem using a sequence in the master database that you query before adding a new row on the remote databases?
https://msdn.microsoft.com/en-us/library/ff878091.aspx
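A minimal sketch of the idea, with hypothetical object names (CREATE SEQUENCE needs SQL Server 2012 or later):

    -- A shared sequence kept in the master database:
    CREATE SEQUENCE dbo.LogIdSequence
        AS BIGINT
        START WITH 1
        INCREMENT BY 1;

    -- A remote database asks the master for the next value before inserting:
    DECLARE @NewId BIGINT = NEXT VALUE FOR dbo.LogIdSequence;
    -- ...then uses @NewId as the primary key for the new row at the remote site.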
Regards
Marius
March 21, 2016 at 4:57 am
This was an interesting read, something I may experiment with. I've done something similar with what I would call "change logs" using checksums (if I were to design it again, I'd use HASHBYTES), but those were after the fact instead of transactional. How often are errors added to the table?
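The checksum approach I mean is roughly this sketch, assuming a hypothetical Customer table (CONCAT needs SQL 2012+):

    -- Hypothetical source table:
    CREATE TABLE dbo.Customer
    (
        CustomerId INT PRIMARY KEY,
        Name  NVARCHAR(100),
        Phone NVARCHAR(25)
    );

    -- One hash per row; comparing stored hashes to freshly computed ones
    -- flags changed rows after the fact, not as each change happens:
    SELECT CustomerId,
           HASHBYTES('SHA1', CONCAT(Name, N'|', Phone)) AS RowHash
    FROM dbo.Customer;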
I understand your qualifiers of how it is not a true replacement of transaction log, but it accomplishes the functions needed. Thanks for writing this up!
March 21, 2016 at 6:03 am
I think many of your complications, such as needing negative primary keys, would have been avoided if you had just built the tables using a GUID as your primary key. This is what is recommended when using replication anyway, when the remote databases can be updated and you need to sync the records back to a master database.
SQL Server even supports auto-generating a GUID for each new record, just like it does for integers. This solves your problems by letting the remote databases insert new records whose primary keys won't conflict with the master's or with the other remote databases'.
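A sketch of what that looks like (table and constraint names are hypothetical; NEWSEQUENTIALID keeps the GUIDs roughly ordered to reduce index fragmentation):

    CREATE TABLE dbo.UpdateLog
    (
        LogId UNIQUEIDENTIFIER NOT NULL
            CONSTRAINT DF_UpdateLog_LogId DEFAULT NEWSEQUENTIALID()
            CONSTRAINT PK_UpdateLog PRIMARY KEY,
        LoggedAt DATETIME2 NOT NULL
            CONSTRAINT DF_UpdateLog_LoggedAt DEFAULT SYSUTCDATETIME()
    );
    -- Every office now generates globally unique keys with no coordination.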
-Greg
March 21, 2016 at 6:53 am
Overall, you have a good system going. I have been contemplating something similar myself, combining this functionality with an audit log for when someone wants to know who made a given change. Like others, there are a few things I would likely do differently.
Like others, I question the use of temporary negative numbers for your IDs. You could use GUIDs as suggested. You could also add a source ID that identifies the office into the mix. For that matter, you could use a combination of datetime2 and an identifier of who made the log entry as your primary key.
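Roughly, that last idea would look something like this (all names hypothetical):

    CREATE TABLE dbo.ChangeLog
    (
        SourceId  SMALLINT     NOT NULL,  -- which office made the change
        ChangedBy NVARCHAR(64) NOT NULL,  -- who made the log entry
        ChangedAt DATETIME2(7) NOT NULL
            CONSTRAINT DF_ChangeLog_ChangedAt DEFAULT SYSDATETIME(),
        CONSTRAINT PK_ChangeLog PRIMARY KEY (SourceId, ChangedBy, ChangedAt)
    );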
I also wonder why in the world you felt the need to prefix all of your column names. I know this is common, but what else could they possibly refer to? If you have a table called "UpdateLog", then you expect all of its columns to relate to the log. As you follow your path, you end up with ever more bizarre prefixes on your column names.
Finally, if you are using a single log to update multiple tables, I wonder about the feasibility of using either XML or JSON to hold the heart of the update information. But understand this is coming from a developer who sees it as a simple way to write the conversion code; I don't know how hard or easy it would be to deal with all that in SQL.
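A rough illustration of capturing a whole changed row as XML, assuming the same hypothetical Customer table as above (native JSON support only arrived in SQL Server 2016, so XML is the safer bet on older editions):

    DECLARE @Payload XML =
    (
        SELECT CustomerId, Name, Phone
        FROM dbo.Customer
        WHERE CustomerId = 42
        FOR XML PATH('Customer'), TYPE
    );
    -- One log row could then store @Payload instead of one row per changed column.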
March 21, 2016 at 7:44 am
Tim, I know you ruled out replication based on using Web edition, but do you think it was worth the effort to build something custom vs. just moving to Standard? For me it's long-term supportability - replication is well understood and relatively easy to hire for (and it managed identity ranges just fine!). Also, was the MS Sync Framework an option you looked at? Thanks for the article.
March 21, 2016 at 8:45 am
Marius E (3/21/2016)
Maybe you could solve the ID problem using a sequence in the master database, that you query before adding a new row on the remote databases?
Thanks, that is something I did not know about; it would save having to keep the next-row-ID table.
Also, one could have a different sequence at each location if there are not millions of rows being inserted.
However, CREATE SEQUENCE was introduced in SQL Server 2012, and we are using 2008 R2.
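On 2008 R2, a similar effect could be faked with IDENTITY seeds by giving each office its own range; a sketch with hypothetical names and numbers:

    -- Office 1 seeds its local log at 1, office 2 at 1,000,000,000, etc.:
    CREATE TABLE dbo.UpdateLogLocal
    (
        LogId BIGINT IDENTITY(1000000000, 1) NOT NULL
            CONSTRAINT PK_UpdateLogLocal PRIMARY KEY,
        LoggedAt DATETIME2 NOT NULL DEFAULT SYSUTCDATETIME()
    );
    -- As long as the ranges never overlap, rows merge into the master without
    -- key collisions (essentially what replication's identity-range
    -- management automates).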
March 21, 2016 at 8:53 am
clb39 (3/21/2016)
...How often are errors added to the table?
In the entire time it has logged 48 error rows, all of them "possible concurrency errors" because the "Old" value did not match what was in the table. I think they were the result of a programming bug. Although, as I said, the remote part was never implemented, so that might have added more problems...
March 21, 2016 at 8:59 am
Gqmobile (3/21/2016)
I think many of your complications, such as needing negative primary keys, would have been avoided if you had just built the tables using a GUID as your primary key. This is what is recommended when using replication anyway, when the remote databases can be updated and you need to sync the records back to a master database. SQL Server even supports auto-generating a GUID for each new record, just like it does for integers. This solves your problems by letting the remote databases insert new records whose primary keys won't conflict with the master's or with the other remote databases'.
-Greg
That would be the obvious solution. I am not sure why I didn't use GUIDs; probably all the warnings about performance and space, although that is not as much of a problem here...
March 21, 2016 at 9:16 am
kiwood (3/21/2016)
... You could also add a source ID that identifies the office into the mix. For that matter, you could use a combination of datetime2 and an identifier of who made the log entry as your primary key.
That would work, and would also automatically document where changes/additions originate.
Maybe it's my orientation toward integer keys and not thinking "outside the box"...
kiwood (3/21/2016)
I also wonder why in the world you felt the need to prefix all of your column names. I know this is common, but what else could they possibly refer to? If you have a table called "UpdateLog", then you expect all of its columns to relate to the log. As you follow your path, you end up with ever more bizarre prefixes on your column names.
This is something that started with the standards used by the employer where I first worked with SQL, and I have just continued them (integer identity-assigned keys as well).
An advantage might be in joins where you know the source of the column without having to use multi-part names.
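For instance, with hypothetical tables built on that convention, each column's source is obvious in a join without aliases or multi-part names:

    CREATE TABLE UpdateLog (ulLogID INT PRIMARY KEY, ulUserID INT);
    CREATE TABLE Users     (usUserID INT PRIMARY KEY, usUserName NVARCHAR(64));

    -- The ul/us prefixes say which table each column comes from:
    SELECT ulLogID, usUserName
    FROM UpdateLog
    JOIN Users ON ulUserID = usUserID;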
kiwood (3/21/2016)
Finally, if you are using a single log to update multiple tables, I wonder about the feasibility of using either XML or JSON to hold the heart of the update information. But understand this is coming from a developer who sees it as a simple way to write the conversion code; I don't know how hard or easy it would be to deal with all that in SQL.
So the log row could hold the complete updated row instead of needing one row for each column?
March 21, 2016 at 9:32 am
Andy Warren (3/21/2016)
Tim, I know you ruled out replication based on using Web edition, but do you think it was worth the effort to build something custom vs. just moving to Standard? For me it's long-term supportability - replication is well understood and relatively easy to hire for (and it managed identity ranges just fine!). Also, was the MS Sync Framework an option you looked at? Thanks for the article.
I work for a non-profit with limited resources. Someone else decided on the web edition before I got here, and that is the edition we are stuck with.
As for whether it was worth the effort, maybe not, but I enjoy a challenge and tend to think of coding as the solution. Also, there is not a lot of data to manage around here...
We have moved more toward using web-based forms for updates from the remote locations, but if we revive this I will definitely look into MS Sync Framework.
March 21, 2016 at 11:53 am
I would add one column with the ID of the remote office inserting the record, to overcome the negative IDs and keep the keys small and "sequential" (GUIDs are mostly random).
Assuming all changes go to the main office and never replicate between the child DBs, that could work. Plus, you would always know which location a record came from.
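A sketch of that layout (names hypothetical):

    CREATE TABLE dbo.RemoteUpdateLog
    (
        OfficeId SMALLINT NOT NULL,            -- which office inserted the row
        LocalId  INT IDENTITY(1, 1) NOT NULL,  -- small, locally sequential
        CONSTRAINT PK_RemoteUpdateLog PRIMARY KEY (OfficeId, LocalId)
    );
    -- The office a record came from is always visible in the key itself.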
March 21, 2016 at 12:53 pm
Interesting article, Tim. Definitely not an easy thing to code. I've never worked with Web Edition, so I know my approaches would be different. For what you had to work with, it's a decent solution. Though I've never used CREATE SEQUENCE, and as you noted the edition you're on doesn't support it, I think tagging every record with a branch ID and a sequence number, the two forming all or part of a PK, could have saved a lot of grief.
I had to create a sneakernet replication process via 3.5" floppies for a detective unit doing a special investigation back in the '90s. There was no network of any sort; everyone was on laptops and needed static copies of the other detectives' data. It was interesting, and it worked perfectly once I got it working. Fun times.
-----
[font="Arial"]Knowledge is of two kinds. We know a subject ourselves or we know where we can find information upon it. --Samuel Johnson[/font]
March 21, 2016 at 1:49 pm
Wayne West (3/21/2016)
I had to create a sneakernet replication process via 3.5" floppies for a detective unit doing a special investigation back in the '90s. There was no network of any sort; everyone was on laptops and needed static copies of the other detectives' data. It was interesting, and it worked perfectly once I got it working. Fun times.
I can relate. I did a sneakernet transfer to and from a MUMPS database using Visual Basic, also in the '90s.
March 21, 2016 at 6:52 pm
You might want to check how you're licensing SQL Server Web edition. I do not believe the license was designed for consumer use.