March 20, 2009 at 9:15 am
Hi all, I didn't mean for this thread to run this long or to drift off topic as much as it did.
I did learn some things, though, and I'm very glad I did.
We have some home-built sync software, designed in 2004, that we have been running successfully against 3 software tools and 5 different databases.
Some of the things I have learned over the course of this process have been really good stuff. One of the biggest lessons: triggers SUCK, unless you are using them to perform messaging with Service Broker. Then they can come in handy for minor synchronous updates.
We send data as XML; before transmission, a dataset is zipped and then encrypted, and on the other end decrypted and then unzipped, in that order. (The first time I tried this I accidentally mixed the order up, and that just can't happen and come out right.)
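The same ordering can be sketched in T-SQL on newer versions (COMPRESS/DECOMPRESS need SQL Server 2016+, ENCRYPTBYPASSPHRASE caps its payload at 8,000 bytes, and the passphrase is a placeholder, so treat this as illustration only):
DECLARE @xml NVARCHAR(MAX) = N'<dataset><row id="1" /></dataset>';
-- Zip first, then encrypt; ciphertext is high-entropy and will not compress.
DECLARE @packed VARBINARY(MAX) = ENCRYPTBYPASSPHRASE('MyPassphrase', COMPRESS(@xml));
-- On receipt, reverse the order: decrypt first, then unzip.
SELECT CAST(DECOMPRESS(DECRYPTBYPASSPHRASE('MyPassphrase', @packed)) AS NVARCHAR(MAX));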
For the first 3 CRM databases, which are very similar, we used triggers for updating, inserting, and deleting records. This did OK, but I think we should have just written the requirements into the code without the triggers, because it isn't very pretty.
The last database we designed for bi-directional sync; instead of triggers, I designed my own SQO class that handled all the updates, inserts, deletes, etc. And yes, it was really fun/hard to do given everything that was involved. I can send you that base class if you like. I got the idea of SQO from Developer Express. This is where we decided to use computer name plus a number to create unique records for syncing, and it seems to work very well.
And last but not least (NON-SQL): we sync DLLs and other project files using hash values of those files and an XML manifest. The XML file is built on the client with all the file names and hash values. It is sent to the server, where a process collects the hash value and file name for each file on the server to be synced. If anything is different, we sync each file, one at a time, down to the client. If the file is named like Update*.sql, the client knows to run the script against the local machine. Sometimes we do this when table changes are made, or other changes. It just depends.
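The server-side comparison might look something like this, assuming the XML manifests have been shredded into two hypothetical tables:
-- Files to push down: on the server but missing or changed on the client.
SELECT s.FileName
FROM dbo.ServerManifest s
LEFT JOIN dbo.ClientManifest c
ON c.FileName = s.FileName
WHERE c.FileName IS NULL
OR c.HashValue <> s.HashValue;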
Updating code on clients is a snap; there is no need to worry about distribution. We just copy the code to an output folder, and it is synced down via the sync program. Very cool, and we have saved a lot of time managing our data and code this way.
Even though we built it here, the sync code is public domain. It isn't something we were going to sell, because there is better stuff out there, like Microsoft Sync (which I haven't looked at).
I want to apologize again to Lynn and anyone who took offense at anything I said; I had no intention of being hurtful. And I appreciate all of you folks sticking with me through my frustration.
Thank you all very much.
March 20, 2009 at 10:52 am
J (3/20/2009)
Jeff Moden (3/19/2009)
J (3/19/2009)
And as was mentioned before, a LineNo is the normal way of forcing a specific order. Using the script already posted to increment the LineNo to be consecutive allows you to do this in one efficient fell swoop.
Heh... which of the dozens of scripts on this thread do you speak of?
Ah! I lost it too in this marathon; it took me some time to retrieve it. See page 26.
JacekO
Posted 2 days ago @ 6:05 PM
Just
UPDATE dbo.YourTable -- (your table name here)
SET sequence = sequence + 1
WHERE sequence >= @Sequence
and then insert your new record.
And I learned it from one of your articles. And NO! I don't remember which one. :cool:
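Spelled out end to end, the shift-then-insert pattern looks like this (the table and columns here are hypothetical):
DECLARE @Sequence INT = 3; -- position for the new line

UPDATE dbo.OrderLines -- open a gap at position 3...
SET sequence = sequence + 1
WHERE sequence >= @Sequence;

INSERT INTO dbo.OrderLines (sequence, ItemDescription) -- ...and fill it
VALUES (@Sequence, N'Newly inserted line');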
Heh... nope... I don't believe I ever wrote that... If it were for a sequence table, I'd have written something like the following (partial code)....
UPDATE NextID
SET @NextID = NextID = NextID + @Increment
WHERE TableName = @TableName
SELECT @NextID = @NextID - @Increment
... that little bit of code dropped an average of 640 deadlocks per day down to near zero.
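To make that fragment self-contained, here is one way the surrounding pieces might look; the table layout and names are assumptions, not necessarily the original schema (DECLARE initializers need SQL Server 2008+):
CREATE TABLE dbo.NextID
(
TableName SYSNAME NOT NULL PRIMARY KEY,
NextID BIGINT NOT NULL
);

DECLARE @TableName SYSNAME = N'Invoice',
@Increment BIGINT = 1,
@NextID BIGINT;

UPDATE dbo.NextID
SET @NextID = NextID = NextID + @Increment -- column and variable assigned in one atomic statement
WHERE TableName = @TableName;

SELECT @NextID = @NextID - @Increment; -- back up to the first value of the reserved block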
--Jeff Moden
Change is inevitable... Change for the better is not.
March 20, 2009 at 1:06 pm
Hasn't SQL Server got something that lets you write queries in C#, similar to Oracle's handling of Java as native?
Wouldn't that be your ideal?
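(That feature is SQLCLR, available since SQL Server 2005. The T-SQL side of wiring in a .NET function looks roughly like this; the assembly, path, and names are hypothetical:)
EXEC sp_configure 'clr enabled', 1;
RECONFIGURE;

CREATE ASSEMBLY MyClrAssembly
FROM 'C:\Assemblies\MyClrAssembly.dll'; -- compiled from C#

CREATE FUNCTION dbo.ReverseString (@input NVARCHAR(4000))
RETURNS NVARCHAR(4000)
AS EXTERNAL NAME MyClrAssembly.[MyNamespace.StringFunctions].ReverseString;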
March 20, 2009 at 1:50 pm
foxjazz, I hope you leave this thread with more than just the knowledge we tried to impart. I hope you have also learned that there are quite a few very knowledgeable people on SSC that can help you when it comes to learning SQL and getting the most out of your system.
The key to getting better help, besides following the advice in Jeff Moden's article that happens to be the first one linked in my signature block below, is to be humble and open to different thought processes and how they are applied to various problems.
The biggest obstacle to getting the help you seek can be you. Some people can take simple word descriptions of problems and come up with a solution, for instance. Others, however, may need more detail and explanation of what you are trying to accomplish (like me). When converting code from cursor-based to set-based processing, it is important for us to see what you are doing so that a set-based process can be developed. There is more information in the code than you may think. We may even ask you why you are doing some things. This is important for understanding your system better and providing better assistance in return. Remember, you are the one working with the system, not us.
Thank you.
March 20, 2009 at 3:28 pm
Heh... nope... I don't believe I ever wrote that... If it were for a sequence table, I'd have written something like the following (partial code)....
UPDATE NextID
SET @NextID = NextID = NextID + @Increment
WHERE TableName = @TableName
SELECT @NextID = @NextID - @Increment
... that little bit of code dropped an average of 640 deadlocks per day down to near zero.
Hmmm, this is what I meant. Close, but no cigar.
March 20, 2009 at 3:34 pm
-> Foxjazz:
I insisted on the WHY of things, not in the mindset of an inventory manager who does not want to get off his steak. In a previous life I often saw customers who could not differentiate between a business requirement and a first-cut "solution".
The WHYs you got from several people were not in the tone of "justify yourself". They were genuine questions, asked in order to provide you with a real solution.
You can gauge the helpfulness of the people who kept attempting to help you even in spite of the abrasive tone you had started with.
March 20, 2009 at 3:47 pm
Jeff Moden (3/20/2009)
Heh... nope... I don't believe I ever wrote that... If it were for a sequence table, I'd have written something like the following (partial code)....
UPDATE NextID
SET @NextID = NextID = NextID + @Increment
WHERE TableName = @TableName
SELECT @NextID = @NextID - @Increment
... that little bit of code dropped an average of 640 deadlocks per day down to near zero.
I'd change that just a tad. NextID would be the last-used value, not the next-to-be-used value. That way you can drop the last SELECT that reduces the value of @NextID. That would shave a couple more milliseconds off the processing time.
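In code, I read that variant as something like this sketch (which, as comes out below, only works when the increment is 1):
UPDATE dbo.NextID
SET @NextID = NextID = NextID + 1 -- NextID now holds the last-used value
WHERE TableName = @TableName;
-- @NextID is already the newly reserved value; no trailing SELECT needed.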
March 20, 2009 at 3:53 pm
Lynn Pettis (3/20/2009)
Jeff Moden (3/20/2009)
Heh... nope... I don't believe I ever wrote that... If it were for a sequence table, I'd have written something like the following (partial code)....
UPDATE NextID
SET @NextID = NextID = NextID + @Increment
WHERE TableName = @TableName
SELECT @NextID = @NextID - @Increment
... that little bit of code dropped an average of 640 deadlocks per day down to near zero.
I'd change that just a tad. NextID would be the last-used value, not the next-to-be-used value. That way you can drop the last SELECT that reduces the value of @NextID. That would shave a couple more milliseconds off the processing time.
Nope... that wouldn't allow the increment to properly reserve a set of numbers.
--Jeff Moden
Change is inevitable... Change for the better is not.
March 20, 2009 at 4:05 pm
Jeff Moden (3/20/2009)
UPDATE NextID
SET @NextID = NextID = NextID + @Increment
WHERE TableName = @TableName
SELECT @NextID = @NextID - @Increment
... that little bit of code dropped an average of 640 deadlocks per day down to near zero.
I found that having one table for each ID you are keeping track of helps to reduce blocking/deadlocking. That way only one ID type can get blocked at a time. Since a table like this is usually very small, a page-level lock is about the same as a table lock.
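For example (table names hypothetical):
CREATE TABLE dbo.NextInvoiceID (NextID BIGINT NOT NULL);
CREATE TABLE dbo.NextCustomerID (NextID BIGINT NOT NULL);
-- Each single-row table locks independently, so contention on
-- one ID type cannot block requests for another.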
Not that I would ever recommend using this to get the next ID for anything I was designing, but when someone has already put something like this in place, it can really be a problem.
March 20, 2009 at 4:14 pm
Jeff Moden (3/20/2009)
Heh... nope... I don't believe I ever wrote that... If it were for a sequence table, I'd have written something like the following (partial code)....
UPDATE NextID
SET @NextID = NextID = NextID + @Increment
WHERE TableName = @TableName
SELECT @NextID = @NextID - @Increment
... that little bit of code dropped an average of 640 deadlocks per day down to near zero.
I'd create another "Tally" table with all possible numbers.
And then select the lowest vacant number(s).
I don't think there would be any chance of a deadlock.
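Something like this sketch (the Tally and target-table names are assumed):
SELECT TOP (1) t.N -- lowest number not yet in use
FROM dbo.Tally t
WHERE NOT EXISTS (SELECT 1 FROM dbo.TargetTable x WHERE x.ID = t.N)
ORDER BY t.N;
-- (Concurrent callers would still need locking hints to keep from grabbing the same N.)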
_____________
Code for TallyGenerator
March 20, 2009 at 5:04 pm
Jeff Moden (3/20/2009)
Lynn Pettis (3/20/2009)
Jeff Moden (3/20/2009)
Heh... nope... I don't believe I ever wrote that... If it were for a sequence table, I'd have written something like the following (partial code)....
UPDATE NextID
SET @NextID = NextID = NextID + @Increment
WHERE TableName = @TableName
SELECT @NextID = @NextID - @Increment
... that little bit of code dropped an average of 640 deadlocks per day down to near zero.
I'd change that just a tad. NextID would be the last-used value, not the next-to-be-used value. That way you can drop the last SELECT that reduces the value of @NextID. That would shave a couple more milliseconds off the processing time.
Nope... that wouldn't allow the increment to properly reserve a set of numbers.
Why?
March 20, 2009 at 5:27 pm
Michael Valentine Jones (3/20/2009)
Not that I would ever recommend using this to get the next ID for anything I was designing, but when someone has already put something like this in place, it can really be a problem.
Agreed... but lemme ask, what would you use if, for some strange reason or in the face of replication, you simply couldn't use an IDENTITY column?
--Jeff Moden
Change is inevitable... Change for the better is not.
March 20, 2009 at 5:28 pm
Lynn Pettis (3/20/2009)
Jeff Moden (3/20/2009)
Lynn Pettis (3/20/2009)
Jeff Moden (3/20/2009)
Heh... nope... I don't believe I ever wrote that... If it were for a sequence table, I'd have written something like the following (partial code)....
UPDATE NextID
SET @NextID = NextID = NextID + @Increment
WHERE TableName = @TableName
SELECT @NextID = @NextID - @Increment
... that little bit of code dropped an average of 640 deadlocks per day down to near zero.
I'd change that just a tad. NextID would be the last-used value, not the next-to-be-used value. That way you can drop the last SELECT that reduces the value of @NextID. That would shave a couple more milliseconds off the processing time.
Nope... that wouldn't allow the increment to properly reserve a set of numbers.
Why?
If you did it your way, which numbers would be reserved if you needed 100 numbers as the increment?
--Jeff Moden
Change is inevitable... Change for the better is not.
March 20, 2009 at 10:18 pm
Jeff Moden (3/20/2009)
Lynn Pettis (3/20/2009)
Jeff Moden (3/20/2009)
Lynn Pettis (3/20/2009)
Jeff Moden (3/20/2009)
Heh... nope... I don't believe I ever wrote that... If it were for a sequence table, I'd have written something like the following (partial code)....
UPDATE NextID
SET @NextID = NextID = NextID + @Increment
WHERE TableName = @TableName
SELECT @NextID = @NextID - @Increment
... that little bit of code dropped an average of 640 deadlocks per day down to near zero.
I'd change that just a tad. NextID would be the last-used value, not the next-to-be-used value. That way you can drop the last SELECT that reduces the value of @NextID. That would shave a couple more milliseconds off the processing time.
Nope... that wouldn't allow the increment to properly reserve a set of numbers.
Why?
If you did it your way, which numbers would be reserved if you needed 100 numbers as the increment?
You know, I was looking at it from the perspective of an increment of 1. It may take additional thought to see how it could be made to work if you were taking a block of values to use sequentially, one at a time, in memory, but I'm sure it could be done. Perhaps I'll look into this over the next week or so if I have some time. Could make for a simple article or blog. Definitely one for the mind.
March 21, 2009 at 9:21 am
Lynn Pettis (3/20/2009)
Jeff Moden (3/20/2009)
If you did it your way, which numbers would be reserved if you needed 100 numbers as the increment?
You know, I was looking at it from the perspective of an increment of 1. It may take additional thought to see how it could be made to work if you were taking a block of values to use sequentially, one at a time, in memory, but I'm sure it could be done. Perhaps I'll look into this over the next week or so if I have some time. Could make for a simple article or blog. Definitely one for the mind.
Absolutely correct... consider this... let's say the NextID is sitting at the number 1000. By definition of the process, that's the "Next ID available". You want to reserve 100 numbers to use.... so the Increment is 100. Here's what happens
Starting Condition:
NextID = 1000, Increment = 100.
Post Update Condition:
NextID = NextID+Increment = 1000+100 = 1100 for the next usage.
Post Select Condition:
@NextID = NextID-Increment = 1100-100 = 1000... the original NextID value for this run.
Final Condition:
@NextID = 1000, the original value correctly being sought.
NextID = 1100, which is the next ID available to the next person/run; this makes all the numbers from 1000 to 1099 (100 numbers, if you count them including the 1000) unavailable to anyone else, effectively reserving them.
Using either a temp table with an IDENTITY starting at 0 or a ROW_NUMBER() starting at 1, you can use the following formulas to insert new rows into the target table with the calculated PK (let's just call it "ID") like this...
--===== Temp table method (the IDENTITY column RowNum starts at 0)
INSERT INTO dbo.TargetTable
(ID, ...list of columns...)
SELECT @NextID + t.RowNum AS ID,
...list of columns...
FROM #SomeTempTable t
--===== ROW_NUMBER() method
INSERT INTO dbo.TargetTable
(ID, ...list of columns...)
SELECT @NextID + ROW_NUMBER() OVER (ORDER BY SomeColumn(s)) -1 AS ID,
...list of columns...
FROM SomeJoinedTablesOrWhatever
Because the variable @NextID and the value in the table column NextID are both derived during the same implicit transaction, there's no chance of someone "getting in between" a SELECT and an UPDATE like there might be when using other methods, either on the target table or on the NextID table. Sure, you can prevent that, but you'd need an explicit two-part declared transaction like the following traditional method...
SET XACT_ABORT ON
BEGIN TRANSACTION
SELECT @NextID = NextID
FROM dbo.NextID
WHERE TableName = @TableName
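-- Under concurrency, two sessions can read the same NextID here and then
-- collide on the UPDATE below; under REPEATABLE READ or stricter, both hold
-- shared locks and then deadlock requesting exclusive ones.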
UPDATE dbo.NextID
SET NextID = NextID + @Increment
WHERE TableName = @TableName
COMMIT
... and THAT exact code was what caused an average of 640 deadlocks per day with spikes to as many as 4000 per day on the system at work. Just to be complete, here's the code, again, that made all but a small handful of them (caused by other things) go away...
UPDATE dbo.NextID
SET @NextID = NextID = NextID + @Increment
WHERE TableName = @TableName
SELECT @NextID = @NextID - @Increment
It's very, very quick, allows for the reservation of more than just one NextID at a time with no additional overhead, and there's no chance of accidental duplication, ever, all while eliminating the NextID or Sequence table as a source of deadlocks.
Thinking ahead, it would be really cool if Microsoft either made it so functions could update tables, or just created a NextVal function like some other engines have, so people wouldn't have to jump through hoops when they're forced NOT to use IDENTITY as an incremental source for numbering rows (i.e., sequences) in a table.
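For reference, later versions granted exactly this wish: SQL Server 2012 added sequence objects.
CREATE SEQUENCE dbo.InvoiceID AS BIGINT START WITH 1000 INCREMENT BY 1;
SELECT NEXT VALUE FOR dbo.InvoiceID;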
--Jeff Moden
Change is inevitable... Change for the better is not.