September 14, 2015 at 6:54 am
I would like to have a 'counter' table which will hold the last used number and return a new number. This is my schema:
if object_id('tempdb.dbo.#Counter', 'u') is not null drop table #Counter
go
create table #Counter (
    Id int not null
)
go
if exists (select * from #Counter)
    update #Counter
    set Id = Id + 1
    output inserted.Id
else
    insert into #Counter (Id)
    output inserted.Id
    select 1
If the table is empty it returns 1, otherwise it returns the next number (Id + 1). But this query is not atomic (I guess...?), so it could determine that the #Counter table is empty and then try to insert into it, but in between someone else executes the insert as well. Could this query be rewritten with the MERGE statement so that the whole operation is atomic?
September 14, 2015 at 6:58 am
Can't remember what version of SQL it came in with - SQL2012 perhaps? - but that provides a SEQUENCE object which will do what you are describing - hopefully without the blocking aspect!
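Something along these lines, roughly - just a sketch, and the object name is invented:
create sequence dbo.CounterSeq
    start with 1
    increment by 1;
go
-- Each call hands out the next number; concurrency is handled by the engine
select next value for dbo.CounterSeq;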
September 14, 2015 at 7:22 am
Since you are using a temp table it is highly unlikely that another process is going to be able to access this table as it would be out of scope.
To be honest, this type of "roll your own" sequence is full of challenges. At the very least you could eliminate the IF statement by simply inserting a 0 to get your table started. Then there is no need to check for the existence of rows, unless this table gets truncated/deleted at some point. Have you tried using a table with an identity column instead? That would at the very least already deal with the concurrency challenges you are fighting.
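For example, something like this (only a sketch - seed the single row once, then every caller just runs the UPDATE):
-- One-time seed so the table is never empty
insert into #Counter (Id) values (0);
go
-- Each caller increments and reads back the new value in a single statement
update #Counter
set Id = Id + 1
output inserted.Id;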
_______________________________________________________________
Need help? Help us help you.
Read the article at http://www.sqlservercentral.com/articles/Best+Practices/61537/ for best practices on asking questions.
Need to split a string? Try Jeff Moden's splitter http://www.sqlservercentral.com/articles/Tally+Table/72993/.
Cross Tabs and Pivots, Part 1 – Converting Rows to Columns - http://www.sqlservercentral.com/articles/T-SQL/63681/
Cross Tabs and Pivots, Part 2 - Dynamic Cross Tabs - http://www.sqlservercentral.com/articles/Crosstab/65048/
Understanding and Using APPLY (Part 1) - http://www.sqlservercentral.com/articles/APPLY/69953/
Understanding and Using APPLY (Part 2) - http://www.sqlservercentral.com/articles/APPLY/69954/
September 14, 2015 at 7:32 am
@SeanLange: in reality it is a normal table, I just used a temp table in the example script, sorry about that.
My way of doing it now is like this:
create table MyCounter (
Id int identity(1, 1) not null,
constraint PK_MyCounter primary key clustered (Id)
)
go
insert into MyCounter default values
select scope_identity()
This works OK, but for each query a new record is inserted into the table. That's not a problem at all; I was just curious whether this can be done with a MERGE statement in SQL Server 2008 R2 🙂
September 14, 2015 at 8:04 am
Why do you want a table with the NextAvailableID in it - rather than just putting an IDENTITY property on the ID column in the underlying table?
We have used this type of device for two purposes:
1. To be able to allocate a sequence of IDs - e.g. as part of a Bulk Insert where we want to know the ID numbers which will be assigned ahead of the actual INSERT - perhaps to make it easier to simultaneously add Child Records linking to the Parent ID (a rough sketch of reserving a block this way is at the end of this post)
2. Sharing a NextAvailableID between multiple tables.
But we have always used the "Provide An ID" table approach as a last resort because of the issues with contention.
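For purpose 1, the pattern we have used is a single UPDATE that reserves a whole block in one statement - a rough sketch, with the table and column names invented here:
declare @BlockSize int = 100;
declare @Reserved table (LastId int);

-- Reserve @BlockSize IDs atomically
update dbo.NextAvailableId
set LastUsedId = LastUsedId + @BlockSize
output inserted.LastUsedId into @Reserved (LastId);

-- The caller now owns the range (LastId - @BlockSize + 1) .. LastId
select LastId - @BlockSize + 1 as FirstId, LastId
from @Reserved;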
September 14, 2015 at 8:26 am
_simon_ (9/14/2015)
@SeanLange: in reality it is a normal table, I just used a temp table in the example script, sorry about that. My way of doing it now is like this:
create table MyCounter (
Id int identity(1, 1) not null,
constraint PK_MyCounter primary key clustered (Id)
)
go
insert into MyCounter default values
select scope_identity()
This works OK, but for each query a new record is inserted into the table. That's not a problem at all; I was just curious whether this can be done with a MERGE statement in SQL Server 2008 R2 🙂
I sort of assumed/guessed it was a persistent table. 😉
What is wrong with this code? What really has me baffled is why you want to change code that works. And why MERGE? This is not the kind of thing MERGE was designed for. It was designed for...well...merging data from multiple sources into a single target. Sure you could force it to work with MERGE but it just isn't the right tool for the job.
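Purely for illustration - and this is only a sketch I have not battle-tested, using the #Counter layout from your first post - forcing it into MERGE would look something like this:
merge #Counter with (holdlock) as t
using (select 1 as Dummy) as s
    on 1 = 1                     -- single-row table, so match on a constant
when matched then
    update set Id = t.Id + 1
when not matched then
    insert (Id) values (1)
output inserted.Id;
The HOLDLOCK hint is doing the real work there; without it MERGE is just as exposed to the race condition you described as the IF EXISTS version.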
September 14, 2015 at 10:41 pm
Wow, such negative comments... As I wrote before, it was purely curiosity that made me ask how to convert my "if exists ... update ... else insert ..." into a MERGE statement.
What is wrong with this code? What really has me baffled is why you want to change code that works. And why MERGE? This is not the kind of thing MERGE was designed for. It was designed for...well...merging data from multiple sources into a single target. Sure you could force it to work with MERGE but it just isn't the right tool for the job.
Curiosity.
Why do you want a table with the NextAvailableID in it - rather than just putting an IDENTITY property on the ID column in the underlying table?
This is really off topic, but anyway - some of my rows have to be sent to another system (which is not in my control) and I have to provide the key for it; that is why I need a counter table.
September 15, 2015 at 1:15 am
some of my rows have to be sent to another system (which is not in my control) and I have to provide the key for it; that is why I need a counter table.
Can you share more details about the actual problem you are facing when you need to send the data to another system?
What is your current process for doing this, and what changes do you want to make to it?
That would clear a lot of the air, right?
September 15, 2015 at 1:36 am
_simon_ (9/14/2015)
This is really off topic
I don't think so 🙂 it is often helpful to understand the problem, rather than the proposed solution, because that may enable alternative, and hopefully better :), solutions to be suggested. As I have said, using a table for NextAvailableID is usually a dog 🙁 which I guess is why folk are questioning whether that is the right solution ...
some of my rows have to be sent to another system (which is not in my control) and I have to provide the key for it; that is why I need a counter table.
What is special about the ID? Is it something weird like A, B, C? Or is it just a number, but the starting number - or perhaps even "the starting number each day", or something like that - must be under your control?
For example, I would favour using IDENTITY and OFFSET columns (rather than a NextAvailableID table). When sending the data to the remote, add IDENTITY + OFFSET to provide the ID to the remote. Bit more tricky if the remote needs A, B, C rather than 1478, 1479, 1480, ... 🙂
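A rough sketch of what I mean - the table, column names and offset are all invented for illustration:
create table dbo.Fingerprint (
    Id int identity(1, 1) not null primary key,
    Quality int not null,
    RemoteOffset int not null default (100000)  -- whatever offset the remote system needs
);
go
-- When sending to the remote, derive the external key on the fly
select Id + RemoteOffset as RemoteId, Quality
from dbo.Fingerprint
where Quality > 75;  -- hypothetical quality threshold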
September 15, 2015 at 4:34 am
Thanks for your input everybody!
I should probably have written my initial question something like this:
I have a totally theoretical question regarding the MERGE statement. It can be used for 'upsert' functionality, where you insert nonexistent records or update existing records in one operation. In this scenario you have to list the columns used to decide whether a record already exists or not.
But what about a scenario where you have only one column and want to perform an insert if the table is empty and an update if the table has one record? In both cases we need the new sequence value returned. This is an example of what the MERGE statement should do:
if object_id('Counter', 'u') is not null drop table Counter
go
create table Counter (
    Id int not null
)
go
if exists (select * from Counter)
    update Counter
    set Id = Id + 1
    output inserted.Id
else
    insert into Counter (Id)
    output inserted.Id
    select 1
As I said before, this is a theoretical question because I already have a working solution. And yes, there are workarounds, but then the answer to my question is 'no, you cannot use the MERGE statement the way you wanted', I'm a happy bunny, and the answer would be marked as the solution.
My working solution:
create table Counter (
    Id int identity(1, 1) not null, -- in reality it is: Id int identity(500000, 1) not null,
    constraint PK_Counter primary key clustered (Id)
)
go
insert into Counter default values
select scope_identity()
Again - because this is a theoretical and unambiguous question (I think) about the MERGE statement - I don't see a point in asking why I need it. But because I wasn't clear in my original question and because you asked: I save fingerprints in my database and I have to send every fingerprint with a quality larger than some constant to another system. Together with the fingerprint I also send an id (it has to be a number, unique, larger than 500000 and consecutive; please don't ask why, it is outside my control). This id is also saved together with all those 'good quality' fingerprints in my database. Later, when I try to find a fingerprint in this other system, it returns the ids of all matches, and because I have those exact ids in my database as well, I know whose fingerprints they are.
September 15, 2015 at 6:23 am
Thanks for sharing some details.
A few suggestions:
1. It would be better to include a foreign key to the fingerprint table in your counter table, as this will help you identify exactly which counter value belongs to which fingerprint Id (you might have done this already; I am only going by the example you shared).
2. The MERGE statement usually helps with set-based insert/update/delete. What that means is that if you are only inserting a single row into the fingerprint table, it wouldn't be a great help to you,
but if you want to insert in batches then MERGE can assist you. Check the following:
USE tempdb;
GO
-- Drop the referencing table first so the foreign key doesn't block the drop on a re-run
if object_id('CounterTable') is not null
    DROP TABLE CounterTable;
GO
if object_id('YourTable') is not null
    DROP TABLE YourTable;
GO
Create Table YourTable
(
    Id int identity(1,1) Primary key
    , FingerPrintValue int -- just for the sake of the example
    , CreatedOn datetime Default(Getdate())
);
GO
Create table CounterTable
(
    Id int identity(500001,1)
    , YourTableId int
        CONSTRAINT [fkCounterTable_YourTableId] FOREIGN KEY (YourTableId) REFERENCES YourTable ([Id])
);
GO
insert into YourTable (FingerPrintValue)
select 1212100111 union all
select 1212100112 union all
select 1212100113 union all
select 1212100114;
GO
---- Here comes the MERGE statement: add a counter row for every fingerprint row that doesn't have one yet
Merge CounterTable t
using YourTable s
    on s.Id = t.YourTableId
WHEN not matched THEN
    insert (YourTableId)
    values (s.Id);

select * from CounterTable;
hope it helps
September 15, 2015 at 7:21 am
_simon_ (9/14/2015)
Wow, such negative comments... As I wrote before, it was purely curiosity that made me ask how to convert my "if exists ... update ... else insert ..." into a MERGE statement.
Nothing I said was intended to be negative in any way. If it was taken that way then I apologize. I have reread my comments more than once and I don't see this negativity, but again, my apologies if it was interpreted as such.
What is wrong with this code? What really has me baffled is why you want to change code that works. And why MERGE? This is not the kind of thing MERGE was designed for. It was designed for...well...merging data from multiple sources into a single target. Sure you could force it to work with MERGE but it just isn't the right tool for the job.
Curiosity.
Fair enough. There is nothing wrong with seeking new methods of doing stuff.
Why do you want a table with the NextAvailableID in it - rather than just putting an IDENTITY property on the ID column in the underlying table?
This is really off topic, but anyway - some of my rows have to be sent to another system (which is not in my control) and I have to provide the key for it; that is why I need a counter table.
I don't think this is off topic at all. You described a process and I suggested an alternative method. All too often people come to the forums asking for help on "how do I do..." and the best approach is a completely different path.
If you were on 2012 I would suggest a sequence. Both a sequence and an identity have already dealt with all the headaches of concurrency so you don't have to. Concurrency is one of the most difficult aspects of a roll-your-own counter table. From what you have posted, I would say your current approach is probably good enough.
September 15, 2015 at 4:38 pm
_simon_ (9/15/2015)
I save fingerprints in my database and I have to send every fingerprint with a quality larger than some constant to another system
Do you have to send such records "immediately", or can they be accumulated and sent somewhat later as a "batch"?
Together with the fingerprint I also send an id (it has to be a number, unique, larger than 500000 and consecutive
If they are batch processed I would assign the ID number at the point of submitting a batch. A batch of one is fine, it's just not as efficient as a larger batch would be - the larger the batch the better.
My thought is to initially assign an ID of 0, -1, or NULL - something like that - although you might use NULL for "not required" and, say, -1 for "required but not yet allocated".
At the point of submitting the batch (whether one row or several) I would then "number" the ID column for the records in the batch using a combination of ROW_NUMBER() OVER() and MAX(ID): get the current maximum ID in use and then add the ROW_NUMBER() offset for each record in the batch.
The benefit of this, as I see it, is that it avoids having to have an ID table, which is a bottleneck that will LOCK/BLOCK inserts, and it will provide contiguous numbers (which IDENTITY will not - if an insert is rolled back then IDENTITY will not re-issue that ID number again; it is considered "used" once it has been allocated, even if not actually "saved").
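A rough sketch of that numbering step - names invented, and -1 here means "required but not yet allocated":
declare @MaxId int;

-- Current highest ID already handed out (fall back to a starting value if none yet)
select @MaxId = isnull(max(RemoteId), 500000)
from dbo.Fingerprint
where RemoteId > 0;

-- Number the pending batch contiguously on top of that maximum
with Pending as (
    select RemoteId,
           row_number() over (order by Id) as rn
    from dbo.Fingerprint
    where RemoteId = -1
)
update Pending
set RemoteId = @MaxId + rn;

-- In practice both steps would run inside one transaction (e.g. with an UPDLOCK/HOLDLOCK hint
-- on the MAX query) so that two concurrent batches cannot grab the same maximum.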