November 5, 2009 at 5:30 pm
I need to create a table where the number of records is restricted to only one record. Is it possible to do this with a check constraint or something similar?
Thanks.
November 5, 2009 at 6:24 pm
There are several ways that you can do this.
You can create an IDENTITY column and use a check constraint against it. This will work unless the record is removed using DELETE, because DELETE doesn't reset the IDENTITY value, so the check will reject every subsequent insert.
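A minimal sketch of that approach, using made-up table and column names:

```sql
-- Sketch of the IDENTITY + check constraint approach (illustrative names).
CREATE TABLE dbo.OneRowTable
(
    row_id INTEGER IDENTITY(1,1) NOT NULL CHECK (row_id = 1),
    data   VARCHAR(255) NULL
);

INSERT dbo.OneRowTable (data) VALUES ('first');  -- Succeeds: row_id = 1
INSERT dbo.OneRowTable (data) VALUES ('second'); -- Fails the check: row_id = 2

-- Caveat: DELETE does not reset the IDENTITY seed, so after a delete no
-- insert will pass the check until the table is reseeded, e.g.
-- DBCC CHECKIDENT ('dbo.OneRowTable', RESEED, 0);
```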
You can use an INSTEAD OF trigger to count the number of rows in the table and, if there is already one, prevent the insert.
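The trigger approach might be sketched like this (dbo.OneRowTable is a hypothetical table with a single data column):

```sql
-- Sketch of the INSTEAD OF INSERT trigger approach (illustrative names).
CREATE TRIGGER dbo.OneRowTable_IOI
ON dbo.OneRowTable
INSTEAD OF INSERT
AS
BEGIN
    IF EXISTS (SELECT 1 FROM dbo.OneRowTable)
        OR (SELECT COUNT(*) FROM inserted) > 1
        ROLLBACK;  -- Row already exists, or multi-row insert: reject
    ELSE
        INSERT dbo.OneRowTable (data)
        SELECT data FROM inserted;  -- Table empty: let this insert through
END;
```

Note that an INSTEAD OF trigger writing to its own table is not fired recursively, so the inner INSERT proceeds normally.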
Finally, you can use a stored procedure as an accessor to the table. That way you can use whatever logic you need to prevent/allow data from reaching the table.
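A sketch of the accessor pattern, again with made-up names; for it to be effective, direct INSERT/UPDATE/DELETE permissions on the table would also be revoked:

```sql
-- Sketch of the stored-procedure accessor approach (illustrative names).
-- Applications call this procedure rather than writing to the table directly.
CREATE PROCEDURE dbo.SetSingleRow
    @data VARCHAR(255)
AS
BEGIN
    SET NOCOUNT ON;

    IF EXISTS (SELECT 1 FROM dbo.OneRowTable)
        UPDATE dbo.OneRowTable SET data = @data;      -- Row exists: overwrite it
    ELSE
        INSERT dbo.OneRowTable (data) VALUES (@data); -- First call: create the row
END;
```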
James Leeper
Database Administrator
WDS Global - Americas Region
November 5, 2009 at 7:59 pm
Just keep in mind that no trick will keep the DBA from accidentally TRUNCATEing the table. 😉
--Jeff Moden
Change is inevitable... Change for the better is not.
November 6, 2009 at 12:11 am
Jeff Moden (11/5/2009)
Just keep in mind that no trick will keep the DBA from accidently TRUNCATEing the table. 😉
Aside from...
1. Referencing the table in a foreign key relationship
2. Creating an indexed view that references the table
3. Publishing the table for transactional or merge replication
Paul
Paul White
SQLPerformance.com
SQLkiwi blog
@SQL_Kiwi
November 6, 2009 at 12:26 am
SQL Guy-482125 (11/5/2009)
I need to create a table where number of records will be restricted only to one record.
Hi,
For what purpose do you need to keep only one row? Are you maintaining something like a master record, or keeping the latest data?
November 6, 2009 at 1:06 am
If you want to make sure no more than one record is created in the table, you could do this:
CREATE TABLE OneRecordOnly
(
    Id int,
    MyData varchar(255),
    MakeItUnique int NOT NULL CHECK (MakeItUnique = 1)
);
GO
CREATE UNIQUE NONCLUSTERED INDEX Unq ON OneRecordOnly (MakeItUnique);
GO
INSERT OneRecordOnly SELECT 1, 'Data', 1 -- Succeeds
INSERT OneRecordOnly SELECT 2, 'Data', 2 -- Fails the check constraint
INSERT OneRecordOnly SELECT 3, 'Data', 1 -- Fails the unique index
But to make sure that record isn't removed, you'll also have to implement one of Paul White's suggestions.
November 6, 2009 at 1:54 am
Ok, just for fun, here's a complete implementation (I think!)
Minimal comments: half the fun is figuring out how it works 🙂
This sample allows an empty table, until the first record is added, then it can't be removed, or further rows added. Both DELETE and TRUNCATE TABLE are disallowed.
USE tempdb;
GO
CREATE TABLE
dbo.EnforceSingleRow
(
row_id INTEGER PRIMARY KEY
);
GO
INSERT dbo.EnforceSingleRow (row_id) VALUES (1);
GO
CREATE TABLE
dbo.SingleRowOnly
(
row_id INTEGER IDENTITY PRIMARY KEY FOREIGN KEY REFERENCES dbo.EnforceSingleRow,
data SQL_VARIANT NULL
);
GO
CREATE TABLE
dbo.PreventTruncate
(
row_id INTEGER FOREIGN KEY REFERENCES dbo.SingleRowOnly
)
GO
CREATE TRIGGER SingleRowOnly_IOD ON dbo.SingleRowOnly INSTEAD OF DELETE AS BEGIN ROLLBACK END;
GO
INSERT dbo.SingleRowOnly (data) VALUES ('A'); -- Succeeds
INSERT dbo.SingleRowOnly (data) VALUES ('B'); -- Fails
TRUNCATE TABLE dbo.SingleRowOnly; -- Fails
DELETE TOP (1) dbo.SingleRowOnly; -- Fails
GO
-- Tidy up
DROP TABLE
dbo.PreventTruncate,
dbo.SingleRowOnly,
dbo.EnforceSingleRow;
GO
Paul White
SQLPerformance.com
SQLkiwi blog
@SQL_Kiwi
November 6, 2009 at 2:09 am
Not up to your usual bulletproof solutions, Paul ;-)
INSERT dbo.SingleRowOnly (data) VALUES ('A');    -- Succeeds
INSERT dbo.EnforceSingleRow (row_id) VALUES (2); -- Sneak in a second parent row
INSERT dbo.SingleRowOnly (data) VALUES ('B');    -- Now it works
November 6, 2009 at 2:12 am
Ian Scarlett (11/6/2009)
Not up to your usual bulletproof solutions Paul ;-)
INSERT dbo.SingleRowOnly (data) VALUES ('A'); -- Succeeds
insert enforcesinglerow (row_id) values(2)
INSERT dbo.SingleRowOnly (data) VALUES ('B'); -- Now it works
Yeah I know, but you just recursively apply the technique to the supporting tables! :laugh:
Congratulations on hitting 1K by the way.
Paul White
SQLPerformance.com
SQLkiwi blog
@SQL_Kiwi
November 6, 2009 at 2:16 am
Paul White (11/6/2009)
Congratulations on hitting 1K by the way.
Thanks Paul... I still have a long way to go to catch up with you... in both number of posts and sheer depth of SQL Server knowledge.
November 6, 2009 at 3:10 am
Ian Scarlett (11/6/2009)
I still have a long way to go to catch you up ...in both number of posts and sheer depth of SQL Server knowledge
Hmm I'm not sure you're right on either count, but thanks anyway!!! :w00t:
Paul White
SQLPerformance.com
SQLkiwi blog
@SQL_Kiwi
November 7, 2009 at 8:21 pm
Thanks all for your replies; I didn't expect so many.
For now, I am using James Leeper's third method, a stored procedure. But the disadvantage of this method is that it is not centralized: some other application or stored procedure may update the table and I would be unaware of it.
Answering Arun Sas's question: yes, it may be what you call a master record; I call it a "global variable". Because SQL Server does not have global variables, I need some persistent storage to keep the status of a process. This table should have one record with only two fields: the master state (A or B) and the datetime when it was last updated.
I anticipate the next question: why do I need it? I have a hybrid system; generally it is OLTP with a high volume of transactions and a large number of users. However, every night we have a large batch process that lasts 20-30 minutes and updates and inserts into a table which is frequently accessed by the applications' stored procedures, which in turn results in blocking and timeouts. The batch never fails. What does fail are the stored procedures that try to update records in the same table.
So the solution was to have two tables, A and B. While one of them is being populated by the batch process, the other is used by the application; each night they flip. For this reason I needed a table to serve as a sort of "global variable" holding the current master, A or B. If any of you think this is a poor solution, I am open to your suggestions. The size of this table is approximately 1.2 million records; it is very wide, so only about 1,000 records can fit into one page. Collisions between the application and batch processing do not happen every night, but we still can't tolerate them.
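The "global variable" table described above might look something like this (a sketch with assumed table and column names):

```sql
-- Sketch of the one-row "current master" status table (assumed names).
CREATE TABLE dbo.BatchMaster
(
    master_state CHAR(1)  NOT NULL CHECK (master_state IN ('A', 'B')),
    updated_at   DATETIME NOT NULL DEFAULT GETDATE()
);

INSERT dbo.BatchMaster (master_state) VALUES ('A');

-- The nightly batch flips the master after it finishes loading the idle table:
UPDATE dbo.BatchMaster
SET    master_state = CASE master_state WHEN 'A' THEN 'B' ELSE 'A' END,
       updated_at   = GETDATE();
```

Keeping the row to exactly one could then be enforced with any of the techniques from earlier in the thread.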
Thanks
November 8, 2009 at 8:35 am
if you're using SQL 2008 you can disable lock escalation on the problematic tables and enforce row locking on the table, that should solve the concurrency problem
if you're using 2005 you could try implementing some sort of loop to run the updates in smaller batches in an effort to avoid table locks
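Both suggestions can be sketched roughly like this (dbo.BigTable, some_column, and needs_update are placeholder names, not from the original system):

```sql
-- SQL 2008+: stop the engine escalating row locks to a table lock on this
-- table. Note this only prevents escalation; it does not force row locks.
ALTER TABLE dbo.BigTable SET (LOCK_ESCALATION = DISABLE);

-- SQL 2005: update in small batches to stay under the escalation threshold
-- (escalation is attempted at roughly 5,000 locks per statement).
WHILE 1 = 1
BEGIN
    UPDATE TOP (1000) dbo.BigTable
    SET    some_column  = 'new value',  -- placeholder update
           needs_update = 0             -- mark the row as done
    WHERE  needs_update = 1;

    IF @@ROWCOUNT = 0 BREAK;  -- No rows left to process
END;
```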
--
Thiago Dantas
@DantHimself
November 8, 2009 at 1:54 pm
dant12 (11/8/2009)
if you're using SQL 2008 you can disable lock escalation on the problematic tables and enforce rowlocking on the table, that should solve the concurrency problem
Not necessarily. Concurrency issues aren't magically fixed by only taking row locks - and such an approach can cause serious problems if a large number of locks are then required. Locks are expensive things, and consume considerable resources.
dant12 (11/8/2009)
if you're using 2005 you could try implementing some sort of loop to run the updates in smaller batches in an effort to avoid table locks
If the task is one that can be split in this way then that is a valid approach, yes.
Another option to consider would be to enable a row-versioning isolation level in the database (RCSI or SI) assuming that the problem is writers blocking readers. There are many caveats to that, so without knowing the full details, if the current solution works, it's probably best to leave it as it is. 🙂
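For reference, the row-versioning options are enabled like this (the database name is a placeholder):

```sql
-- RCSI: readers see the last committed version instead of blocking on writers.
ALTER DATABASE MyDatabase SET READ_COMMITTED_SNAPSHOT ON;

-- SI: full snapshot isolation, which sessions must opt into.
ALTER DATABASE MyDatabase SET ALLOW_SNAPSHOT_ISOLATION ON;
-- Then per session: SET TRANSACTION ISOLATION LEVEL SNAPSHOT;

-- Caveats: both add version-store load in tempdb and change read semantics,
-- hence the "many caveats" warning above.
```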
Paul White
SQLPerformance.com
SQLkiwi blog
@SQL_Kiwi
November 8, 2009 at 1:57 pm
Paul White (11/8/2009)
dant12 (11/8/2009)
if you're using SQL 2008 you can disable lock escalation on the problematic tables and enforce rowlocking on the table, that should solve the concurrency problem
Not necessarily. Concurrency issues aren't magically fixed by only taking row locks - and such an approach can cause serious problems if a large number of locks are then required. Locks are expensive things, and consume considerable resources.
dant12 (11/8/2009)
if you're using 2005 you could try implementing some sort of loop to run the updates in smaller batches in an effort to avoid table locks
If the task is one that can be split in this way then that is a valid approach, yes.
Another option to consider would be to enable a row-versioning isolation level in the database (RCSI or SI) assuming that the problem is writers blocking readers. There are many caveats to that, so without knowing the full details, if the current solution works, it's probably best to leave it as it is. 🙂
Just shooting out options here; the only one who knows his entire environment is the OP, so the decision is up to him. I just wanted to expose more options.
--
Thiago Dantas
@DantHimself