October 21, 2005 at 8:56 am
I run SQL Server 2000 and have a process that runs very early each morning. I basically have two databases: one that holds sales transactions from outside Symbol handhelds, and a main database. The process inserts records from the limited handheld database into the main database.
My issue is that if any other user is in the database at the time of the inserts, my system sequence numbers get out of whack. I can restrict access for WAN users through application shutdowns, but not for LAN users. Is there a way in SQL Server to allow only a certain login (such as an administrator login) and no one else during a defined time interval? I have worked with the timing of my process, and no matter how I change it there always seems to be a user who logs in at some point and messes things up!
Thanks in advance for any assistance you can offer.
October 21, 2005 at 1:39 pm
Hello,
I am going to rephrase your question. Please confirm that this is close to what you are asking, or correct me if I misunderstand the problem. Also, please provide DDL for the source and destination tables, and the insert/update you are trying to use. A small example of the expected results would be helpful as well.
Paraphrased: You have a batch insert that needs to run periodically. The batch insert moves data from one or more staging tables into one or more larger tables. There is a sequence number in the destination table that needs to remain sequential and contiguous. Do the sequence numbers in the staging table and the destination table need to be the same? Are you running a series of inserts inside a loop or cursor, or a single "insert into xyz (col1, col2, col3) select col1, col2, col3 from abc"?
If you are trying to force a particular ID column to have identical values, you might want to look into SET IDENTITY_INSERT. You may also want to take a look at the locking hints in SQL Server.
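For instance, a rough sketch of IDENTITY_INSERT; the table and column names here are invented, and this assumes the destination ID column is an IDENTITY column:
SET IDENTITY_INSERT dbo.MainTable ON
-- the column list must be explicit while IDENTITY_INSERT is ON
INSERT INTO dbo.MainTable (RecordID, Col1, Col2)
SELECT RecordID, Col1, Col2
FROM StagingDB.dbo.HandheldSales
SET IDENTITY_INSERT dbo.MainTable OFF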
Hope this helps. A little more info will help us provide a better answer.
Wayne
October 24, 2005 at 7:03 am
Wayne,
Your paraphrase is correct. The number sequence that gets off is one I manage in a loop. When I insert records into the larger tables, I grab this number from a table that holds the "next values" for the record IDs. After each row is inserted I increment it by one for the next insert, and at the end of the script the incremented value is written back to the main database's next-values table. That way the next record inserted gets the next sequential number, and the next-values table my application uses is also updated and will allow sequential entries.
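Roughly, the pattern looks like this (the table and column names below are only placeholders, not my real schema):
DECLARE @NextID int
-- grab the current "next value"
SELECT @NextID = NextRecordID FROM dbo.NextValues
-- for each staging row (the real script loops over the handheld table)
INSERT INTO dbo.MainTable (RecordID, TransactionDate, Amount) VALUES (@NextID, GETDATE(), 0.00)
SET @NextID = @NextID + 1
-- at the end of the script, write the counter back
UPDATE dbo.NextValues SET NextRecordID = @NextID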
What happens is that if someone attempts to make another entry while I am inserting records, the next values do not update in the main database. My inserted rows are fine, but the record IDs get stuck at values that have already been used, which causes an error. If I could keep people from going in and attempting other entries during the batched insert, it would solve the problem. Everyone who uses these tables is aware of this, but for some reason at least once a week someone attempts it anyway.
Hopefully this clarifies things. If there is a way to lock down the database to one user during that insert batch, that would be great!
October 24, 2005 at 7:11 am
See DENY in Books Online to remove insert rights from a specific user.
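For example (the user and table names here are only placeholders):
DENY INSERT ON dbo.MainTable TO SomeUser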
October 24, 2005 at 10:16 am
Using DENY to lock out anywhere from 20 to 2,500 users is a bit on the ludicrous side, I think. He wants to grant insert rights to a specific user and lock out all the others, rather like shutting down Unix to single-user mode. I'm not familiar enough with SQL Server to say for certain how to do this, but you might consider building a temp table and then doing a bulk insert.
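That said, it might be worth a look in Books Online at ALTER DATABASE with the RESTRICTED_USER option, which sounds close to what he's after; the database name below is a placeholder and the syntax should be double-checked:
-- before the batch: only members of db_owner, dbcreator, or sysadmin can connect
ALTER DATABASE MainDB SET RESTRICTED_USER WITH ROLLBACK IMMEDIATE
-- ... run the batch insert ...
-- afterwards, let everyone back in
ALTER DATABASE MainDB SET MULTI_USER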
Scott
October 24, 2005 at 12:11 pm
Susan,
Thank you for the additional info. There are a few questions that come to mind about the basic design of the system. I'll ignore those for now because it sounds like you have an immediate problem.
How many procs do inserts into the destination tables? Assuming there are only two (the batch and one other), there may be a simple but effective solution. I'm not sure I can articulate it without DDL to work from, but let's try.
Call the two procs "BatchProc" and "SingleProc". BatchProc needs exclusive write access to the destination table, and SingleProc cannot be permitted to run while BatchProc is in progress. There is a separate table that holds an ID; let's call it "MaxID". BatchProc and SingleProc both update MaxID upon completion.
In BatchProc, consider SET TRANSACTION ISOLATION LEVEL SERIALIZABLE, or a HOLDLOCK hint, when you read the current MaxID. Two ways to do this can be found in Books Online:
-- Books Online example (pubs database): HOLDLOCK keeps the shared lock until the transaction ends
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
BEGIN TRANSACTION
SELECT au_lname FROM authors WITH (HOLDLOCK)
-- ... do the real work here ...
COMMIT TRANSACTION
Because BatchProc has MaxID locked, SingleProc is not permitted to modify it. The next step would be to put an error trap in SingleProc to handle the case where an update of MaxID fails because of an existing lock. This all assumes that you are using some sort of looping construct, perhaps a cursor, to do the inserts into the destination table. There are better designs that avoid this sort of problem completely.
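To make that concrete, here is a minimal sketch of how BatchProc might reserve its IDs; the column names (CurrentMax and so on) are only placeholders since I don't have your DDL:
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
BEGIN TRANSACTION
DECLARE @CurrentMax int, @RowsInserted int
-- read the current maximum ID and hold the lock so SingleProc cannot change it
SELECT @CurrentMax = CurrentMax FROM MaxID WITH (UPDLOCK, HOLDLOCK)
-- ... do the looped inserts here, assigning @CurrentMax + 1, @CurrentMax + 2, ...
SET @RowsInserted = 0  -- replace with the real count of rows inserted
-- write the new maximum back before the locks are released
UPDATE MaxID SET CurrentMax = @CurrentMax + @RowsInserted
COMMIT TRANSACTION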
If you would like a more complete solution, please provide DDL for the source and destination table, as well as the batch proc you are currently using.
Hope this helps
Wayne