November 18, 2009 at 8:01 am
OPEN cursor
FETCH NEXT FROM cursor INTO ....
WHILE @@FETCH_STATUS = 0
BEGIN
    -- Process a chunk of data from a single board computer while there are still records to move
    BEGIN TRAN
    INSERT INTO destination_table ...
    DELETE FROM source_table
    IF @@ERROR <> 0
    BEGIN
        ROLLBACK TRAN
        RETURN -1
    END
    IF @@TRANCOUNT > 0
    BEGIN
        COMMIT TRAN
    END
    -- End process chunk of data from a single board computer
    FETCH NEXT FROM TC65LIMITS INTO @REC_INDEX, @KEY_DEVICE
END
November 18, 2009 at 8:04 am
Is there a question here?
Can you give the cursor definition, the full FETCH INTO and the full insert and delete?
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
November 18, 2009 at 8:22 am
I was hoping to limit the growth of tempdb by moving the records in portions of 10,000 and committing them in batches, but tempdb still grows to 6 GB.
What am I doing wrong?
OPEN cursor
FETCH NEXT FROM cursor INTO ....
WHILE @@FETCH_STATUS = 0
BEGIN
    WHILE (not done)  -- process this device's records in batches
    BEGIN
        -- Move one batch of records
        BEGIN TRAN
        INSERT INTO destination_table SELECT TOP 10000 ...
        DELETE FROM source_table
        IF @@ERROR <> 0
        BEGIN
            ROLLBACK TRAN
            RETURN -1
        END
        IF @@TRANCOUNT > 0
        BEGIN
            COMMIT TRAN
        END
        -- end of chunk
    END
    FETCH NEXT FROM ...
END
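By the way, I realise the DELETE as posted is not limited to the same 10,000 rows the INSERT just copied. A minimal sketch of what I am aiming for, using DELETE ... OUTPUT INTO to tie the two together (table and column names here are made up, and OUTPUT ... INTO requires a target table with no triggers or foreign keys):
DECLARE @rows INT
SET @rows = 1
WHILE @rows > 0
BEGIN
    BEGIN TRAN
    -- Move one batch: delete up to 10,000 rows and archive exactly those rows
    DELETE TOP (10000) FROM source_table
    OUTPUT DELETED.rec_index, DELETED.key_device, DELETED.payload
        INTO destination_table (rec_index, key_device, payload)
    WHERE key_device = @KEY_DEVICE
    SET @rows = @@ROWCOUNT
    COMMIT TRAN
END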
November 18, 2009 at 8:44 am
That's only half of the answer. What about the other part?
GilaMonster (11/18/2009)
Can you give the cursor definition, the full FETCH INTO and the full insert and delete?
And how many records are you moving in total?
Blog -- LearnSQLWithBru
Join on Facebook Page Facebook.com/LearnSQLWithBru
Twitter -- BruMedishetty
November 18, 2009 at 9:08 am
I apologize for the first entry; I mistakenly hit the "post topic" button. The point is that the full code is fairly big and more complicated, and posting it all would only make this less readable.
I was hoping that someone with a lot of knowledge of transaction handling would have a simple explanation ready in the blink of an eye.
It is a plain insert/select and a normal delete. It does what it must do, which is transferring table data from one database to an archive table in another database, where you don't want to lose data. So the issue is that the committed transactions are not released from tempdb and keep building up. Maybe when the connection finishes normally it will free the space eventually...
It is transferring some 30 GB of data though.
It seems I could leave out the "TOP 10000" loop, because it does not do what I hoped it would do, i.e. limit the transaction log space needed to complete the batch.
The recovery model of all the databases involved is "simple"...
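To double-check where the space is actually going (tempdb versus the log), I could run something along these lines; both are standard SQL Server 2005+ commands:
-- Log file size and percentage used for every database
DBCC SQLPERF(LOGSPACE)
-- tempdb pages allocated per session
SELECT session_id,
       user_objects_alloc_page_count,
       internal_objects_alloc_page_count
FROM sys.dm_db_session_space_usage
ORDER BY internal_objects_alloc_page_count DESC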
November 18, 2009 at 9:28 am
The problem is, without seeing the cursor definition and how the inserts and deletes relate to the cursor, it's hard to say anything useful. A simple insert and delete in a loop won't blow tempDB up. They may blow the transaction log up, but tempDB's not used for simple inserts and deletes. Maybe it's the cursor, maybe it's something that you're doing in the insert or delete. Can't tell without seeing them.
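One thing worth checking: STATIC and KEYSET cursors materialise their result set (or the set of keys) in a worktable in tempdb, and that worktable is as big as the cursor's result. If the cursor is declared that way over a large set, a declaration along these lines (the cursor name and select list are placeholders for whatever the real cursor selects) is much lighter on tempdb:
DECLARE device_cur CURSOR LOCAL FAST_FORWARD FOR
    SELECT rec_index, key_device
    FROM source_table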
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability