April 14, 2015 at 10:33 pm
My update query updates approximately 50 million records, which causes the transaction log to grow and fill the disk space, and finally the transaction fails. Please suggest any workaround.
April 14, 2015 at 10:51 pm
You could try updating in smaller batches and performing a log backup between batches; there's a rough sketch below.
Or you could get more disk space.
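A minimal sketch of that batching approach, assuming the FULL recovery model; the database name, table, column, value, and backup path are all hypothetical placeholders to adjust:
DECLARE @Rows INT = 1;

WHILE @Rows > 0
BEGIN
    -- Update one slice at a time; the predicate must exclude rows that are
    -- already updated, otherwise the loop never terminates.
    UPDATE TOP (50000) dbo.SomeTable
    SET    SomeColumn = 'NewValue'
    WHERE  SomeColumn <> 'NewValue';

    SET @Rows = @@ROWCOUNT;

    -- Back up the log between batches so the space becomes reusable and the
    -- file stops growing (FULL recovery model assumed).
    BACKUP LOG MyDatabase TO DISK = N'D:\Backups\MyDatabase_log.trn';
END;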
Jason...AKA CirqueDeSQLeil
_______________________________________________
I have given a name to my pain...MCM SQL Server, MVP
SQL RNNR
Posting Performance Based Questions - Gail Shaw
Learn Extended Events
April 14, 2015 at 10:51 pm
This looks like a pretty good answer:
https://answers.laserfiche.com/questions/48435/SQL-Transaction-Log-is-taking-up-all-drive-space
Then read Gail's articles:
http://www.sqlservercentral.com/articles/64582/
http://www.sqlservercentral.com/articles/Transaction+Log/72488/
April 14, 2015 at 10:58 pm
Zeal-DBA (4/14/2015)
My update query updates approximately 50 million records, which causes the transaction log to grow and fill the disk space, and finally the transaction fails. Please suggest any workaround.
Further to Jason's suggestion, you could add another log file on another drive whilst doing the update, or use a minimally logged SELECT INTO query that applies the update while creating a new table, then swap it in for the old one. Both options are sketched below.
😎
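Rough sketches of both options; every name and path below is a hypothetical placeholder, and the SELECT INTO route is only minimally logged under the SIMPLE or BULK_LOGGED recovery model:
-- Option 1: add a second log file on a drive with free space (remove it when done).
ALTER DATABASE MyDatabase
ADD LOG FILE (NAME = MyDatabase_log2, FILENAME = N'E:\Logs\MyDatabase_log2.ldf', SIZE = 10GB);

-- Option 2: build the updated data into a new table with a minimally logged
-- SELECT INTO, applying the "update" as an expression, then swap the tables.
SELECT  KeyCol,
        CASE WHEN SomeColumn = 'OldValue' THEN 'NewValue' ELSE SomeColumn END AS SomeColumn
INTO    dbo.SomeTable_New
FROM    dbo.SomeTable;

-- Recreate indexes and constraints on the new table, then rename.
EXEC sp_rename 'dbo.SomeTable', 'SomeTable_Old';
EXEC sp_rename 'dbo.SomeTable_New', 'SomeTable';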
April 15, 2015 at 4:31 pm
Use batches as noted by Jason. Not only will it prevent this problem (as long as your t-log backups are set up properly), but it should be faster too.
April 16, 2015 at 2:49 pm
Zeal-DBA (4/14/2015)
My update query updates approximately 50 million records, which causes the transaction log to grow and fill the disk space, and finally the transaction fails. Please suggest any workaround.
My standard process for this at work (and my usual advice on SQL Server Central) is something like this:
--DELETING LARGE AMOUNTS OF DATA
DECLARE @Done BIT;
SET @Done = 0;

WHILE @Done = 0
BEGIN
    DELETE TOP (20000)              -- reduce the batch size if the log is still growing
    FROM   SomeTable
    WHERE  SomeColumn = SomeValue;  -- placeholder predicate: replace with your own

    IF @@ROWCOUNT = 0
        SET @Done = 1;

    CHECKPOINT;                     -- marks log space reusable in SIMPLE recovery
END;
Of course, adjust it to your schema and change the DELETE statement as needed.
April 16, 2015 at 5:55 pm
sql-lover (4/16/2015)
Zeal-DBA (4/14/2015)
My update query updates approximately 50 million records, which causes the transaction log to grow and fill the disk space, and finally the transaction fails. Please suggest any workaround.
My standard process for this at work (and my usual advice on SQL Server Central) is something like this:
--DELETING LARGE AMOUNTS OF DATA
DECLARE @Done BIT;
SET @Done = 0;

WHILE @Done = 0
BEGIN
    DELETE TOP (20000)              -- reduce the batch size if the log is still growing
    FROM   SomeTable
    WHERE  SomeColumn = SomeValue;  -- placeholder predicate: replace with your own

    IF @@ROWCOUNT = 0
        SET @Done = 1;

    CHECKPOINT;                     -- marks log space reusable in SIMPLE recovery
END;
Of course, adjust it to your schema and change the DELETE statement as needed.
With an update, the last scan (the one that finds zero rows) will take quite some time if you check the rowcount for 0. If you're doing batched updates using the technique above (or deletes like the above), it's better to use "IF @@ROWCOUNT < 20000" to prevent that last long empty scan.
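Applied to the loop above, the change would look something like this (same placeholder names as before):
DECLARE @Done BIT;
SET @Done = 0;

WHILE @Done = 0
BEGIN
    DELETE TOP (20000)
    FROM   SomeTable
    WHERE  SomeColumn = SomeValue;  -- placeholder predicate: replace with your own

    IF @@ROWCOUNT < 20000   -- a short batch means nothing is left to delete
        SET @Done = 1;

    CHECKPOINT;             -- marks log space reusable in SIMPLE recovery
END;
This way the loop ends on the first short batch instead of issuing one more DELETE that scans the whole table and finds nothing.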
--Jeff Moden
Change is inevitable... Change for the better is not.
April 16, 2015 at 8:49 pm
Jeff Moden (4/16/2015)
sql-lover (4/16/2015)
Zeal-DBA (4/14/2015)
My update query updates approximately 50 million records, which causes the transaction log to grow and fill the disk space, and finally the transaction fails. Please suggest any workaround.
My standard process for this at work (and my usual advice on SQL Server Central) is something like this:
--DELETING LARGE AMOUNTS OF DATA
DECLARE @Done BIT;
SET @Done = 0;

WHILE @Done = 0
BEGIN
    DELETE TOP (20000)              -- reduce the batch size if the log is still growing
    FROM   SomeTable
    WHERE  SomeColumn = SomeValue;  -- placeholder predicate: replace with your own

    IF @@ROWCOUNT = 0
        SET @Done = 1;

    CHECKPOINT;                     -- marks log space reusable in SIMPLE recovery
END;
Of course, adjust it to your schema and change the DELETE statement as needed.
The last scan will take quite some time on an update if you check the rowcount for 0. If you're doing batched updates using the technique above (or deletes like the above), it's better to use "IF @@ROWCOUNT < 20000" to prevent that last long scan on the updates.
Sorry, I'm not following you. I've used this on tables with millions of records and never had any issues. It performs OK.
Could you rewrite the code, or elaborate a bit more?