June 21, 2017 at 7:54 am
I have a large table that I need to copy. (INSERT INTO [destination] SELECT ... FROM [source])
It breaks with the following error:
Msg 9002, Level 17
The transaction log for database is full due to 'ACTIVE_TRANSACTION'
The only solution that I found so far is DBCC SHRINKFILE. Any other suggestions?
Thanks!
June 21, 2017 at 8:26 am
clicky - Wednesday, June 21, 2017 7:54 AM wrote:
I have a large table that I need to copy ... The transaction log for database is full due to 'ACTIVE_TRANSACTION' ... Any other suggestions?
What is the recovery model for the database?
😎
June 21, 2017 at 8:44 am
Eirikur Eiriksson - Wednesday, June 21, 2017 8:26 AM wrote:
What is the recovery model for the database?
😎
SIMPLE
June 21, 2017 at 2:57 pm
Hi,
Have you tried DBCC OPENTRAN to identify which transaction is active?
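For example (the database name is a placeholder):

DBCC OPENTRAN ('MyDatabase') WITH TABLERESULTS, NO_INFOMSGS;
-- Reports the oldest active transaction in the database,
-- including its session ID and start time.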
Best Regards
June 21, 2017 at 3:02 pm
My rule of thumb is that for any table over, say, 100K rows or so, data transfers like this should be done using the Import/Export Wizard or via SSIS, so that the load is chunked out in batches and you avoid massive log growth like that.
A single INSERT INTO ... SELECT * FROM MillionBillionRowTable, all in one transaction, will kill you in these situations.
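A minimal T-SQL batching sketch, assuming hypothetical tables dbo.SourceTable and dbo.DestinationTable with a unique key Id:

-- In SIMPLE recovery, log space can be reused between batches,
-- so no single transaction has to carry the whole table.
DECLARE @BatchSize int = 100000;
DECLARE @Rows int = 1;

WHILE @Rows > 0
BEGIN
    INSERT INTO dbo.DestinationTable (Id, Col1, Col2)
    SELECT TOP (@BatchSize)
           s.Id, s.Col1, s.Col2
    FROM   dbo.SourceTable AS s
    WHERE  NOT EXISTS (SELECT 1 FROM dbo.DestinationTable AS d
                       WHERE  d.Id = s.Id);

    SET @Rows = @@ROWCOUNT;

    CHECKPOINT; -- lets the log truncate between batches in SIMPLE recovery
END;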
Lowell
June 21, 2017 at 5:46 pm
If the database is already in the simple recovery model, you aren't worried about point-in-time restores, so that's good.
Will you post the DDL for the source table? Please be sure to include the clustered index definition. In this case, I don't need any data.
Granted, you're still going to have to address John's point about managing your transaction log, but I may be able to help you with the table. I've done this type of thing before and found the results pretty impressive.
June 22, 2017 at 1:11 am
clicky - Wednesday, June 21, 2017 8:44 AM wrote:
SIMPLE
And the file growth settings for the log file?
😎
June 22, 2017 at 3:15 pm
Eirikur Eiriksson - Thursday, June 22, 2017 1:11 AM wrote:
And the file growth settings for the log file?
😎
10%
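(For anyone reading along: 10% growth compounds as the log grows, and each growth event can stall the load. The current settings can be inspected and changed like this; the database and file names are hypothetical.)

SELECT name, size * 8 / 1024 AS SizeMB, growth, is_percent_growth
FROM   sys.database_files
WHERE  type_desc = 'LOG';

ALTER DATABASE MyDatabase
MODIFY FILE (NAME = MyDatabase_log, FILEGROWTH = 512MB);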
June 22, 2017 at 3:29 pm
Will you post the DDL for the source table and clustered index definition?
June 28, 2017 at 6:06 pm
No pun intended, but this is SIMPLE. Create the new target table with just the clustered index. The clustered index should be identical on both tables for the best performance without having to do any tricks. Then use a minimally logged INSERT/SELECT and your log file will be safe, because it will only log page allocations on the new table.

INSERT INTO dbo.TargetTable WITH (TABLOCK) --This is necessary for minimal logging
        (put your column list here)
SELECT  put the same column list here
FROM    dbo.SourceTable
OPTION  (RECOMPILE) --Not always necessary, but a good habit to get into for this type of thing.
;
If you have any non-clustered indexes, build them after the target table is loaded.
This method also keeps you from blowing out the MDF file, which is what happens if you load a heap and then build the clustered index separately, and it takes a whole lot less time.
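For illustration, a worked sketch of the pattern with hypothetical table and column names (the target must be empty, and on versions before SQL Server 2016 minimal logging into a clustered index may also need trace flag 610):

CREATE TABLE dbo.TargetTable
(
    OrderId    int      NOT NULL,
    CustomerId int      NOT NULL,
    OrderDate  datetime NOT NULL,
    CONSTRAINT PK_TargetTable PRIMARY KEY CLUSTERED (OrderId) -- identical CI to the source
);

INSERT INTO dbo.TargetTable WITH (TABLOCK) -- necessary for minimal logging
        (OrderId, CustomerId, OrderDate)
SELECT  OrderId, CustomerId, OrderDate
FROM    dbo.SourceTable
ORDER BY OrderId -- optional but cheap insurance: matches the clustered key
OPTION  (RECOMPILE);

-- Non-clustered indexes are built only after the load:
CREATE NONCLUSTERED INDEX IX_TargetTable_CustomerId
    ON dbo.TargetTable (CustomerId);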
--Jeff Moden
Change is inevitable... Change for the better is not.
June 29, 2017 at 7:26 am
Jeff Moden - Wednesday, June 28, 2017 6:06 PM wrote:
Create the new target table with just the clustered index ... then use a minimally logged INSERT/SELECT with the TABLOCK hint ... If you have any non-clustered indexes, build them after the target table is loaded.
That's why I was asking for the DDL and CI for the table, but your post works. I always like to put the ORDER BY in there to ensure the insert is sorted by the CI.
Clicky, this works very well. I once took a data load that started out at 7 hours in the wrong tool; someone rewrote it in SQL and got it down to 30 minutes, and I then used this technique and got it down to 55 seconds with no log growth. The creation of the NCIs will be logged, but not the load itself.
June 29, 2017 at 7:51 am
Ed Wagner - Thursday, June 29, 2017 7:26 AM wrote:
I always like to put the ORDER BY in there to ensure it's sorted by the CI ... The creation of the NCIs will be logged, but not the load itself.
The creation of the NCIs will be minimally logged because the OP's database is in the SIMPLE recovery model. Same for the CI and the initial data copy. With the TABLOCK hint and identical CIs (although the sort you mention is almost free insurance), it'll all be minimally logged and nasty fast because it won't have to write the data twice.
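If anyone wants to confirm the effect on the log, usage before and after the load can be checked with the standard commands:

DBCC SQLPERF (LOGSPACE); -- log size and percent used for every database
-- or, for the current database (SQL Server 2012 and later):
SELECT total_log_size_in_bytes / 1048576 AS LogSizeMB,
       used_log_space_in_bytes  / 1048576 AS UsedMB
FROM   sys.dm_db_log_space_usage;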
Also, since the OP posted this 7 days ago, I'm thinking that he's no longer listening on this problem because, by hook or crook, they likely did something to accomplish the copy a couple of days ago. It might help someone else down the road, though.
--Jeff Moden
Change is inevitable... Change for the better is not.