December 8, 2006 at 4:03 am
Hi!
On a new (mirrored) server we ran some tests last night, and overnight we left a process running to insert 100 million records into the database (committing every 100,000). The database failed with "insufficient disk space"; the transaction log had grown to about 200 GB!
The test procedure does some local calculation and inserts the data into ONE table. I made sure to commit and open a new transaction for every 100,000 records!
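For context, the batching pattern is roughly the following (a minimal sketch; TestData, its columns, and the dbo.Numbers helper table are placeholder names, not our actual schema):

DECLARE @i INT, @batch INT;
SET @i = 0;
SET @batch = 100000;

WHILE @i < 100000000
BEGIN
    BEGIN TRANSACTION;

    -- Placeholder for the real work: compute values locally and
    -- insert one batch of rows into the single target table.
    INSERT INTO TestData (Id, Payload)
    SELECT @i + n.Number, 'calculated value'
    FROM dbo.Numbers AS n              -- hypothetical table of 1..100000
    WHERE n.Number BETWEEN 1 AND @batch;

    COMMIT TRANSACTION;
    SET @i = @i + @batch;
END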
Why is the LOG so big??
P.S. (1) We are using SQL Server 2005.
P.S. (2) We ran lots of other (but much smaller) tests before that one.
December 11, 2006 at 10:36 am
You can try to set the recovery model to SIMPLE, run your job, and then set it back to FULL.
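A minimal sketch of that switch (MyBigDB and the backup path are placeholders; after switching back to FULL, take a full or differential backup to restart the log backup chain):

-- Switch to SIMPLE so the log is truncated at each checkpoint during the load.
ALTER DATABASE MyBigDB SET RECOVERY SIMPLE;

-- ... run the bulk-insert job here ...

-- Switch back to FULL once the job is done.
ALTER DATABASE MyBigDB SET RECOVERY FULL;

-- Restart the log backup chain.
BACKUP DATABASE MyBigDB TO DISK = 'D:\Backups\MyBigDB.bak';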
December 11, 2006 at 11:25 am
Check your recovery model first...
If it is FULL or BULK_LOGGED, then make sure you are running transaction log backups every 30/60 minutes.
If it is SIMPLE, then your transaction might still have caused this, because under the SIMPLE recovery model the log is only truncated once the transaction has committed or rolled back.
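To check which recovery model a database is using (MyBigDB is a placeholder name):

SELECT name, recovery_model_desc
FROM sys.databases
WHERE name = 'MyBigDB';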
To shrink the log (see the sketch below)...
If the recovery model is FULL:
1. Run a log backup; after it completes,
2. DBCC SHRINKFILE (2, size)
If the recovery model is SIMPLE:
1. Run CHECKPOINT;
2. DBCC SHRINKFILE (2, size)
(Here 2 is the log file's file id and size is the target size in MB.)
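A minimal sketch of both paths, assuming the database is named MyBigDB and its log file has file id 2 (check sys.database_files to confirm; the backup path and the 1024 MB target size are placeholders):

USE MyBigDB;

-- FULL recovery model: back up the log so the inactive portion can be freed,
-- then shrink the log file.
BACKUP LOG MyBigDB TO DISK = 'D:\Backups\MyBigDB_log.trn';
DBCC SHRINKFILE (2, 1024);    -- file id 2, target size in MB

-- SIMPLE recovery model: a checkpoint truncates the inactive log, then shrink.
CHECKPOINT;
DBCC SHRINKFILE (2, 1024);

Run only the branch that matches your recovery model.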
MohammedU
Microsoft SQL Server MVP