January 6, 2004 at 9:12 am
Hi All,
I just upgraded to a new DB server and also upgraded from SQL 7 to SQL 2000. On the old SQL 7 server, I used to reindex tables every Sat night - I had a script that would rebuild all indexes and stats for tables with names starting with a-g on the first Sat of the month, h-m on the second Sat of the month, and so on. So basically each table was reindexed once a month.
Now I am not able to do that. If I run the reindexing on one of our larger tables (over 10 million records), which has a clustered index, the trans log grows very rapidly - to 2GB in just a few minutes. I have to kill it, back up the trans log, and try again, and I am still not able to complete the reindexing. I am aware that reindexing tables with clustered indexes can cause the trans log, and possibly even the MDF, to grow. I think the problem here is that the new DB server has a much faster processor, so the trans log fills up very quickly. For the 60% of the tables that have nowhere near as many records as the larger tables, the reindexing goes pretty quickly - though the trans log still gets pretty big.
Is there a way around this? I cannot stop logging, as I have a standby DB that has transaction logs shipped to it and applied every hour. Any help or comments are appreciated.
January 7, 2004 at 10:17 am
Hi Vikramnat,
Consider switching the database recovery model to 'Bulk_Logged' before you start rebuilding the clustered index; that might reduce the size of the transaction log. You can switch the recovery model back to 'Full' when the index rebuild is complete. For details, see BOL and search for "Bulk_Logged Recovery".
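For example (a minimal sketch - 'MyDB' is a placeholder for your database name):

-- Switch to Bulk_Logged before the rebuild (SQL 2000 syntax)
ALTER DATABASE MyDB SET RECOVERY BULK_LOGGED
-- ... rebuild the clustered index here, e.g. with DBCC DBREINDEX ...
-- Switch back to Full when the rebuild is complete
ALTER DATABASE MyDB SET RECOVERY FULL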
Thanks,
January 7, 2004 at 11:55 am
Yes, agreed - I could change the recovery model from 'Full' (which is what I have it set to now) to 'Bulk_Logged', but wouldn't that cause my log shipping to break?
January 7, 2004 at 12:33 pm
Hi vikramnat,
As far as I know, switching the recovery model between 'Full' and 'Bulk_Logged' will not break the log shipping jobs that load the transaction log files into the standby database. However, while a database is set to the 'Bulk_Logged' recovery model, point-in-time recovery is not supported.
Thanks,
January 7, 2004 at 3:15 pm
Thanks, Will try that.
January 8, 2004 at 2:17 pm
I've had the log grow to 1.5 times the size of the data filegroup during a reindex. So I make sure I have enough free disk and do a DBCC SHRINKDATABASE afterwards.
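Something like this (a sketch - 'MyDW' and the 10 percent free-space target are just placeholders):

-- Shrink the database after the reindex, leaving 10% free space in the files
DBCC SHRINKDATABASE (MyDW, 10)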
January 8, 2004 at 2:25 pm
How do you rebuild the indexes - with a CREATE INDEX statement or with DBCC DBREINDEX? It is true that log space usage will be minimized by setting the recovery model to 'Bulk_Logged' if you rebuild indexes with a CREATE INDEX statement, but I am not sure that is the case if you run DBCC DBREINDEX.
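For reference, the two approaches look something like this (a sketch - 'dbo.Orders' and the index name are made-up examples):

-- Rebuild via CREATE INDEX, replacing the existing index in place
CREATE UNIQUE CLUSTERED INDEX IX_Orders_OrderID
ON dbo.Orders (OrderID)
WITH DROP_EXISTING

-- Rebuild all indexes on the table in one statement
DBCC DBREINDEX ('dbo.Orders')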
January 8, 2004 at 2:35 pm
I use: EXEC sp_MSforeachtable @command1 = "DBCC DBREINDEX ('?')"
I use the simple recovery model (it's a data warehouse), but I still find the DBCC SHRINKDATABASE important. I always thought the log got so big because I was reindexing every table in one pass. But since I am able to keep enough disk free, I have avoided reworking this job to do just a few tables at a time.
January 10, 2004 at 12:51 am
vikramnat: you are lucky.. 🙂
The log of one of my databases has grown to 20-24GB several times over the last few weeks. Initially I thought some process must have caused this, but there is nothing running other than the backup and optimization jobs. I have to back up the log, truncate it, and shrink it each time I run into this scenario.
I am still looking into this issue to get it resolved permanently.
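The cleanup sequence I run each time looks roughly like this ('MyDB', the backup path, and the logical log file name 'MyDB_Log' are placeholders):

-- 1. Back up the log
BACKUP LOG MyDB TO DISK = 'E:\Backups\MyDB_Log.bak'
-- 2. Truncate it (this breaks the log backup chain, so take a full
--    backup afterwards)
BACKUP LOG MyDB WITH TRUNCATE_ONLY
-- 3. Shrink the log file back down (target size in MB)
DBCC SHRINKFILE (MyDB_Log, 500)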
January 11, 2004 at 4:05 pm
This is a script snagged from SQL Server MVP Andrew Kelly; it keeps the log from filling up while reindexing all your tables:
SET NOCOUNT ON
DECLARE @TableName VARCHAR(100), @Counter INT
SET @Counter = 1
DECLARE curTables CURSOR STATIC LOCAL
FOR
SELECT Table_Name
FROM Information_Schema.Tables
WHERE Table_Type = 'BASE TABLE'
OPEN curTables
FETCH NEXT FROM curTables INTO @TableName
WHILE @@FETCH_STATUS = 0
BEGIN
-- Trim inside the loop so every fetched name is cleaned, not just the first
SET @TableName = RTRIM(@TableName)
SELECT 'Reindexing ' + @TableName
DBCC DBREINDEX (@TableName)
SET @Counter = @Counter + 1
-- Back up the log every 10 tables so it does not fill up
-- (the [Presents] database and [DD_Presents_Log] backup device are
-- specific to the original script; substitute your own names)
IF @Counter % 10 = 0
BEGIN
BACKUP LOG [Presents] TO [DD_Presents_Log] WITH NOINIT, NOUNLOAD,
NAME = N'Presents Log Backup', NOSKIP, STATS = 10, NOFORMAT
END
FETCH NEXT FROM curTables INTO @TableName
END
CLOSE curTables
DEALLOCATE curTables
HTH
------------
Ray Higdon MCSE, MCDBA, CCNA
January 14, 2004 at 1:14 pm
I changed the recovery model to Bulk_Logged and ran DBCC DBREINDEX in batches late last night, first excluding all the large tables that have clustered indexes. The trans log remained in check and I completed the reindexing for all those tables. I was, however, not able to complete the reindexing of the clustered indexes on two of our larger tables, as it took very long and I killed it. I think it might be best to rebuild those indexes by dropping and recreating them.
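As an alternative to a literal drop-and-recreate, something like CREATE INDEX ... WITH DROP_EXISTING might work here - it rebuilds the clustered index in place and avoids rebuilding the nonclustered indexes twice (once when the clustered index is dropped and again when it is recreated). A sketch with made-up names ('BigTable', 'CIX_BigTable', 'PK_BigTable'):

-- Rebuild an existing clustered index in place
CREATE CLUSTERED INDEX CIX_BigTable
ON dbo.BigTable (BigTableID)
WITH DROP_EXISTING

-- Or, if the clustered index enforces a primary key, rebuild just that
-- index with DBCC without dropping the constraint
DBCC DBREINDEX ('dbo.BigTable', 'PK_BigTable')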
Also, on another note: after I completed the reindexing I changed the recovery model back to 'Full' and ran a transaction log backup (I have a MW job that backs up the transaction log hourly; the backups are then shipped and applied to a standby DB on another server). During the reindexing I had disabled the transaction log backup. When I ran the transaction log backup after putting the DB back in the Full recovery model, I noticed that the backup was huge - 15GB. How did that happen? How did it get so big? The trans log during the reindexing only grew to 300MB, as I was keeping an eye on it while reindexing. Luckily the partition I back up to has enough space.
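In case it is useful to anyone, one simple way to keep an eye on the log is DBCC SQLPERF, which reports the log size and percent used for every database:

DBCC SQLPERF (LOGSPACE)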
January 14, 2004 at 3:50 pm
I think a lot of the 15GB might have been from the Bulk_Logged model - a log backup taken after bulk-logged operations also contains the data pages those operations changed, so it can be far larger than the log file itself (I think this was a QOD a while ago). After you switched back to Full and ran the backup, it probably contained all the activity from the prior recovery model.
I am just guessing. I don't know whether switching recovery models automatically backs up/truncates the T-log or not.
Good Luck
Good Hunting!
AJ Ahrens
webmaster@kritter.net
January 15, 2004 at 12:49 pm
The question I have is: where did all this come from? The trans log never got bigger than 300MB while I had it in the Bulk_Logged model, and I looked at tempdb and it did not appear to grow. So where did the 15GB of data come from?
January 15, 2004 at 6:17 pm
Hmmm... didn't read this one closely enough. Just curious though - the backup was set to ONLY do the T-log and didn't copy the database as well??
Good Hunting!
AJ Ahrens
webmaster@kritter.net
January 15, 2004 at 9:19 pm
Hi AJ Ahrens,
Not sure what you mean by "didn't copy the database as well".