August 27, 2012 at 5:55 am
From time to time, when an intensive log operation takes place on the principal database, we observe very long snapshot recovery times on the mirror site; this affects our users who depend on the mirror snapshots for their reporting queries.
Would reducing the number of Virtual Log Files (VLFs) on the principal-db log help speed up the mirror-snapshot recovery phase?
I would think that the less logically fragmented the log is (the fewer the VLFs), the faster any read operations would be on it, including the redo phase.
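(As a quick gauge of the current fragmentation: DBCC LOGINFO returns one row per VLF, so counting its rows gives the VLF count. The database name below is just a placeholder.)
USE [PrincipalDB];
DBCC LOGINFO; -- one row per VLF; the row count is the VLF count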
Here are a couple of links that got me thinking in this direction:
http://www.sqlskills.com/BLOGS/KIMBERLY/post/Transaction-Log-VLFs-too-many-or-too-few.aspx
http://www.sqlskills.com/blogs/kimberly/post/8-Steps-to-better-Transaction-Log-throughput.aspx
Thanks for any comments.
__________________________________________________________________________________
SQL Server 2016 Columnstore Index Enhancements - System Views for Disk-Based Tables
Persisting SQL Server Index-Usage Statistics with MERGE
Turbocharge Your Database Maintenance With Service Broker: Part 2
August 27, 2012 at 5:59 am
Let's put it this way: lots and lots of VLFs will slow the redo down.
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
September 9, 2012 at 7:22 am
GilaMonster (8/27/2012)
Let's put it this way: lots and lots of VLFs will slow the redo down.
Could the number of VLFs also be a contributing factor to poor mirroring performance, such as frequent mirroring suspensions and slow log transfer/restore rates on the mirror site?
Since these operations depend on reading from the log file, I would think that they would be affected as well.
(I realize that network, disks and server resources have a lot to do with mirroring health, but I wonder if by reducing the number of VLFs I could help things a bit, since this is the only thing I control as a DBA.)
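(One way I could check whether redo rather than the network is the bottleneck, sketched against the standard Database Mirroring performance counters; 'DB1' is a placeholder for the mirrored database:)
SELECT instance_name AS database_name, counter_name, cntr_value AS kb
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%Database Mirroring%'
AND counter_name IN ('Log Send Queue KB', 'Redo Queue KB')
AND instance_name = 'DB1';
-- A persistently large Redo Queue KB alongside a small Log Send Queue KB points
-- at redo speed on the mirror rather than at the network.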
__________________________________________________________________________________
SQL Server 2016 Columnstore Index Enhancements - System Views for Disk-Based Tables
Persisting SQL Server Index-Usage Statistics with MERGE
Turbocharge Your Database Maintenance With Service Broker: Part 2
September 9, 2012 at 8:09 am
Perhaps. Depends just how many VLFs you're talking about.
You've read Kimberly Tripp's blog post on transaction log throughput?
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
September 10, 2012 at 11:03 am
GilaMonster (9/9/2012)
Perhaps. Depends just how many VLFs you're talking about. You've read Kimberly Tripp's blog post on transaction log throughput?
I have:
http://www.sqlskills.com/blogs/kimberly/post/8-Steps-to-better-Transaction-Log-throughput.aspx
Running DBCC LOGINFO on the database returns 3,155 rows (VLFs).
I currently have 4 log files with a total log size of 300 GB.
Data size = 500 GB.
Here is the plan:
(1) Obtain a metric for the largest used-log-space value over a period of time;
I am currently monitoring the log-used space to get an idea of this value after a month's time (the first sketch after this plan shows one way to persist these snapshots):
USE [DB1];
SELECT
DbName = DB_NAME()
,TotalFileSizeMB = SUM( CONVERT( DECIMAL( 12, 2 ), ROUND( a.size / 128.000, 2 ) ) )
,TotalSpaceUsedMB = SUM( CONVERT( DECIMAL( 12, 2 ), ROUND( FILEPROPERTY( a.name, 'SpaceUsed' ) / 128.000, 2 ) ) )
,TotalFreeSpaceMB = SUM( CONVERT( DECIMAL( 12, 2 ), ROUND( ( a.size - FILEPROPERTY( a.name, 'SpaceUsed' ) ) / 128.000, 2 ) ) )
FROM
sys.database_files a
WHERE
a.[type_desc] = 'LOG';
(2) Determine an outage window during which I can set the database to single-user mode;
During the outage:
(3) Set the database to single-user mode;
(4) Take a log backup:
BACKUP LOG databasename TO devicename;
(5) Delete 3 of the log files; there is no need for 4 log files;
(6) Shrink the remaining file to smallest size possible:
DBCC SHRINKFILE(transactionloglogicalfilename, TRUNCATEONLY);
(7) Alter the database to modify the transaction log file to the appropriate size (determined in step 1 above);
increase the size of the log file in increments of 8000 MB and set the autogrowth to 8000 MB (see the second sketch after this plan):
http://www.sqlskills.com/BLOGS/KIMBERLY/post/Transaction-Log-VLFs-too-many-or-too-few.aspx
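Sketch for step 1 (the DBA database and LogSpaceHistory table are hypothetical names; the INSERT would run from a scheduled job):
CREATE TABLE DBA.dbo.LogSpaceHistory (
CapturedAt datetime2(0) NOT NULL DEFAULT SYSDATETIME(),
DbName sysname NOT NULL,
TotalSpaceUsedMB decimal(12, 2) NOT NULL );
USE [DB1];
INSERT INTO DBA.dbo.LogSpaceHistory ( DbName, TotalSpaceUsedMB )
SELECT DB_NAME(),
SUM( CONVERT( DECIMAL( 12, 2 ), ROUND( FILEPROPERTY( name, 'SpaceUsed' ) / 128.000, 2 ) ) )
FROM sys.database_files
WHERE [type_desc] = 'LOG';
-- Target size for step 7:
SELECT MAX( TotalSpaceUsedMB ) FROM DBA.dbo.LogSpaceHistory WHERE DbName = 'DB1';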
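Sketch for steps 5 to 7 (the logical file names LogFile1, kept, and LogFile2 to LogFile4, dropped, are placeholders for whatever sys.database_files reports; a file must hold no active log before REMOVE FILE succeeds):
USE [DB1];
ALTER DATABASE [DB1] REMOVE FILE [LogFile2];
ALTER DATABASE [DB1] REMOVE FILE [LogFile3];
ALTER DATABASE [DB1] REMOVE FILE [LogFile4];
DBCC SHRINKFILE( LogFile1, 0 ); -- shrink as far as the active log allows
-- On SQL Server versions of this era each growth larger than 1 GB creates 16 VLFs,
-- so an 8000 MB step yields 500 MB VLFs; repeat MODIFY FILE up to the step-1 size.
ALTER DATABASE [DB1] MODIFY FILE ( NAME = LogFile1, SIZE = 8000MB );
ALTER DATABASE [DB1] MODIFY FILE ( NAME = LogFile1, SIZE = 16000MB );
ALTER DATABASE [DB1] MODIFY FILE ( NAME = LogFile1, FILEGROWTH = 8000MB );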
__________________________________________________________________________________
SQL Server 2016 Columnstore Index Enhancements - System Views for Disk-Based Tables
Persisting SQL Server Index-Usage Statistics with MERGE
Turbocharge Your Database Maintenance With Service Broker: Part 2
September 10, 2012 at 11:20 am
TruncateOnly is not a valid shrink option when shrinking the log. It's ignored. It's valid for data files only.
The rest sounds fine, just note that you won't necessarily be able to drop the log files after a log backup. You can only drop a log file if no portion of the active log is in it. You may need fake transactions to force the active portion into the log file you're keeping.
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
September 10, 2012 at 1:08 pm
GilaMonster (9/10/2012)
TruncateOnly is not a valid shrink option when shrinking the log. It's ignored. It's valid for data files only. The rest sounds fine, just note that you won't necessarily be able to drop the log files after a log backup. You can only drop a log file if no portion of the active log is in it. You may need fake transactions to force the active portion into the log file you're keeping.
Once I switch the db to single-user mode and take a log backup, there should be no active-log portion in any of the files.
The alternative would be to switch the recovery model to SIMPLE after the log backup (step 4) and then proceed with dropping the 3 log files and shrinking the remaining file (steps 5 and 6). However, I would then have to take a full backup at the end of the procedure to re-establish the log sequence and mirroring, so I would prefer not to go that way.
__________________________________________________________________________________
SQL Server 2016 Columnstore Index Enhancements - System Views for Disk-Based Tables
Persisting SQL Server Index-Usage Statistics with MERGE
Turbocharge Your Database Maintenance With Service Broker: Part 2
September 10, 2012 at 1:32 pm
Marios Philippopoulos (9/10/2012)
Once I switch the db to single-user mode and take a log backup, there should be no active-log portion in any of the files.
There is always an active portion of the log. You can never have all VLFs inactive; there's always at least one that is currently in use.
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
September 12, 2012 at 6:09 am
GilaMonster (9/10/2012)
Marios Philippopoulos (9/10/2012)
Once I switch the db to single-user mode and take a log backup, there should be no active-log portion in any of the files.
There is always an active portion of the log. You can never have all VLFs inactive; there's always at least one that is currently in use.
I see; I am not sure how to manipulate things so that only the first log file is in use and I can drop the others.
Is there a command I can run to see which log file(s) is/are currently in use?
You mentioned something about fake transactions. Would you be able to elaborate?
Any links you can point me to that talk about this in some detail?
Thank you for the help and suggestions!
__________________________________________________________________________________
SQL Server 2016 Columnstore Index Enhancements - System Views for Disk-Based Tables
Persisting SQL Server Index-Usage Statistics with MERGE
Turbocharge Your Database Maintenance With Service Broker: Part 2
September 12, 2012 at 7:38 am
Marios Philippopoulos (9/12/2012)
Is there a command I can run to see which log file(s) is/are currently in use?
DBCC SQLPerf(LogSpace)
You mentioned something about fake transactions. Would you be able to elaborate?
CREATE TABLE GarbageTable (...)
INSERT INTO GarbageTable ... -- lots of rows, not 1 or 2
TRUNCATE TABLE GarbageTable
Repeat the insert and truncate until the head of the log is where you want it, interspersing log backups as needed.
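A rough sketch of that loop (the DBCC LOGINFO column list assumes SQL Server 2012, drop RecoveryUnitId on 2008 R2 and earlier; FileId 2, the filler table and the backup path are all placeholders):
CREATE TABLE GarbageTable ( Filler varchar(100) NOT NULL );
CREATE TABLE #vlf (
RecoveryUnitId int, FileId int, FileSize bigint, StartOffset bigint,
FSeqNo int, [Status] int, Parity tinyint, CreateLSN numeric(25, 0) );
DECLARE @done bit = 0;
WHILE @done = 0
BEGIN
    TRUNCATE TABLE #vlf;
    INSERT #vlf EXEC ('DBCC LOGINFO');
    -- Done once every active VLF (Status = 2) sits in the file being kept (FileId 2)
    IF NOT EXISTS ( SELECT 1 FROM #vlf WHERE [Status] = 2 AND FileId <> 2 )
        SET @done = 1;
    ELSE
    BEGIN
        INSERT INTO GarbageTable -- lots of rows, not 1 or 2
        SELECT TOP (10000) REPLICATE('x', 100)
        FROM sys.all_columns a CROSS JOIN sys.all_columns b;
        TRUNCATE TABLE GarbageTable;
        BACKUP LOG [DB1] TO DISK = N'X:\Backups\DB1_cycle.trn'; -- marks filled VLFs reusable
    END
END
DROP TABLE GarbageTable;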
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
October 16, 2012 at 4:28 am
To find out which log files are currently in use:
DBCC LOGINFO ('DBNAME');
Rows with Status = 2 are the active VLFs; their FileId column shows which log file(s) currently contain the active log.
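To map those FileId values to actual files (type = 1 filters to log files):
USE [DBNAME];
SELECT file_id, name AS logical_name, physical_name
FROM sys.database_files
WHERE [type] = 1; -- log files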
Multiple log files and why they're bad:
http://sqlskills.com/blogs/paul/default.aspx#p1
__________________________________________________________________________________
SQL Server 2016 Columnstore Index Enhancements - System Views for Disk-Based Tables
Persisting SQL Server Index-Usage Statistics with MERGE
Turbocharge Your Database Maintenance With Service Broker: Part 2
October 16, 2012 at 6:41 am
That is correct. Did you perform the create, insert, truncate that Gail advised?
-----------------------------------------------------------------------------------------------------------
"Ya can't make an omelette without breaking just a few eggs" 😉
October 16, 2012 at 8:15 am
Perry Whittle (10/16/2012)
that is correct, did you perform the create, insert, truncate that Gail advised
I will probably not do that, as it seems a bit too much voodoo to me... 🙂
I will take my chances and drop whichever log files are not busy at the time of the outage.
__________________________________________________________________________________
SQL Server 2016 Columnstore Index Enhancements - System Views for Disk-Based Tables
Persisting SQL Server Index-Usage Statistics with MERGE
Turbocharge Your Database Maintenance With Service Broker: Part 2
October 16, 2012 at 8:21 am
:rolleyes:
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
October 16, 2012 at 8:21 am
Marios Philippopoulos (10/16/2012)
I will probably not do that, as it seems a bit too much voodoo to me... 🙂
Whats voodoo about creating a table, inserting some records and truncating it again :w00t: 😀
-----------------------------------------------------------------------------------------------------------
"Ya can't make an omelette without breaking just a few eggs" 😉