February 16, 2010 at 11:40 pm
Hi all,
I am having some confusion regarding Kimberly's blog, as per the details below.
She says in her blog:
1. In SQL Server 200X, autogrowth can create blocking... so, it's best to minimize this in general.
How does it create blocking?
2. In general, most transaction logs will only have 20 or 30 VLFs - even 50 could be reasonable depending on the total size of the transaction log. However, in many cases what happens is that excessive autogrowths can cause an excessive number of VLFs to be added - sometimes resulting in hundreds of VLFs. Having an excessive number of VLFs can negatively impact all transaction log related activities and you may even see degradation in performance when transaction log backups occur. To see how many VLFs you have, simply look at the number of rows returned by DBCC LOGINFO. The number of rows returned equals the number of VLFs your transaction log file has. If you have more than 50, I would recommend fixing it and adjusting your autogrowth so that it doesn't occur as frequently. To get rid of all of the excessive VLFs, follow these easy steps to shrink off the fragmented chunk and add a new, clean chunk to your transaction log:
When I run DBCC LOGINFO I get the output below. How do I read the LSN to work out where each VLF is, and how should I understand this?
FileId  FileSize  StartOffset  FSeqNo  Status  Parity  CreateLSN
2       253952    8192         167     0       64      0
2       262144    262144       170     0       64      0
2       262144    524288       168     0       128     73000000046500000
2       262144    786432       166     0       64      75000000001600000
2       262144    1048576      169     0       64      78000000049300000
2       262144    1310720      173     0       64      80000000011500000
2       262144    1572864      171     0       64      80000000048500000
2       262144    1835008      172     0       64      81000000049600000
2       262144    2097152      165     0       128     83000000011000000
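For reference, here is a rough sketch (not the exact script from Kimberly's article) of how output like this can be captured into a table so the rows can be counted - one row per VLF. The column list is assumed from the header above and matches SQL Server 2005/2008; newer versions add a leading RecoveryUnitId column, so adjust it for your build.

-- Sketch: capture DBCC LOGINFO for the current database and count the rows.
-- Column list assumed from the output above (SQL Server 2005/2008).
CREATE TABLE #loginfo
(
    FileId      INT,
    FileSize    BIGINT,
    StartOffset BIGINT,
    FSeqNo      INT,
    [Status]    INT,
    Parity      TINYINT,
    CreateLSN   NUMERIC(38, 0)
);

INSERT INTO #loginfo
EXEC ('DBCC LOGINFO');

-- One row per VLF, so this count is the number of VLFs in the log file.
SELECT COUNT(*) AS VLF_Count FROM #loginfo;

DROP TABLE #loginfo;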
February 16, 2010 at 11:53 pm
When the log file needs to grow, the growth blocks transactions from completing until the file has adequate space; the newly added log space also has to be zero-initialized (log files cannot use instant file initialization), which is why the wait can be noticeable. The log grows in the first place because it does not have enough free space to complete your current transaction.
If you look further in Kimberly's article, she has a script that will show how many VLFs are in your log file. VLFs are per log file (and there should be one log file per database). In that article, she also discusses the standard number of VLFs added per growth, and an efficient number of VLFs for a given log size (for instance, only 8 VLFs in an 8GB log file could be too few). Check the links for Kimberly in my sig.
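If you want to see what the log file's autogrowth is currently set to before adjusting it, a quick check along these lines (run in the database in question) shows the size and growth setting; sys.database_files reports both in 8 KB pages unless is_percent_growth is set.

-- Sketch: current size and autogrowth setting of the log file for the
-- current database. growth is in 8 KB pages when is_percent_growth = 0,
-- otherwise it is a percentage.
SELECT  name,
        size * 8 / 1024 AS size_mb,
        CASE WHEN is_percent_growth = 1
             THEN CAST(growth AS VARCHAR(10)) + ' percent'
             ELSE CAST(growth * 8 / 1024 AS VARCHAR(10)) + ' MB'
        END AS growth_setting
FROM    sys.database_files
WHERE   type_desc = 'LOG';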
Jason...AKA CirqueDeSQLeil
_______________________________________________
I have given a name to my pain...MCM SQL Server, MVP
SQL RNNR
Posting Performance Based Questions - Gail Shaw
Learn Extended Events
February 16, 2010 at 11:58 pm
Step 1: Run DBCC SQLPERF(LOGSPACE) before viewing the VLFs. When you run this command from the master database, it lists the log size and the percentage of log space used for every database on your instance.
Step 2: Look for databases with log space used > 80%, then check their VLFs with DBCC LOGINFO and review the Status column.
Step 3: Look at the Status column for a lot of repeated 2s (Status = 2 means the VLF is still active). If there are many of them, go ahead and try shrinking the log file.
The above steps are an operational procedure for VLDBs recommended by my gurus, and I have followed it without any harm to date.
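As a rough sketch of steps 1 and 2 (the temp table name and the 80% threshold are just illustrative), you can capture DBCC SQLPERF(LOGSPACE) into a table and filter it:

-- Sketch of steps 1-2: flag databases whose log is more than 80% used.
CREATE TABLE #logspace
(
    DatabaseName    SYSNAME,
    LogSizeMB       FLOAT,
    LogSpaceUsedPct FLOAT,
    [Status]        INT
);

INSERT INTO #logspace
EXEC ('DBCC SQLPERF(LOGSPACE)');

SELECT DatabaseName, LogSizeMB, LogSpaceUsedPct
FROM   #logspace
WHERE  LogSpaceUsedPct > 80
ORDER BY LogSpaceUsedPct DESC;

DROP TABLE #logspace;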
Cheers, Satish 🙂
February 17, 2010 at 12:22 am
TECHBABU (2/16/2010)
Step 3: Look at the Status column for a lot of repeated 2s (Status = 2 means the VLF is still active). If there are many of them, go ahead and try shrinking the log file.
A very important piece is missing from these steps. If you only shrink the log file, it will grow again and continue to fragment excessively. You should only shrink it during an outage window. After shrinking the log file, you need to grow it back out to an appropriate size, using appropriate growth chunks, as in the sketch below. This will keep your log file performing nicely and prevent future fragmentation.
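A minimal sketch of that shrink-and-regrow approach is below; the database name, logical log file name, and sizes are placeholders you would replace with your own, and it assumes you take a log backup first if the database is in the full recovery model.

-- Sketch only: shrink the fragmented log, then grow it back out in a larger
-- chunk during an outage window. Names and sizes below are placeholders.
USE YourDatabase;

-- BACKUP LOG YourDatabase TO DISK = N'...';  -- first, if in full recovery

DBCC SHRINKFILE (YourDatabase_log, 1);        -- shrink the log as small as possible

ALTER DATABASE YourDatabase
MODIFY FILE (NAME = N'YourDatabase_log', SIZE = 8000MB);       -- grow back to working size

ALTER DATABASE YourDatabase
MODIFY FILE (NAME = N'YourDatabase_log', FILEGROWTH = 512MB);  -- sensible fixed growth increment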
Jason...AKA CirqueDeSQLeil
_______________________________________________
I have given a name to my pain...MCM SQL Server, MVP
SQL RNNR
Posting Performance Based Questions - Gail Shaw
Learn Extended Events