January 8, 2010 at 12:56 pm
Can anyone tell me in sql server 2005 when the last checkpoint was performed or where a history of checkpoints can be found?
Thanks,
Steve
January 8, 2010 at 1:37 pm
That information isn't recorded in any table or DMV. I don't even know if there's a profiler event for it.
If you're feeling brave, you can directly interrogate the transaction log, but it's not particularly easy to read.
SELECT * FROM fn_dblog(NULL, NULL);
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
January 9, 2010 at 6:15 am
The only other place that the last checkpoint is recorded is in the database boot page. Although you can get to that using DBCC DBINFO, it's not easy to use the LSN recorded (which is stored in three parts and needs converting to hex) to work out when the checkpoint occurred.
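For reference, dumping the boot page might look something like this sketch ('YourDatabase' is a placeholder, and DBCC DBINFO and the dbi_checkptLSN field it reports are undocumented, so treat this as unsupported):

```sql
-- Route DBCC output to the client session instead of the error log
DBCC TRACEON (3604);

-- Dump the boot page; look for the dbi_checkptLSN field,
-- which holds the last checkpoint LSN in three parts
DBCC DBINFO ('YourDatabase');

DBCC TRACEOFF (3604);
```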
One might also look to use the CHECKPOINT_QUEUE wait type to see how long the checkpoint process has been waiting, using something like:
SELECT wait_type, wait_time
FROM sys.dm_exec_requests
WHERE command = N'CHECKPOINT'
...but the wait time isn't reset when the checkpoint runs, so that doesn't work. Also, we can't tell when exactly a particular database might have been checkpointed, just that the process ran.
Reading the transaction log using fn_dblog is not ideal, but this would work:
SELECT [Checkpoint Begin], [Checkpoint End]
FROM fn_dblog(NULL, NULL)
WHERE Operation IN (N'LOP_BEGIN_CKPT', N'LOP_END_CKPT');
It would be interesting to know why you need to know this though...
January 9, 2010 at 10:57 am
Paul White (1/9/2010)
It would be interesting to know why you need to know this though...
Agreed. It's not something I can see many uses for knowing. Maybe tracking tran log reuse in Simple recovery, but there are easier ways to do that.
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
January 9, 2010 at 11:31 am
Actually, I just remembered: there's a trace flag that, when enabled, writes information on the checkpoint phases into the error log. It's one of those that you're not really supposed to use unless directed to do so by a support professional (read: someone from CSS). I would not suggest leaving it on for long periods of time; I don't know what kind of overhead there is, or whether it has any side effects.
DBCC TRACEON (3502)
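A sketch of how that might be used (3605 is sometimes combined with it to route trace output to the error log, and sp_readerrorlog is itself undocumented; behaviour of undocumented flags can vary by build):

```sql
-- Enable checkpoint logging server-wide (-1 = global);
-- 3605 routes trace output to the error log
DBCC TRACEON (3502, 3605, -1);

-- ...let a checkpoint run, then read the error log
EXEC sp_readerrorlog;

-- Turn it off again when done
DBCC TRACEOFF (3502, 3605, -1);
```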
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
January 9, 2010 at 8:10 pm
It seems there is a trace flag for everything :w00t:
While looking into that I also found this (2008 only I'm afraid):
Tracking checkpoints using Extended Events
Paul
January 10, 2010 at 2:17 am
Paul White (1/9/2010)
It seems there is a trace flag for everything :w00t:
Very likely. I just wish there was a list of the documented ones, rather than having to check every kb article for a mention.
Knew there was an extended event, didn't mention since this is SQL 2005.
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
January 11, 2010 at 6:04 am
Why do I need to know? I have a db in simple recovery mode and there's a lot of data loading running overnight.
It causes the log to grow large enough that I want it truncated back to its initial size.
I was looking around various sites and the SQL Server 2005 Unleashed book, trying to find out why there is such big log usage under the simple recovery model. I found the cause; now for the solution.
The solution points towards running a checkpoint. An automatic checkpoint must not have occurred, so I decided to add a CHECKPOINT command to a script we use to check how large the db's data and log portions are, to see if the size is getting out of hand.
I wanted to be able to verify when a checkpoint was last run and that my checkpoint ran in the script.
BTW, running a checkpoint manually did drop the log to the initial log size.
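For what it's worth, the check-and-truncate step in a script like that might look something like this sketch (CHECKPOINT and DBCC SQLPERF are both documented commands; the threshold logic is up to you):

```sql
-- Force a checkpoint in the current database so that
-- inactive portions of the log can be marked reusable
CHECKPOINT;

-- Report log size and percentage used for every database
DBCC SQLPERF (LOGSPACE);
```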
January 11, 2010 at 7:30 am
Bear in mind that in Simple a checkpoint will only truncate the log to the beginning of the oldest active log segment. Any open transactions will keep a log segment active and prevent truncation. If you're running lots of long transactions, it's common to see the log not truncate for a while.
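If the log isn't truncating, sys.databases will tell you what it's waiting on (the log_reuse_wait_desc column exists from SQL 2005 onwards; 'YourDatabase' is a placeholder):

```sql
-- ACTIVE_TRANSACTION, REPLICATION, etc. indicate what is
-- preventing log space from being reused
SELECT name, recovery_model_desc, log_reuse_wait_desc
FROM sys.databases
WHERE name = 'YourDatabase';
```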
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
January 11, 2010 at 12:09 pm
Yes, it truncates the unused part of the log prior to the oldest active transaction.
There were no open transactions at the time. That's why I found it odd that an automatic checkpoint was not done, and why I am looking for something that tells me the history of checkpoints. Then I can at least try to figure out why.
Longer term, I plan to convert to full recovery mode and have a job or alert watch the percentage of log full and back up the log as needed. I have not had a chance to do this yet.
January 11, 2010 at 12:53 pm
sgambale (1/11/2010)
Yes, it truncates the unused part of the log prior to the oldest active transaction.
Not quite. The log is divided into sections. The sections can be active or inactive, and only the inactive sections will be marked reusable by a checkpoint. Any open transaction, replication, or mirroring will keep a section active and prevent reuse.
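Those sections are the virtual log files, and you can see which ones are currently in use with DBCC LOGINFO (undocumented, but long-standing; a Status of 2 means the VLF is active):

```sql
-- One row per virtual log file in the current database;
-- Status 2 = active (not reusable), 0 = inactive
DBCC LOGINFO;
```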
That's why I found it odd that an automatic checkpoint was not done.
I'm sure it was done. Why it didn't mark any portion of the log as reusable is a different question.
Longer term, I plan to convert to full recovery mode and have a job or alert watch the percentage of log full and back up the log as needed. I have not had a chance to do this yet.
I suggest, if you're going for full recovery (I assume for point-in-time recoverability), that you schedule the log backups on a regular basis; don't try fancy stuff like backing up at a percentage full. That just makes it hard to find all the log backups to restore and easier to lose data. The interval between log backups should be the maximum amount of data you're willing to lose.
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
January 11, 2010 at 8:41 pm
Just to expand on Gail's comment about the log structure: if you're interested in knowing more (and specifically in seeing which virtual log files are in use), see Exploring the transaction log structure, a blog entry by Kalen Delaney. It also contains a number of links you might find useful in expanding your knowledge in this area.
January 12, 2010 at 6:39 am
Gail,
If I back up the log with INIT once a day, then with NOINIT the rest of the time, there is only one backup file. However, I am a little concerned there may be a performance issue with using one file that has to be opened and appended to. About 95% of log activity occurs within a few hours each day.
January 12, 2010 at 6:52 am
I would rather suggest one file per backup. That way you don't overwrite every day. Problem with your design is, say 20 minutes after you did the backup with init, someone needs to recover something deleted an hour earlier but you've overwritten the backup files.
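A sketch of what a date-stamped, one-file-per-backup log backup might look like (MyDB and the C:\Backups path are placeholders; adjust to taste):

```sql
DECLARE @file nvarchar(260);

-- Build a unique file name, e.g. MyDB_log_20100112_0700.trn
SET @file = N'C:\Backups\MyDB_log_'
          + CONVERT(nvarchar(8), GETDATE(), 112)
          + N'_' + REPLACE(CONVERT(nvarchar(5), GETDATE(), 108), N':', N'')
          + N'.trn';

-- Each backup goes to its own file, so nothing is overwritten
BACKUP LOG MyDB TO DISK = @file WITH INIT;
```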
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
January 12, 2010 at 7:00 am
My thought is to run the daily full backup and, at the end, do a log backup with INIT.
Then the rest of the day is with NOINIT.
This is a DW app, not OLTP.