It cannot be stated too often that, unless your database is operating in SIMPLE recovery mode, it is very important that regular backups are performed on the transaction log. This will control the size of the transaction log and ensure that, in the event of a disaster, you are able to restore your database to some point shortly before the disaster occurred. These transaction log backups will be performed alongside regular full database (data file) backups.
If you are working on a test system where you don't need to restore to a previous point in time, or you are happy to restore just to the last full database backup, then you should operate the database in SIMPLE mode.
Let's discuss these issues in more detail.
The Importance of Backups
Consider, for example, the situation in which a SQL Server database "crashes", perhaps due to a hardware failure, and the "live" data files (mdf and ndf files), along with the transaction log file (ldf file), are no longer accessible.
In the worst case, if no backups (copies) of these files exist elsewhere, then you will suffer 100% data loss. In order to ensure that you can recover the database and restore the data as it existed at some point before the Server crashed, or before data was lost or corrupted for other reasons, the DBA needs to make regular backups of both the data and log files.
There are three main types of backup that a DBA can perform (although only the first two apply when in SIMPLE recovery mode):
- Full database backups – backs up all the data in the database. This is essentially making a copy of the MDF file(s) for a given database.
- Differential database backups – makes a copy of any data that has changed since the last full backup.
- Transaction log backups – makes a copy of all the log records inserted into the transaction log since the last transaction log backup (or database checkpoint, if working in SIMPLE recovery mode). When a log backup is made, the log generally gets truncated so that space in the file can be reused, although some factors can delay this (see Level 8 – Help, my log is full).
Some junior DBAs and many developers, perhaps misled by the term "full", assume that a full database backup backs up "everything": both the data and the contents of the transaction log. This is not correct. Essentially, both full and differential backups only back up the data, although they do also back up enough of the transaction log to enable recovery of the backed-up data and to reproduce any changes made while the backup was in progress. However, in practical terms, a full database backup does not back up the transaction log, and so does not result in truncation of the transaction log. Only a transaction log backup results in truncation of the log, and so performing log backups is the only correct way to control the size of the log files in production systems. Some of the common but incorrect ways will be discussed in Level 8 – Help, my log is full.
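For reference, the three backup types map onto T-SQL along the following lines. This is only a minimal sketch; the database name (TestDB) and the backup file paths are placeholders, not part of the original examples.

-- Full database backup: copies all the data (plus enough log to recover it)
BACKUP DATABASE TestDB TO DISK = 'D:\SQLBackups\TestDB_Full.bak';

-- Differential backup: copies only the data changed since the last full backup
BACKUP DATABASE TestDB TO DISK = 'D:\SQLBackups\TestDB_Diff.bak' WITH DIFFERENTIAL;

-- Transaction log backup: copies log records added since the last log backup
-- and allows the inactive portion of the log to be truncated
BACKUP LOG TestDB TO DISK = 'D:\SQLBackups\TestDB_Log.trn';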
File and Filegroup Backups
Large databases are sometimes organized into multiple filegroups and it's possible to perform Full and Differential backups on individual filegroups, or files within those filegroups, rather than on the whole database. This topic won't be discussed further in this Stairway.
Recovery Models
SQL Server database backup and restore operations occur within the context of the recovery model of that database. A recovery model is a database property that determines whether or not you need to (or even can) back up the transaction log, and how operations are logged. There are also some differences in the restore operations that are available, such as granular page and file restores, but we will not be covering these in this series.
In general operation, a database will be operating in either SIMPLE or FULL recovery mode, and the most important distinctions between the two are as follows:
- SIMPLE – the transaction log is only used for database recovery and for rollback operations. It is automatically truncated during periodic checkpoints. It cannot be backed up and so cannot be used to restore the database to the state in which it existed at some point in the past.
- FULL – the transaction log is not automatically truncated during periodic checkpoints and so can be backed up and used to restore data to a previous point in time, as well as for database recovery and rollback. The log file is only truncated when log backups occur.
There is also a third mode, BULK_LOGGED, in which certain operations that would normally write heavily to the transaction log are minimally logged, in order not to overwhelm the transaction log.
Operations that can be minimally logged
Examples of operations that can be minimally logged include bulk import operations (using, for example, bcp or BULK INSERT), SELECT INTO operations and certain index operations, such as index rebuilds. A full list can be found here: http://msdn.microsoft.com/en-us/library/ms191244.aspx.
Generally speaking, a database running in FULL recovery mode may be temporarily switched to BULK_LOGGED mode in order to run such operations with minimal logging, and then switched back to FULL mode. Running permanently in BULK_LOGGED mode is not a viable way to minimize the size of transaction logs. We will discuss this in more detail in Managing the Log in BULK_LOGGED Recovery Mode.
Choosing the Right Recovery Model
The overriding criterion for choosing between operating a database in FULL recovery mode and SIMPLE mode is as follows: how much of the data are you willing to risk losing?
In SIMPLE recovery mode, only full and differential backups are possible. Let's say that you rely exclusively on full backups, performing one every morning at 2AM, and the server experiences a fatal crash at 1AM one morning. In this case, you would be able to restore the full database backup taken at 2AM the previous morning, and would have lost 23 hours' worth of data.
It is possible to perform differential backups in between the full backups, to reduce the amount of data at risk of loss. All backups are I/O-intensive processes, but this is especially true for full and, to a lesser extent, differential backups. They are likely to affect the performance of the database, and so should not be run during times when users are accessing the database. In practical terms, if you work in SIMPLE recovery mode, your exposure to the risk of data loss is going to be of the order of several hours.
If a database holds business-critical data and you would prefer your exposure to data loss to be measured in minutes rather than hours, then you will need to operate the database in FULL recovery mode. In this mode, you will need to take a full database backup, followed by a series of frequent transaction log backups, followed by another full backup, and so on.
In this case you can, in theory, restore the most recent valid full backup (plus the most recent differential backup, if taken), followed by the chain of log backups taken since that last full or differential backup. Then, during the recovery process, all of the actions recorded in the backed-up log files will be rolled forward in order to restore the database to a point in time very close to the time of the disaster.
The question of how frequently to back up the log file will depend, again, on how much data you are prepared to lose, plus the workload on your server. In critical financial or accounting applications, where the tolerance to data loss is more or less zero, you may be taking log backups every 15 minutes, or perhaps even more frequently. In our previous example, this would mean that you could restore the 2AM full backup and then apply each of the log backups in turn, assuming you have a complete log chain extending from the full backup you're using as the base for the database recovery, up to the one taken at 12:45AM, 15 minutes before the crash. In fact, if the current log is still accessible after the crash, allowing you to perform a tail log backup, you may be able to minimize your data loss to close to zero.
Log chains and tail log backups…
…will be discussed in detail in Level 5 – Managing the Log in FULL Recovery Model.
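To make the restore sequence concrete, here is a minimal sketch of the sort of commands involved, assuming one full backup and two subsequent log backups for TestDB; the file names and paths are illustrative only, and point-in-time options such as STOPAT are left to later levels.

-- Restore the most recent full backup, leaving the database ready for more restores
RESTORE DATABASE TestDB FROM DISK = 'D:\SQLBackups\TestDB_Full.bak' WITH NORECOVERY;

-- Apply each log backup in sequence, preserving the log chain
RESTORE LOG TestDB FROM DISK = 'D:\SQLBackups\TestDB_Log_1.trn' WITH NORECOVERY;
RESTORE LOG TestDB FROM DISK = 'D:\SQLBackups\TestDB_Log_2.trn' WITH NORECOVERY;

-- Run recovery and bring the database online
RESTORE DATABASE TestDB WITH RECOVERY;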
Of course, with FULL recovery comes a much higher maintenance overhead, in terms of the extra effort of creating and monitoring the jobs required to run very frequent transaction log backups, the I/O resources that these backups require (albeit for short periods of time), and the disk space required to store a large number of backup files. Due consideration needs to be given to this, at a business level, before choosing the appropriate recovery mode for a given database.
Setting and Switching Recovery Models
The recovery model can be set using one of the simple commands shown in Listing 3.1.
USE master;

-- set recovery model to FULL
ALTER DATABASE TestDB SET RECOVERY FULL;

-- set recovery model to SIMPLE
ALTER DATABASE TestDB SET RECOVERY SIMPLE;

-- set recovery model to BULK_LOGGED
ALTER DATABASE TestDB SET RECOVERY BULK_LOGGED;
A database will adopt the default recovery model specified by the model database. In many cases, this will mean that the 'default' recovery model for a database is FULL, but different editions of SQL Server may have different defaults for the model database.
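As a quick sanity check, the following sketch shows how to inspect, and if necessary change, the recovery model of the model database; treat the ALTER statement as an example only, since it affects every database created afterwards.

-- Check the default recovery model that new databases will inherit
SELECT name, recovery_model_desc
FROM sys.databases
WHERE name = 'model';

-- Optionally change the default for databases created from now on
ALTER DATABASE model SET RECOVERY FULL;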
Discovering the Recovery Model
In theory, we can find out which model a given database is using by executing the query shown in Listing 3.2.
SELECT name,
       recovery_model_desc
FROM sys.databases
WHERE name = 'TestDB';
GO
However, be careful with this query, as it may not always tell the truth. For example, if we create a brand new database and then immediately run the command from Listing 3.2, it would report that the database was in FULL recovery model. However, in fact, until a full database backup is taken the database will be operating in auto-truncate mode (i.e. SIMPLE).
We can see this in action by creating a new database on a SQL Server 2008 instance, where the default recovery model is FULL. We create a table with some test data, and then check the recovery model, as shown in Listing 3.3.
/* STEP 1: CREATE THE DATABASE */
USE master;

IF EXISTS ( SELECT  name
            FROM    sys.databases
            WHERE   name = 'TestDB' )
    DROP DATABASE TestDB;

CREATE DATABASE TestDB ON
(
  NAME = TestDB_dat,
  FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\Data\TestDB.mdf'
) LOG ON
(
  NAME = TestDB_log,
  FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\Data\TestDB.ldf'
);

/* STEP 2: INSERT A MILLION ROWS INTO A TABLE */
USE TestDB
GO

IF OBJECT_ID('dbo.LogTest', 'U') IS NOT NULL
    DROP TABLE dbo.LogTest;

SELECT TOP 1000000
        SomeID = IDENTITY( INT, 1, 1 ),
        SomeInt = ABS(CHECKSUM(NEWID())) % 50000 + 1,
        SomeLetters2 = CHAR(ABS(CHECKSUM(NEWID())) % 26 + 65)
                       + CHAR(ABS(CHECKSUM(NEWID())) % 26 + 65),
        SomeMoney = CAST(ABS(CHECKSUM(NEWID())) % 10000 / 100.0 AS MONEY),
        SomeDate = CAST(RAND(CHECKSUM(NEWID())) * 3653.0 + 36524.0 AS DATETIME),
        SomeHex12 = RIGHT(NEWID(), 12)
INTO    dbo.LogTest
FROM    sys.all_columns ac1
        CROSS JOIN sys.all_columns ac2;

SELECT  name,
        recovery_model_desc
FROM    sys.databases
WHERE   name = 'TestDB';
GO

name     recovery_model_desc
-------- -------------------
TestDB   FULL
This indicates that we're in FULL recovery mode, but let's now check the log space usage, force a CHECKPOINT, and then recheck the log usage, as shown in Listing 3.4.
DBCC SQLPERF(LOGSPACE);
-- DBCC SQLPERF reports a 110 MB log file about 90% full

CHECKPOINT
GO

DBCC SQLPERF(LOGSPACE);
-- DBCC SQLPERF reports a 100 MB log file about 6% full
Note that the log file is roughly the same size, but is now only 6% full; the log has been truncated and the space made available for reuse. Although the database is assigned to FULL recovery mode, it is not actually operating in that mode until the first full database backup is taken. Interestingly, this means that, instead of explicitly forcing a CHECKPOINT, we could have achieved the same effect by running a full backup of the TestDB database. The full backup operation triggers a CHECKPOINT and the log is truncated.
To tell for sure what recovery model is in operation, execute the query shown in Listing 3.5.
SELECT db_name(database_id) AS 'DatabaseName',
       last_log_backup_lsn
FROM master.sys.database_recovery_status
WHERE database_id = db_id('TestDB');
GO

DatabaseName   last_log_backup_lsn
-------------- -------------------
TestDB         NULL
If a value of NULL appears in the last_log_backup_lsn column, then the database is actually in auto-truncate mode, and the log will be truncated whenever a database checkpoint occurs. Having performed a full database backup, you will find that the column is populated with the LSN of the log record that recorded the backup operation, and at this point the database is truly in FULL recovery mode. From this point on, a full database backup will have no effect on the transaction log; the only way to truncate the log will be to back up the log.
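To watch the switch happen, a minimal sketch (the backup path is a placeholder) is to take that first full backup and then re-run the query from Listing 3.5; last_log_backup_lsn should no longer be NULL.

-- Take the first full database backup; this starts the log chain
BACKUP DATABASE TestDB TO DISK = 'D:\SQLBackups\TestDB_Full.bak';

-- Re-run Listing 3.5: the database is now genuinely operating in FULL recovery mode
SELECT db_name(database_id) AS 'DatabaseName',
       last_log_backup_lsn
FROM master.sys.database_recovery_status
WHERE database_id = db_id('TestDB');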
Switching Models
If you ever switch a database from FULL or BULK_LOGGED mode to SIMPLE mode, this will break the log chain and you'll only be able to recover the database up to the point of the last log backup taken before you switched. Therefore, it's recommended to take that log backup immediately before switching. If you subsequently switch the database back from SIMPLE to FULL or BULK_LOGGED mode, remember that the database will actually continue operating in auto-truncate mode (Listing 3.5 will display NULL) until you perform another full backup.
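As a hedged sketch of that sequence (file names and paths are placeholders): take a final log backup, switch to SIMPLE, and, when you later switch back, take a full backup to start a new log chain.

-- Capture the tail of the existing log chain before leaving FULL recovery
BACKUP LOG TestDB TO DISK = 'D:\SQLBackups\TestDB_Log_Final.trn';

ALTER DATABASE TestDB SET RECOVERY SIMPLE;

-- ... later, when point-in-time recovery is required again ...
ALTER DATABASE TestDB SET RECOVERY FULL;

-- The database stays in auto-truncate mode until this full backup completes
BACKUP DATABASE TestDB TO DISK = 'D:\SQLBackups\TestDB_Full_NewChain.bak';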
If you switch from FULL to BULK_LOGGED mode then this will not break the log chain. However, any bulk operations that occurred while in BULK_LOGGED mode will not be fully logged in the transaction log and so cannot be controlled on an operation-by-operation basis, in the same way that fully logged operations can. This means that recovering a database to a point in time within a transaction log backup that contains bulk operations is not possible. You can only recover to the end of that log backup. In order to "re-enable" point-in-time restore, switch back to FULL mode after the bulk operation is complete and immediately take a log backup.
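A hedged sketch of that pattern, using a SELECT INTO as the bulk operation (the table, file names and paths are illustrative only, not part of the original examples):

-- Shrink the window in which point-in-time restore is unavailable:
-- take a log backup immediately before switching
BACKUP LOG TestDB TO DISK = 'D:\SQLBackups\TestDB_Log_PreBulk.trn';

ALTER DATABASE TestDB SET RECOVERY BULK_LOGGED;

-- This bulk operation is minimally logged in BULK_LOGGED mode
SELECT *
INTO dbo.LogTest_Copy
FROM dbo.LogTest;

ALTER DATABASE TestDB SET RECOVERY FULL;

-- Immediately take a log backup to re-enable point-in-time restore
BACKUP LOG TestDB TO DISK = 'D:\SQLBackups\TestDB_Log_PostBulk.trn';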
Automating and Verifying Backups
Ad-hoc database and transaction log backups can be performed via simple T-SQL scripts, in SQL Server Management Studio. However, for production systems, the DBA will need a way to automate these backups, and to verify that the backups are valid and can be used to restore your data.
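Whichever tool performs the automation, the underlying verification step can be as simple as the following sketch (the backup path is a placeholder): back up with checksums, then ask SQL Server to verify the backup file.

-- Write page checksums into the backup so that corruption can be detected
BACKUP LOG TestDB TO DISK = 'D:\SQLBackups\TestDB_Log.trn' WITH CHECKSUM;

-- Verify that the backup file is readable and internally consistent
-- (this does not restore any data)
RESTORE VERIFYONLY FROM DISK = 'D:\SQLBackups\TestDB_Log.trn' WITH CHECKSUM;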
Full coverage of this topic is outside the scope of this article, but some of the available options are listed below. Due to some of the shortcomings of the SSMS Maintenance Plans, most experienced DBAs would opt to write their own scripts and then automate them.
- SSMS Maintenance Plans Wizard and Designer – two tools, built into SSMS, which allow you to configure and schedule a range of core database maintenance tasks, including full database backups and transaction log backups. The DBA can also run DBCC integrity checks, schedule jobs to remove old backup files, and so on. An excellent description of these tools, and their limitations, can be found in Brad McGehee's book, Brad's Sure Guide to SQL Server Maintenance Plans.
- T-SQL Scripts – you can write custom T-SQL scripts to automate your backup tasks. A well-established and respected set of maintenance scripts is provided by Ola Hallengren. His scripts create a variety of stored procedures, each performing a specific database maintenance task, including backups, and they can be automated using SQL Server Agent jobs. Richard Waymire's Stairway to SQL Server Agent is a good source of information on this topic.
- PowerShell / SMO scripting – more powerful and versatile than T-SQL scripting, but with a steeper learning curve for many DBAs, PowerShell can be used to script and automate almost any maintenance task. See, for example: http://www.simple-talk.com/author/allen-white/.
- Third-party backup tools – several third-party tools exist that can automate backups, as well as verify and monitor them. Most offer backup compression and encryption, as well as additional features to ease backup management. Examples include Red Gate's SQL Backup and Quest's LiteSpeed.