June 16, 2014 at 3:29 am
Having a strange problem this morning. The TEMPDB transaction log file keeps growing.
The database server is new and the transaction log was presized to 1 GB at installation. After a number of databases were installed, the log file grew over a day to 38 GB. Issuing a manual checkpoint was the only way to free enough space to shrink it back to a usable size. Usage of the file is still climbing.
I am struggling to find which process is causing the log to be used so heavily. The log_reuse_wait_desc for tempdb returns "NOTHING", and tempdb itself isn't being used much or growing in size.
Any pointers would be useful.
Thanks.
June 16, 2014 at 7:47 am
There used to be a problem associated with the settings used for initial size and growth. I also understand that they had a couple of problems with "regression code" accidentally being promoted for SP1 of 2012. I don't know whether this problem might have been included in that "regression code".
What are your size and growth settings for TempDB?
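For reference, the current settings can be read from sys.master_files; a quick sketch (sizes are reported in 8 KB pages, hence the arithmetic):

```sql
-- Current size and growth settings for tempdb's files
SELECT name,
       type_desc,
       size * 8 / 1024 AS size_mb,
       CASE WHEN is_percent_growth = 1
            THEN CAST(growth AS varchar(10)) + ' percent'
            ELSE CAST(growth * 8 / 1024 AS varchar(10)) + ' MB'
       END AS growth_setting
FROM sys.master_files
WHERE database_id = DB_ID('tempdb');
```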
--Jeff Moden
Change is inevitable... Change for the better is not.
June 16, 2014 at 8:24 am
Thanks for the reply.
I've been doing some reading since this morning to try and understand what is happening.
Initially the log was set up as 1 GB in size with autogrowth set to 128 MB. When I came into the office this morning, the log was 35 GB with 99% usage. I couldn't see anything running on the server that would cause the log to be so full. I ran a manual CHECKPOINT in tempdb and the log emptied to 20% full. I then shrank it to 5 GB and have been monitoring the log usage throughout the day.
What is strange is that despite sizing the log at 1 GB on server creation, the VLFs in the log file are each over 2 GB. When the log was 35 GB there were 16 VLFs, all equally sized. I am confused by this, since I would have thought the growth settings would have prevented such large VLFs. I am thinking that, because of the large VLFs, transactions are hitting the 70% checkpoint threshold but the log cannot truncate because those VLFs are still active.
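For anyone wanting to check the same thing, the VLF layout can be inspected with the (undocumented but widely used) DBCC LOGINFO:

```sql
-- One row per VLF in the current database.
-- FileSize is in bytes; Status = 2 means the VLF is still active
-- and therefore cannot be truncated.
USE tempdb;
DBCC LOGINFO;
```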
What is also confusing me is that the log is constantly being used. Running DBCC SQLPERF(LOGSPACE), the percentage used is always increasing despite not much going on on the server. Running
SELECT Name, log_reuse_wait_desc
FROM sys.databases
shows "NOTHING" for tempdb.
Is there any way I can see what is writing to the log file?
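One rough way to do this is to map active transactions to the log space they hold in tempdb via the transaction DMVs; a sketch:

```sql
-- Active transactions and the tempdb log space they hold,
-- heaviest consumers first.
SELECT st.session_id,
       dt.database_transaction_begin_time,
       dt.database_transaction_log_bytes_used,
       dt.database_transaction_log_bytes_reserved
FROM sys.dm_tran_database_transactions AS dt
JOIN sys.dm_tran_session_transactions  AS st
  ON st.transaction_id = dt.transaction_id
WHERE dt.database_id = 2   -- tempdb
ORDER BY dt.database_transaction_log_bytes_used DESC;
```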
June 16, 2014 at 8:53 am
I don't have a link to the article on the self-explosive growth but, IIRC, 1 GB initial size and 128 MB growth was the recipe for the fault... especially the 128 MB growth.
I'd change that to just 100 MB and bounce the service to see if that fixes the problem, because I can't think of anything else that would be causing it.
--Jeff Moden
Change is inevitable... Change for the better is not.
June 16, 2014 at 3:24 pm
I have seen these issues when very large/complex processing is going on on the database servers. If the code generates lots of temporary data, then tempdb will grow.
If the server is highly utilized, then try the following:
1. Move tempdb to a different drive so that you can spread the IO.
2. Split the data files based on the number of CPUs on the server.
3. Check the stored procedures that are creating the objects. Make sure the objects are disposed of once the process is done.
4. You can run a Profiler trace to look for long-running queries.
5. Restart the server to clean up the temporary objects (every time the server restarts, a new tempdb is created).
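As a hypothetical illustration of point 2, an extra data file can be added like this (file name, path, and sizes below are placeholders; the new file should match the size and growth of the existing ones):

```sql
-- Add a second, equally sized tempdb data file to spread allocation contention.
ALTER DATABASE tempdb
ADD FILE (NAME = tempdev2,                       -- placeholder logical name
          FILENAME = 'T:\TempDB\tempdev2.ndf',   -- placeholder path
          SIZE = 1024MB,
          FILEGROWTH = 100MB);
```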
Thank you,
Regards
Hema.,
June 17, 2014 at 1:47 am
Thanks. As for the post above, this isn't a problem with tempdb data, but with the tempdb log.
Quick update. Yesterday I monitored the log usage all day and found it did work as Microsoft designed, with the log filling up to 70% and then truncating. This was frustrating in a way, as it meant I wasn't reproducing what had been happening over the last few days.
However, I came into the office today and found the log had grown to 23 GB overnight and was 99% full. This time, though, I managed to find that an active transaction was running. Unfortunately, I couldn't identify the query being executed; sp_WhoIsActive just showed the query text as "BEGIN TRANSACTION". (I'll look now to see if there is a better way to get the query text from a session.) However, it does point me to the application that is causing this, and it does prove this isn't a bug or something weird happening in tempdb. I ended up killing the session; the log then truncated and went to 0.2% full, as expected.
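For anyone hitting the same situation, the oldest active transaction can also be found with:

```sql
-- Reports the SPID and start time of the oldest active transaction in tempdb,
-- i.e. what is preventing the log from truncating.
DBCC OPENTRAN ('tempdb');

-- Once identified (and confirmed safe to terminate), the session can be killed:
-- KILL 53;   -- hypothetical SPID
```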
So, I can now investigate what in the application is causing the log to grow so big.
Thanks all for your help.
June 17, 2014 at 2:30 am
Maddave (6/17/2014)
Thanks. For the post above, this isn't a problem with tempdb data, but the tempdb log.
They're inextricably linked.
However, I came into the office today and found the log had grown to 23GB overnight and was at 99% full. This time though I managed to find that an active transaction was actually running. Unfortunately, I couldn't identify what the query was being executed. Sp_Whoisactive just had the query text as "Begin Transaction". (I'll look now to see if there is a better way to get the query text from a session.)
Get the SPID from sp_who2 (or wherever) and then do DBCC INPUTBUFFER(yourspidnumbergoeshere)
June 17, 2014 at 2:37 am
Beatrix Kiddo (6/17/2014)
Maddave (6/17/2014)
Thanks. For the post above, this isn't a problem with tempdb data, but the tempdb log.
They're inextricably linked.
However, I came into the office today and found the log had grown to 23GB overnight and was at 99% full. This time though I managed to find that an active transaction was actually running. Unfortunately, I couldn't identify what the query was being executed. Sp_Whoisactive just had the query text as "Begin Transaction". (I'll look now to see if there is a better way to get the query text from a session.)
Get the SPID from sp_who2 (or wherever) and then do DBCC INPUTBUFFER(yourspidnumbergoeshere)
Thanks for the reply. Tempdb isn't growing, though; only the log of tempdb. Usage of the tempdb data files is very low, and there is lots of free space in the tempdb database.
Running DBCC INPUTBUFFER still only shows the query text as "begin transaction", unfortunately.
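When the input buffer only shows "begin transaction", the last batch the connection actually sent can sometimes be recovered from the connection DMV instead; a sketch (53 is a hypothetical session_id):

```sql
-- Pull the text of the most recent batch sent on the connection,
-- which may be more informative than the input buffer.
SELECT t.text
FROM sys.dm_exec_connections AS c
CROSS APPLY sys.dm_exec_sql_text(c.most_recent_sql_handle) AS t
WHERE c.session_id = 53;   -- hypothetical SPID
```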
June 17, 2014 at 7:54 am
You may already be doing this, but if you set an alert to email you any time there is an auto-growth, you may have a better shot at catching the offending process. Below is a script I use; it has saved my bacon with tempdb several times. I also set an alert for percent of log file used. Again, you may already be doing this, but I wanted to share. I have faced this problem several times.
-- Using msdb prevents the need for certificate signing the
-- activation procedure to execute sp_send_dbmail across
-- databases
USE [msdb];
GO
-- Drop the notification if it exists
IF EXISTS ( SELECT *
FROM sys.server_event_notifications
WHERE name = N'CaptureAutogrowEvents' )
BEGIN
DROP EVENT NOTIFICATION CaptureAutogrowEvents ON SERVER;
END
-- Drop the route if it exists
IF EXISTS ( SELECT *
FROM sys.routes
WHERE name = N'AutogrowEventRoute' )
BEGIN
DROP ROUTE AutogrowEventRoute;
END
-- Drop the service if it exists
IF EXISTS ( SELECT *
FROM sys.services
WHERE name = N'AutogrowEventService' )
BEGIN
DROP SERVICE AutogrowEventService;
END
-- Drop the queue if it exists
IF EXISTS ( SELECT *
FROM sys.service_queues
WHERE name = N'AutogrowEventQueue' )
BEGIN
DROP QUEUE AutogrowEventQueue;
END
-- Create a service broker queue to hold the events
CREATE QUEUE [AutogrowEventQueue]
WITH STATUS=ON;
GO
-- Create a service broker service to receive the events
CREATE SERVICE [AutogrowEventService]
ON QUEUE [AutogrowEventQueue] ([http://schemas.microsoft.com/SQL/Notifications/PostEventNotification]);
GO
-- Create a service broker route to the service
CREATE ROUTE [AutogrowEventRoute]
WITH SERVICE_NAME = 'AutogrowEventService',
ADDRESS = 'LOCAL';
GO
-- Create the event notification to capture the events
CREATE EVENT NOTIFICATION [CaptureAutogrowEvents]
ON SERVER
WITH FAN_IN
FOR DATA_FILE_AUTO_GROW, LOG_FILE_AUTO_GROW
TO SERVICE 'AutogrowEventService', 'current database';
GO
---------------------------------------------
-- Create the Activation Stored Procedure to Process the Queue
IF EXISTS ( SELECT *
FROM dbo.sysobjects
WHERE id = OBJECT_ID(N'[dbo].[ProcessAutogrowEvents]')
AND OBJECTPROPERTY(id, N'IsProcedure') = 1 )
DROP PROCEDURE [dbo].[ProcessAutogrowEvents];
GO
CREATE PROCEDURE [dbo].[ProcessAutogrowEvents]
WITH EXECUTE AS OWNER
AS
DECLARE @message_body XML;
DECLARE @message_sequence_number INT;
DECLARE @dialog UNIQUEIDENTIFIER;
DECLARE @email_message NVARCHAR(MAX);
WHILE ( 1 = 1 )
BEGIN
BEGIN TRANSACTION;
-- Receive the next available message FROM the queue
WAITFOR
(
RECEIVE TOP(1) -- just handle one message at a time
@message_body=CAST(message_body AS XML)
FROM dbo.AutogrowEventQueue
), TIMEOUT 1000; -- if queue empty for 1 sec, give up and go away
-- If we didn't get anything, bail out
IF ( @@ROWCOUNT = 0 )
BEGIN
ROLLBACK TRANSACTION;
BREAK;
END
DECLARE @EventType VARCHAR(128);
DECLARE @ServerName VARCHAR(128);
DECLARE @PostTime VARCHAR(128);
DECLARE @DatabaseName VARCHAR(128);
DECLARE @Duration VARCHAR(128);
DECLARE @GrowthPages INT;
SELECT @EventType = @message_body.value('(/EVENT_INSTANCE/EventType)[1]',
'varchar(128)') ,
@Duration = @message_body.value('(/EVENT_INSTANCE/Duration)[1]',
'varchar(128)') ,
@ServerName = @message_body.value('(/EVENT_INSTANCE/ServerName)[1]',
'varchar(128)') ,
@PostTime = CAST(@message_body.value('(/EVENT_INSTANCE/PostTime)[1]',
'datetime') AS VARCHAR) ,
@DatabaseName = @message_body.value('(/EVENT_INSTANCE/DatabaseName)[1]',
'varchar(128)') ,
@GrowthPages = @message_body.value('(/EVENT_INSTANCE/IntegerData)[1]',
'int');
-- Generate formatted email message
SELECT @email_message = 'The following autogrow event occurred:'
+ CHAR(10) + CAST('ServerName: ' AS CHAR(25))
+ @ServerName + CHAR(10) + CAST('PostTime: ' AS CHAR(25))
+ @PostTime + CHAR(10)
+ CAST('DatabaseName: ' AS CHAR(25)) + @DatabaseName
+ CHAR(10) + CAST('Duration: ' AS CHAR(25)) + @Duration
+ CHAR(10) + CAST('GrowthSize_KB: ' AS CHAR(25))
+ CAST(( @GrowthPages * 8 ) AS VARCHAR(20));
-- Send email using Database Mail
EXEC msdb.dbo.sp_send_dbmail @profile_name = 'SQL Notify', -- your defined email profile
@recipients = 'itdba@mycompany.com', -- your email
@subject = 'AutoGrow Event Notification',
@body = @email_message;
-- Commit the transaction. At any point before this, we could roll
-- back. The received message would be back on the queue and the
-- response wouldn't be sent.
COMMIT TRANSACTION;
END
GO
---------------------------------------------
-- Alter the queue to use the activation procedure
ALTER QUEUE [AutogrowEventQueue]
WITH STATUS=ON,
ACTIVATION
(STATUS=ON,
PROCEDURE_NAME = [ProcessAutogrowEvents],
MAX_QUEUE_READERS = 1,
EXECUTE AS OWNER);
GO
June 17, 2014 at 8:06 am
Here's an article showing what I do to monitor for what is causing that growth.
http://www.sqlservercentral.com/articles/Log+growth/69476/
Jason...AKA CirqueDeSQLeil
_______________________________________________
I have given a name to my pain...MCM SQL Server, MVP
SQL RNNR
Posting Performance Based Questions - Gail Shaw
Learn Extended Events
June 18, 2014 at 2:37 am
We've switched the offending application back on and can't see any issues at the minute with the log increasing.
However, I've been running a SQL trace and can see that the application seems to end every bit of work it does with a "BEGIN TRANSACTION" statement. It explicitly opens a transaction, does some work, calls a commit, then finishes by executing BEGIN TRANSACTION! No further statements are ever issued after this, so no locks exist on any objects, but no commit is ever issued on the session either. When the application makes a new call to the database, it does so on a new session.
What are the issues with this type of behaviour? I assume that, eventually, the server will fill up with connections from this app user, each with an open transaction doing nothing?
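One way to spot that pattern from the server side is to look for sessions that are idle but still hold an open transaction; a sketch (open_transaction_count is available from SQL Server 2012):

```sql
-- Sessions that are sleeping yet still have an open transaction,
-- i.e. the "BEGIN TRANSACTION and walk away" pattern described above.
SELECT s.session_id,
       s.login_name,
       s.program_name,
       s.last_request_end_time,
       s.open_transaction_count
FROM sys.dm_exec_sessions AS s
WHERE s.open_transaction_count > 0
  AND s.status = 'sleeping';
```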