December 11, 2019 at 1:58 pm
Hello guys, I have been working with SQL Server 2017 for about a year and there are some (many) things I am still searching for solutions to, but...
We have 2 SQL 2017 servers in an AG. One of the databases is about 70 GB, and every 10-12 days the log file grows to 500+ GB, so I have to run the following script to shrink it:
BACKUP LOG name TO DISK = 'NUL:' WITH NO_CHECKSUM, CONTINUE_AFTER_ERROR;
USE name;
DBCC LOGINFO;
DBCC SHRINKFILE (name_log, EMPTYFILE);
After that the log file drops to 15-16 GB with 0 whitespace,
and maybe the next day the same log file is 20+ GB (all whitespace).
What I notice is that the state of the following query is the same all the time:
select log_reuse_wait_desc from sys.databases where name='name'
state 1 - LOG_BACKUP
and I can also see the growth with the following script:
DECLARE @filename NVARCHAR(1000);
DECLARE @bc INT;
DECLARE @ec INT;
DECLARE @bfn VARCHAR(1000);
DECLARE @efn VARCHAR(10);
SELECT @filename = CAST(value AS NVARCHAR(1000))
FROM ::fn_trace_getinfo(DEFAULT)
WHERE traceid = 1 AND property = 2;
SET @filename = REVERSE(@filename);
SET @bc = CHARINDEX('.',@filename);
SET @ec = CHARINDEX('_',@filename)+1;
SET @efn = REVERSE(SUBSTRING(@filename,1,@bc));
SET @bfn = REVERSE(SUBSTRING(@filename,@ec,LEN(@filename)));
SET @filename = @bfn + @efn
SELECT
ftg.StartTime
,te.name AS EventName
,DB_NAME(ftg.databaseid) AS DatabaseName
,ftg.Filename
,(ftg.IntegerData*8)/1024.0 AS GrowthMB
,(ftg.duration/1000)AS DurMS
FROM ::fn_trace_gettable(@filename, DEFAULT) AS ftg
INNER JOIN sys.trace_events AS te ON ftg.EventClass = te.trace_event_id
WHERE (ftg.EventClass = 92 -- Data File Auto-grow
OR ftg.EventClass = 93) -- Log File Auto-grow
ORDER BY 1 desc
I see that the specific database has GrowthMB entries every 2 hours without any open transaction going on.
The log file is set to autogrow (with default values).
Any help clarifying why the log file gets so big and has so much whitespace?
Many thanks!
December 11, 2019 at 3:46 pm
Have you configured transaction log backups? How frequently are you running them?
Muthukkumaran Kaliyamoorthy
https://www.sqlserverblogforum.com/
December 11, 2019 at 3:48 pm
Yes, we are running 1 full backup every night, 4 differentials during the day, and 8 transaction log backups,
but the log is not truncated, only when using the script above.
Thanks!!!!
December 11, 2019 at 4:23 pm
select log_reuse_wait_desc from sys.databases where name='name'
state 1 - LOG_BACKUP
If the above always shows LOG_BACKUP, you need to increase the t-log backup frequency, e.g. to every 15 minutes.
Also review all your agent jobs and make sure the reindex job is in good shape.
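A minimal scheduled log backup, assuming a database named [name] and a backup path of your choosing (both placeholders), could look like this sketch:

```sql
-- Sketch: database name and path are placeholders, adjust to your environment.
-- Schedule this in an Agent job every 15 minutes; each run truncates the inactive
-- portion of the log (the file itself does not shrink, but the space becomes reusable).
DECLARE @path NVARCHAR(260) =
    N'D:\LogBackups\name_' + FORMAT(GETDATE(), 'yyyyMMdd_HHmmss') + N'.trn';

BACKUP LOG [name] TO DISK = @path WITH CHECKSUM, COMPRESSION;
```

Unlike the NUL: trick, these backup files keep the log chain intact, so a point-in-time restore is still possible.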
Muthukkumaran Kaliyamoorthy
https://www.sqlserverblogforum.com/
December 11, 2019 at 4:27 pm
Thank you for your reply,
I have a job which checks for index fragmentation every day, but the results are pretty low, something like fewer than 700 pages (with fragmentation levels between 80-95%), and the indexes are rebuilt every day (just in case).
What exactly should I check in the reindexing process, and what can I correct?
Thanks again.
December 11, 2019 at 4:31 pm
Is the growth linear, or is it all of a sudden one day?
December 11, 2019 at 4:35 pm
The growth is linear... 35-45 GB of log every day.
December 11, 2019 at 5:09 pm
It should not be growing every day with log backups unless you have an increasing workload. I suspect you're missing something. Perhaps your log backups are not running as you think.
December 11, 2019 at 5:14 pm
Well, the program which relies on the SQL Server gathers email alerts from 100+ ships every day and stores them in the database. The data stored in the DB on a daily basis is at most 500 MB to 1 GB, which I don't think can grow the log file to 35+ GB.
Could the AG be causing this problem or not?
95% of the log file is whitespace.
December 11, 2019 at 5:16 pm
So a couple thoughts on this.
First, STOP doing the backup log to NUL: it breaks your log chain, and you will not be able to restore past that point if you ever need to recover.
Second, yes, step up the frequency of your log backups; 8 a day is not very frequent. The right frequency depends largely on your workload and how it impacts the log (lots of reads and not a lot of writes/updates/deletes means you can get away with less frequent log backups).
Third, it's time to do some "right-sizing" of your log and determine what is responsible for its growth. If you've got data loads running overnight, your log is growing to accommodate those transactions. More frequent log backups can help some with that, depending on the loads. I'd lean towards the following steps to clear this up.
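A right-sizing pass along those lines might look like this sketch (the 20 GB target and 4 GB growth increment are assumptions, size them to your actual workload):

```sql
-- After a log backup has cleared the active portion, shrink to a sensible size
-- rather than EMPTYFILE, then fix autogrowth so the file grows in predictable steps.
USE [name];
DBCC SHRINKFILE (name_log, 20480);      -- target size in MB (assumption: 20 GB)

ALTER DATABASE [name]
    MODIFY FILE (NAME = name_log,
                 SIZE = 20480MB,
                 FILEGROWTH = 4096MB);  -- fixed-size growth instead of percent defaults
```

A fixed growth increment avoids the ever-larger jumps that percentage-based autogrowth produces on a big log file.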
December 11, 2019 at 5:16 pm
How do you know this is all whitespace? The log is reused and should be rewritten over time. There is a circular nature to this: SQL Server writes through the file and then wraps back to the beginning, provided you have run log backups. Those allow the log to be overwritten.
Again, I can't see your system. The AG could prevent the log from being reused if a replica was unavailable; the same goes for replication or mirroring. Something is wrong with your setup or your backups.
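If you want to rule the AG out, a query like this against the primary shows whether a secondary is holding log back (the columns come from sys.dm_hadr_database_replica_states):

```sql
-- A large log_send_queue_size on a secondary means the primary cannot
-- truncate its log until that replica catches up.
SELECT ar.replica_server_name,
       DB_NAME(drs.database_id)         AS database_name,
       drs.synchronization_state_desc,
       drs.log_send_queue_size,   -- KB of log not yet sent to the secondary
       drs.redo_queue_size        -- KB of log not yet redone on the secondary
FROM sys.dm_hadr_database_replica_states AS drs
JOIN sys.availability_replicas AS ar
     ON drs.replica_id = ar.replica_id;
```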
December 11, 2019 at 5:32 pm
Would creating a separate drive for only log backups, and taking transaction log backups 8 times a day, help?
BACKUP LOG name TO DISK = 'path'
Is this going to truncate the log file?
December 11, 2019 at 5:32 pm
Well, I might be mistaken, but if I run
select * from sys.dm_db_log_space_usage;
the total log size in bytes is MUCH bigger than the used log space in bytes (this is why I assumed it is whitespace).
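For what it's worth, the same DMV exposes a used percentage directly, which makes the comparison easier to read:

```sql
-- Runs in the context of the current database.
SELECT total_log_size_in_bytes / 1048576.0 AS TotalLogMB,
       used_log_space_in_bytes  / 1048576.0 AS UsedLogMB,
       used_log_space_in_percent            AS UsedPct
FROM sys.dm_db_log_space_usage;
```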
December 11, 2019 at 5:40 pm
I suggest reading up on transaction log management.
Muthukkumaran Kaliyamoorthy
https://www.sqlserverblogforum.com/