October 22, 2008 at 7:52 am
Hi,
I just realised that one of my production databases has grown a huge amount in size in two days. I ran my SQL script to check the table sizes for that database and got the following output - this is the biggest table, taking the most space in the database.
no of rows / data space / index space:
[dbo].[Employee_Shifts]   17759155   21151072.00   23476368.00
total size / percent of db / dbsize:
44627440.00   72.450859492143   61596840.00
I have 10 indexes on this table, all with a 50% fill factor.
My questions are:
1. Why is the DB size growing so much?
2. Does it have anything to do with the fill factor? If so, shall I change it to 90%?
3. The database is in the Full recovery model at the moment; shall I change it to Simple so the transaction log looks after itself?
4. Is there any way to check why it is growing so much?
Thanks.
October 22, 2008 at 8:08 am
The table size has nothing to do with the transaction log. If the transaction log is not huge, then it's not a transaction log problem. You should not consider setting a production DB to simple recovery unless the loss of an entire day's data is acceptable (depending on how often full backups run).
That table has 17 million rows in. That's a lot of data.
50% fill factor is probably too low. It means half the space listed there isn't used. Run a showcontig and you'll get more info on how full the index pages are.
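For reference, a minimal sketch of the check described above (the table name is taken from the post; on SQL Server 2005 and later the DMV form is usually preferred over SHOWCONTIG):

```sql
-- SQL 2000/2005: report fragmentation and page density for the table's indexes
DBCC SHOWCONTIG ('dbo.Employee_Shifts') WITH ALL_INDEXES;

-- SQL 2005 and later: the DMV equivalent
SELECT i.name, s.avg_page_space_used_in_percent, s.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.Employee_Shifts'),
                                    NULL, NULL, 'SAMPLED') AS s
JOIN sys.indexes AS i
  ON i.object_id = s.object_id AND i.index_id = s.index_id;
```

avg_page_space_used_in_percent corresponds to the "Avg. Page Density" figure from SHOWCONTIG; with a 50% fill factor it will hover around 50-55%.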
How much data has been added to that table recently?
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
October 22, 2008 at 8:20 am
Hello Gila,
The table size has nothing to do with the transaction log. If the transaction log is not huge, then it's not a transaction log problem.
Transaction Log is 80GB so far, it was 40GB yesterday.
You should not consider setting a production DB to simple recovery unless the loss of an entire day's data is acceptable (depending how often full backups run)
Every night, FULL Backup by Maintenance Plan.
That table has 17 million rows in. That's a lot of data.
50% fill factor is probably too low. It means half the space listed there isn't used. Run a showcontig and you'll get more info on how full the index pages are.
- Pages Scanned................................: 2665063
- Extents Scanned..............................: 335130
- Extent Switches..............................: 381253
- Avg. Pages per Extent........................: 8.0
- Scan Density [Best Count:Actual Count].......: 87.38% [333133:381254]
- Logical Scan Fragmentation ..................: 1.12%
- Extent Scan Fragmentation ...................: 0.24%
- Avg. Bytes Free per Page.....................: 3784.9
- Avg. Page Density (full).....................: 53.24%
How much data has been added to that table recently?
350000 today.
Thanks.
October 22, 2008 at 8:31 am
Leo (10/22/2008)
Hello Gila,The table size has nothing to do with the transaction log. If the transaction log is not huge, then it's not a transaction log problem.
Transaction Log is 80GB so far, it was 40GB yesterday.
How often are you backing up the transaction log?
- Avg. Bytes Free per Page.....................: 3784.9
- Avg. Page Density (full).....................: 53.24%
The page density looks a little low. Maybe consider upping the fill factor to about 75% and watch to make sure that the index doesn't fragment too fast. That will decrease the wasted space and also reduce the amount of IOs needed to read this table.
350000 today.
Are you adding 350 000 rows every day? Are there any plans to archive rows that you don't need any longer?
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
October 22, 2008 at 8:58 am
The page density looks a little low. Maybe consider upping the fill factor to about 75% and watch to make sure that the index doesn't fragment too fast. That will decrease the wasted space and also reduce the amount of IOs needed to read this table.
I will change it to 75%.
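A sketch of how that change could be applied (SQL 2005 syntax; the index name here is illustrative, not from the thread):

```sql
-- SQL 2005+: rebuild one index with the new fill factor
ALTER INDEX IX_Employee_Shifts_ShiftDate
  ON dbo.Employee_Shifts
  REBUILD WITH (FILLFACTOR = 75);

-- SQL 2000 equivalent: '' rebuilds all indexes on the table at the given fill factor
DBCC DBREINDEX ('dbo.Employee_Shifts', '', 75);
```

Rebuilding all 10 indexes on a 17-million-row table is a heavy operation, so it would normally be scheduled for a maintenance window.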
How often are you backing up the transaction log?
Every night.
Are you adding 350 000 rows every day? Are there any plans to archive rows that you don't need any longer
Yes. Basically we can't archive, because we need a full month of data in one table, the code is already written around that, and I can't change it because I'm not involved in the application software.
October 22, 2008 at 10:07 am
Leo (10/22/2008)
I will change it to 75%.
How often are you backing up the transaction log?
Every night.
Separate from the full backup?
Didn't we have a discussion on the transaction log in another thread just a couple days ago? Once a day is not often enough for the tran log. Especially not with the kind of volumes of data changes you're describing here. Once an hour is probably closer to what you need. Besides, if you only back the log up once a day, you can potentially lose a full day's data if the server's drives fail.
Is that acceptable? If not, how much data loss is considered acceptable in the case of a total drive failure of the server?
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
October 22, 2008 at 10:42 am
I agree with Gail. You really want to back up the log every hour or so. I bet if you told your boss that if we had an issue at 6pm, we'd lose all data since last night at midnight, he wouldn't be happy.
Run more frequent transaction log backups. That will give you an idea of the backup space and log size needed.
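A minimal sketch of the log backup statement itself (database name and path are assumptions; on these versions this would typically be scheduled hourly via a SQL Agent job or maintenance plan):

```sql
-- Back up the transaction log; run on a schedule (e.g. hourly) via SQL Agent.
-- Each log backup truncates the inactive portion of the log, keeping it from
-- growing the way the 40GB -> 80GB log in this thread did.
BACKUP LOG MyProductionDB
TO DISK = 'E:\SQLBackups\MyProductionDB_log.trn';
```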
Be sure that your backup files are not with your data files. Bad idea if that disk fails.
October 22, 2008 at 1:59 pm
Hi,
I switched my database from Full to Simple recovery. I don't need to back up the transaction log at the moment. I shrank the database, and a rebuild will run after the shrink so that all the fragmented indexes are rebuilt. Then I will do a backup, which is already set up in the Maintenance Plan.
I might change it back to the Full model when the transaction log needs to be backed up.
By the way, Gila... we had that discussion the other night. Following your suggestion I have now changed some of my databases to SIMPLE mode. I hope I don't need to worry about the log space anymore.
Thanks.
October 22, 2008 at 2:51 pm
Do you understand that in SIMPLE mode, you cannot recover your database except to the last FULL backup ?
So, for example:
Thursday, 10 pm: Full Backup
Friday, 9 am 100,000 records entered
Friday 10 am 100,000 records entered
Friday 11 am 100,000 records entered
Friday 2 pm Other very important processes run
Friday 6 pm - End of Day processing runs
Friday 9:45 pm - system crashes, or user corrupts data, or upload fails, or developer deletes a table
"Sorry boss I set the database to SIMPLE, so we lose everything from Friday and start back over with Thursday night's backup." OK ?
With FULL recovery AND transaction log backups every hour, you would say
"No problem, at least we can restore back to Friday at 9 pm" ...... maybe 9:44:59 if you can take a transaction log backup after the problem occurred.
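That final "transaction log backup after the problem occurred" is a tail-log backup. A hedged sketch (database name and path assumed), which works even if the data files themselves are damaged:

```sql
-- Tail-log backup: capture the log records written since the last log backup,
-- even when the database's data files are no longer accessible.
BACKUP LOG MyProductionDB
TO DISK = 'E:\SQLBackups\MyProductionDB_tail.trn'
WITH NO_TRUNCATE;
```

Restoring the full backup, the chain of hourly log backups, and then this tail-log backup is what gets you back to 9:44:59 rather than 9 pm.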
October 22, 2008 at 3:15 pm
Leo (10/22/2008)
Following your suggestion I have now changed some of my databases to SIMPLE mode. I hope I don't need to worry about the log space anymore.
Nope. I suggested that you set up regular log backups. You should only use Simple recovery if the data in the database is not important, if the loss of several hours of data is of no concern. Is that the case here? Is the loss of up to 350000 records acceptable?
Typically the only databases that should be in simple recovery are development and test environments and in situations where you can completely recreate the entire database with little effort (a replicated database, or a copy of a mainframe, etc)
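To check which model a database is actually in, and to switch back, a sketch (database name assumed; sys.databases is SQL 2005+):

```sql
-- Check the current recovery model
SELECT name, recovery_model_desc
FROM sys.databases
WHERE name = 'MyProductionDB';

-- Switch back to full recovery. A full (or differential) backup must be taken
-- afterwards to restart the log backup chain - log backups taken before that
-- point cannot be used.
ALTER DATABASE MyProductionDB SET RECOVERY FULL;
```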
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
October 24, 2008 at 2:54 pm
Leo:
Simple recovery can only be used if you can afford to lose all the data changes since the last backup..:w00t:
Simple recovery does not require Transaction Log backups..:)
If you want to do transaction log backups, then do them in a useful way. Running one just once a day achieves little beyond what the nightly full backup already gives you, so increase the frequency of the log backups to keep your data-loss exposure small...:)
Thanks!!
The_SQL_DBA
MCTS
"Quality is never an accident; it is always the result of high intention, sincere effort, intelligent direction and skillful execution; it represents the wise choice of many alternatives."