May 16, 2017 at 3:21 am
Hi,
It's a live server with 50+ users and other processes.
Issue: the database has grown from 180 GB to 340 GB in 3 months. We want to bring it back to around 200 GB.
Reason: change tracking was enabled by the application. One of the tracked tables gets so many changes that it has pushed billions of rows into its change tracking side table.
CT_table              | CT_rows       | CT_reserved_MB
change_tracking_17018 | 4,233,701,178 | 214,483.80
syscommittab          | 105,957,758   | 5,484.66
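For anyone wanting to reproduce those numbers: the change tracking side tables are internal tables, so a query along these lines should produce them. This is a sketch; I'm assuming the internal type description is 'CHANGE_TRACKING' on this version.

```sql
-- Size of the change tracking side tables (change_tracking_<object_id>).
-- Assumption: internal_type_desc = 'CHANGE_TRACKING' on this build.
SELECT  it.name                                  AS CT_table,
        SUM(ps.row_count)                        AS CT_rows,
        SUM(ps.reserved_page_count) * 8 / 1024.0 AS CT_reserved_MB
FROM    sys.internal_tables AS it
JOIN    sys.dm_db_partition_stats AS ps
        ON ps.object_id = it.object_id
WHERE   it.internal_type_desc = 'CHANGE_TRACKING'
GROUP BY it.name;
```

sys.syscommittab won't show up under that filter; its size can be pulled the same way from sys.dm_db_partition_stats using OBJECT_ID('sys.syscommittab').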
Work done till now:
1. Switched off table-level change tracking. All the tracking tables are gone except sys.syscommittab.
2. Database-level change tracking is still on; to switch it off we would need to put the database into single-user mode, and it has lots of other events and queues running.
3. Shrank the database files WITH TRUNCATEONLY, which did not release any unused space.
Help Needed
Because the SAN is 90% full, we have to shrink the databases.
Could someone please provide the proper steps to get the database down to 200 GB without causing any other issues?
I am being forced to shrink the database. I would really like to avoid it, but I have no other option.
Harsha
May 16, 2017 at 5:45 am
There's not really a whole lot you have to do. Remove the data so that you're below 200 GB in size, then use the SHRINKFILE command (it gives you more granular control over the process). Since you're only talking about a one-off shrink of the database, it's not the end of the world. Rebuild the indexes when you're done.
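As a rough sketch of that sequence (the logical file name and table name below are placeholders, not from the thread):

```sql
-- 1. After the data has been removed, shrink the data file in steps.
--    'MyDB_Data' is a hypothetical logical file name; list yours first:
SELECT name, size / 128 AS size_mb FROM sys.database_files;

DBCC SHRINKFILE (N'MyDB_Data', 250000);  -- intermediate target, in MB
DBCC SHRINKFILE (N'MyDB_Data', 200000);  -- final target, roughly 200 GB

-- 2. Shrinking fragments indexes badly, so rebuild them afterwards,
--    e.g. per table (dbo.SomeBigTable is a placeholder):
ALTER INDEX ALL ON dbo.SomeBigTable REBUILD;
```

Shrinking in two or three steps rather than one big jump tends to be less disruptive and easier to stop if it runs long.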
However, if you are operating at the edge of the size of the SAN, with no head room for growth, I'd strongly recommend getting onto a bigger SAN. Data pretty much always grows. It very seldom stays at one size. If you don't address the hardware issue, then you will be looking to shrink the database again (and again). This is when you have a severe problem, not the one time shrink.
Oh, I would be cautious when deleting the data that you do it in batches to avoid bloating your log (which you'll then need to shrink). This is especially important since you say you have no room on the SAN.
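A minimal sketch of that batched-delete pattern (the table name and retention filter are hypothetical, to be replaced with whatever the business signs off on):

```sql
-- Delete in small batches so each transaction is short and the log
-- can truncate at the next log backup.
DECLARE @batch int = 50000;

WHILE 1 = 1
BEGIN
    DELETE TOP (@batch)
    FROM dbo.SomeBigTable            -- placeholder table
    WHERE CreatedDate < '20140101';  -- placeholder retention filter

    IF @@ROWCOUNT = 0 BREAK;

    -- Optionally take a log backup between batches, or pause briefly:
    -- WAITFOR DELAY '00:00:05';
END;
```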
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
May 16, 2017 at 5:57 am
As Grant explained, you can't shrink a database without first deleting data. If all of your data is "required" (perhaps for audit, historic purposes) then you have little or no data available to delete.
If you are going to be deleting data, who makes the decision on what goes? Are you the right person to decide? You might come to the conclusion that any data in your "transactions" table from before 2014 can be deleted, but will the business be happy with that? Make sure that anything that is going to be deleted has the consent of the upper management (unless you're the sole interested party). Document it as well so that if, for whatever reason, someone asks you for data from 2013 in 4 months' time, you can tell them that's no longer available, due to a management decision. What data should be "lost" is a management decision, and shouldn't be taken by a DBA.
If no one is happy to accept that data needs to be deleted, then, again like Grant said, you need to increase your storage media; but you should be considering that anyway. If the size of the database has increased by ~160 GB in 3 months, what's stopping it increasing by another 160 GB in the next three months? You're probably going to see the same problem in a few months' time, and then even more data will need to go. It'll be an endless cycle.
Thom~
Excuse my typos and sometimes awful grammar. My fingers work faster than my brain does.
Larnu.uk
May 16, 2017 at 6:04 am
Have you considered data compression?
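For what it's worth, you can estimate the savings before committing to it. A sketch (object names are placeholders):

```sql
-- Estimate PAGE compression savings for a table before applying it.
EXEC sys.sp_estimate_data_compression_savings
     @schema_name      = N'dbo',
     @object_name      = N'SomeBigTable',  -- placeholder
     @index_id         = NULL,
     @partition_number = NULL,
     @data_compression = N'PAGE';

-- If the numbers look good, rebuild with compression:
ALTER TABLE dbo.SomeBigTable REBUILD WITH (DATA_COMPRESSION = PAGE);
```

Note that the rebuild itself needs working space, which matters on a nearly full SAN, and data compression requires an edition that supports it.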
---------------------------------------------------------------
This thing is addressing problems that don't exist. It's solutionism at its worst. We are dumbing down machines that are inherently superior. - Gilfoyle
May 16, 2017 at 8:25 am
Hi,
Thank you all for the replies. 🙂
1. The database size increased by 160 GB because of the change tracking tables. Change tracking has been disabled.
2. The change tracking data was for an element of the application that is not required by any user in our system.
I have tested the following in UAT with a backup copy of the live database.
1. Disabled change tracking.
2. Shrank the database to 200 GB, which took almost 2 hours.
3. Deleted the millions of rows in the syscommittab table in 1 hour. (While this was going on, I was taking a transaction log backup every 15 minutes.)
4. I will shrink the database files again to release the unused space.
5. I will run a re-index job to rebuild the indexes on all the tables.
Please comment if I need to do anything else.
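One sanity check that might be worth adding to the list: confirm change tracking really is off at both levels before the final shrink. The catalog views below should make that quick.

```sql
-- Tables that still have change tracking enabled
-- (should return no rows once it's fully disabled):
SELECT OBJECT_NAME(object_id) AS table_name
FROM sys.change_tracking_tables;

-- Databases with change tracking enabled at the database level:
SELECT DB_NAME(database_id) AS db_name,
       retention_period,
       retention_period_units_desc
FROM sys.change_tracking_databases;
```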
Thanks,
Harsha
May 16, 2017 at 10:46 am
HBhagat - Tuesday, May 16, 2017 8:25 AM
Deleted the millions of rows in the syscommittab table in 1 hour. (While this was going on, I was taking a transaction log backup every 15 minutes.)
This might be better chunked up to avoid the log growing. Even if you're doing log backups, if it's all one transaction, the log won't truncate until that transaction is completed.
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
May 17, 2017 at 6:10 am
Grant Fritchey - Tuesday, May 16, 2017 10:46 AM
This might be better chunked up to avoid the log growing. Even if you're doing log backups, if it's all one transaction, the log won't truncate until that transaction is completed.

Yes, but we have 45 GB of space allocated for the trn backup drive. 🙂
May 17, 2017 at 6:11 am
HBhagat - Wednesday, May 17, 2017 6:10 AM
Yes, but we have 45 GB of space allocated for the trn backup drive. 🙂
450 GB ... not 45 🙁