April 4, 2018 at 2:09 pm
We have one table in our database that is about 220 GB in size. Since we are running out of space, we decided to archive some of the data, but even after deleting old records I see that the available space on that drive hasn't changed. I have deleted almost 10 million records. How do I reclaim that space? When I check how much space this table is using after deleting data, I see a different value each time.
Query to check the size of the table:

SELECT
    s.Name AS SchemaName,
    t.NAME AS TableName,
    p.rows AS RowCounts,
    SUM(a.total_pages) * 8 AS TotalSpaceKB,
    CAST(ROUND(((SUM(a.total_pages) * 8) / 1024.00), 2) AS NUMERIC(36, 2)) AS TotalSpaceMB,
    SUM(a.used_pages) * 8 AS UsedSpaceKB,
    CAST(ROUND(((SUM(a.used_pages) * 8) / 1024.00), 2) AS NUMERIC(36, 2)) AS UsedSpaceMB,
    (SUM(a.total_pages) - SUM(a.used_pages)) * 8 AS UnusedSpaceKB,
    CAST(ROUND(((SUM(a.total_pages) - SUM(a.used_pages)) * 8) / 1024.00, 2) AS NUMERIC(36, 2)) AS UnusedSpaceMB
FROM
    sys.tables t
    INNER JOIN sys.indexes i ON t.object_id = i.object_id
    INNER JOIN sys.partitions p ON i.object_id = p.object_id AND i.index_id = p.index_id
    INNER JOIN sys.allocation_units a ON p.partition_id = a.container_id
    LEFT OUTER JOIN sys.schemas s ON t.schema_id = s.schema_id
WHERE
    t.NAME NOT LIKE 'dt%'
    AND t.is_ms_shipped = 0
    AND i.object_id > 255
    AND t.name = 'EmailActivity'
GROUP BY
    t.Name, s.Name, p.Rows
ORDER BY
    s.name, t.Name
April 4, 2018 at 2:37 pm
You won't see the available space increase after deleting data until a shrink occurs. Most experts agree that as long as this is a one-time shrink, and the data is not expected to grow back to today's size, it is OK to shrink the data file. If you go that route, you will want to rebuild all your indexes afterwards because of the fragmentation shrinking causes. Then, when the data grows again and there is no free space in the file for it to grow into, you may suffer a performance hit. So it is advisable to expand the data file accordingly to accommodate expected growth.
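The steps above can be sketched in T-SQL. This is only a sketch: the logical file name (MyDB_Data), the shrink target, and the table name are assumptions you would replace with your own values (check sys.database_files for the real file name).

```sql
-- 1. See how much free space each file currently has.
SELECT name,
       size / 128                                                   AS FileSizeMB,
       CAST(FILEPROPERTY(name, 'SpaceUsed') AS int) / 128           AS SpaceUsedMB,
       (size - CAST(FILEPROPERTY(name, 'SpaceUsed') AS int)) / 128  AS FreeSpaceMB
FROM sys.database_files;

-- 2. One-time shrink to a target size in MB (leave headroom for
--    growth; don't shrink to the absolute minimum).
DBCC SHRINKFILE (N'MyDB_Data', 150000);  -- target ~150 GB, for example

-- 3. Rebuild the indexes afterwards to undo the fragmentation the
--    shrink introduces.
ALTER INDEX ALL ON dbo.EmailActivity REBUILD;
```

Run step 1 before and after the DELETEs as well; it shows the free space accumulating inside the file even though the file size on disk has not changed.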
April 4, 2018 at 3:52 pm
RVSC48 - Wednesday, April 4, 2018 2:37 PM
If "Instant File Initialization" is enabled, the "performance hit" will be trivial.
The big thing here (no pun intended) is: how big is the largest index? That's important because any index rebuild over 128 extents (that's just 8 MB) causes the new index to be created and committed before the old one is dropped, and that can leave very large and totally unnecessary free space in the MDF file. There is a way around that, but we need the OP to tell us what the size of the largest index actually is, first.
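One way to answer that question is to size each index on the table from sys.dm_db_partition_stats. A sketch, assuming the table from the OP's query (dbo.EmailActivity); adjust the schema and name as needed:

```sql
-- Size of each index on the table, largest first.
SELECT i.name                         AS IndexName,
       i.index_id,
       SUM(ps.used_page_count) * 8    AS UsedKB,   -- 8 KB per page
       SUM(ps.used_page_count) / 128  AS UsedMB
FROM sys.indexes i
JOIN sys.dm_db_partition_stats ps
  ON ps.object_id = i.object_id
 AND ps.index_id  = i.index_id
WHERE i.object_id = OBJECT_ID(N'dbo.EmailActivity')
GROUP BY i.name, i.index_id
ORDER BY UsedKB DESC;
```

Index_id 0 or 1 in the output is the heap or clustered index (the table data itself); anything above 1 is a nonclustered index.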
--Jeff Moden
Change is inevitable... Change for the better is not.
April 9, 2018 at 8:32 am
Jeff Moden - Wednesday, April 4, 2018 3:52 PM
+1 to what Jeff said. Keep a pad of space for maintenance, and also for data growth. I like to keep 3-4 months of pad, just in case.
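That pad can be pre-allocated explicitly instead of relying on autogrowth kicking in at a bad time. A sketch, with a hypothetical database, logical file name, and sizes:

```sql
-- Pre-size the data file for a few months of expected growth and set a
-- fixed growth increment (names and numbers are examples only).
ALTER DATABASE MyDB
MODIFY FILE (NAME = N'MyDB_Data', SIZE = 180000MB, FILEGROWTH = 1024MB);
```

With Instant File Initialization enabled, growing the data file like this is nearly instantaneous; only log files still have to be zero-initialized.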