September 12, 2019 at 10:52 am
Hi folks, a general question here about better admin of a database. I have a number of tables, each with approx. 100 rows, that clear down and re-populate every 5 minutes; however, they're growing in size, up close to 10 GB even though the data is small. I deleted one of these tables, but the database is still the same size; I removed all rows and the database is still the same size. How can I clear down the space? For example, I now have one table with no rows in it, but the table is still over 10 GB in size. Help please!!
September 12, 2019 at 12:14 pm
Hi,
your database won't get smaller just by deleting rows.
You have to shrink your database if you need the space back on your drive. But if you shrink your database, you may get some fragmentation in your mdf files.
You can shrink your database via the GUI: database => properties => files => just change the file size.
Or you can use a script, but be sure the file name and target size fit your environment. Note that ALTER DATABASE ... MODIFY FILE can only grow a file; to release space back to the operating system you need DBCC SHRINKFILE:
USE [your_database]
GO
-- target size is in MB
DBCC SHRINKFILE (N'your_database', target_size_MB)
GO
Kind regards,
Andreas
September 12, 2019 at 12:54 pm
Hi folks, a general question here about better admin of a database. I have a number of tables, each with approx. 100 rows, that clear down and re-populate every 5 minutes; however, they're growing in size, up close to 10 GB even though the data is small. I deleted one of these tables, but the database is still the same size; I removed all rows and the database is still the same size. How can I clear down the space? For example, I now have one table with no rows in it, but the table is still over 10 GB in size. Help please!!
I've never seen such a thing before. The only time I've seen something even close to that is if "straggler" rows are left behind where there's only a row or two left on the pages that have been deleted from. You've definitely piqued my interest and I'd like to try to duplicate the problem on my machine so, if you don't mind, I have a couple of questions...
--Jeff Moden
Change is inevitable... Change for the better is not.
September 12, 2019 at 1:09 pm
Hi, the tables are updated using an SSIS package. Basically, what happens is a SQL command runs (DELETE FROM table), then a connection is made to an external ODBC database (SELECT * FROM table) and the results are mapped to the SQL table.
September 12, 2019 at 1:32 pm
Hi, the tables are updated using an SSIS package. Basically, what happens is a SQL command runs (DELETE FROM table), then a connection is made to an external ODBC database (SELECT * FROM table) and the results are mapped to the SQL table.
Is there any form of WHERE clause on the "delete from table"?
Just ran a report to show disk usage by table. You can see very few rows, but as they keep updating, the size seems to grow.
Counting from the top of the graphic you included, which table is an example of the problem? Also, what was the condition of the table when you took that "picture"? Was it right after the DELETE?
--Jeff Moden
Change is inevitable... Change for the better is not.
September 12, 2019 at 1:41 pm
What happens each time the job runs is it deletes everything in the table (DELETE FROM table), then it loads in from the ODBC source to the table in SQL. The first table has over 2 million records, so you'd expect that to be big, but the ones with 1105, 55 and 2141 rows I'd expect to be a lot smaller. Even if I run a DELETE against any of these tables and the row count = 0, the size of the table remains the same.
September 12, 2019 at 1:47 pm
Also, if I run a query over these tables it takes longer because they're large in size, even though they don't have a large row count. In test I've created tables from scratch with the same data and they're only a few MB in size and return query results straight away. Which leads me to believe it's to do with them constantly updating.
September 16, 2019 at 2:39 pm
Can anyone help? To sum up, I have a number of tables that clear down (DELETE FROM table) and then import from an ODBC table. This happens every 5 minutes. But even though there are only around 1000 records, the table size keeps growing and growing. If I create a new table with the same ODBC connection and import the data, it's only a few MB. So obviously the issue is that the table is updating every 5 minutes, but how do I stop it from growing?
September 16, 2019 at 6:37 pm
The problem is your destination table is actually a HEAP - and you are deleting. A DELETE on a heap (without an exclusive table lock) does not deallocate the emptied pages, and subsequent inserts are generally added to newly allocated pages at the end of the heap rather than reusing that space.
The fix is one of the following: use TRUNCATE TABLE instead of DELETE (which deallocates all pages), create a clustered index on the table, or rebuild the heap after the deletes.
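A minimal sketch of those options, assuming a destination table named dbo.StagingTable (the actual table and column names in this thread aren't shown):

```sql
-- Option 1: TRUNCATE deallocates all of the table's pages,
-- unlike DELETE on a heap
TRUNCATE TABLE dbo.StagingTable;

-- Option 2: a clustered index lets the engine deallocate and
-- reuse pages after deletes ("id" is a hypothetical key column)
CREATE CLUSTERED INDEX CIX_StagingTable ON dbo.StagingTable (id);

-- Option 3: keep the heap, but rebuild it after each load cycle
-- to reclaim the empty pages
ALTER TABLE dbo.StagingTable REBUILD;
```

Note that TRUNCATE TABLE needs ALTER permission on the table and can't be used if the table is referenced by a foreign key, so in an SSIS package the clustered-index route is often the least disruptive.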
Jeffrey Williams
“We are all faced with a series of great opportunities brilliantly disguised as impossible situations.”
― Charles R. Swindoll
How to post questions to get better answers faster
Managing Transaction Logs
September 17, 2019 at 2:30 pm
Hmmm... interesting. I just found out that a 3rd party database on my system has this problem and the tables are NOT heaps. Multiple Clustered and Non-Clustered indexes have the same problem and none of the Clustered Indexes contain LOB columns nor any "Row Overflow" data.
sys.dm_db_index_physical_stats does NOT catch the issue, either. It reported the largest clustered index as only having ~3GB worth of pages, virtually zero logical fragmentation, and an average page density of >99% even though the "Unused" column from sp_spaceused for the table reported more than 26GB.
Rebuilding the indexes fixed that. DBCC CLEANTABLE had no effect.
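For anyone who wants to check a table for the same symptom, a sketch of the checks and the rebuild described above (the table name is a placeholder):

```sql
-- Compare reserved vs. used space; a large "unused" value is the symptom
EXEC sp_spaceused N'dbo.YourTable';

-- Per-index view of allocated vs. in-use pages
SELECT i.name AS index_name,
       ps.used_page_count,
       ps.reserved_page_count
FROM sys.dm_db_partition_stats AS ps
JOIN sys.indexes AS i
  ON i.object_id = ps.object_id
 AND i.index_id  = ps.index_id
WHERE ps.object_id = OBJECT_ID(N'dbo.YourTable');

-- What fixed it here: rebuilding the indexes
ALTER INDEX ALL ON dbo.YourTable REBUILD;
```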
I have no idea why the pages weren't auto-magically returned to an un-allocated status after the deletes that I know have been occurring. I did, however, find this little tidbit (see link below). It does seem to match what's happening but it also seems that this was meant for SQL Server 2000 or less but the article was updated as recently as April 2018.
Like I said before, I've never seen this issue before and previously thought it was impossible.
--Jeff Moden
Change is inevitable... Change for the better is not.
September 17, 2019 at 2:41 pm
p.s. I also verified that the Compatibility Level for the database is SQL Server 2016.
For those interested, the 3rd party database is "WhatsUp" and was created by the "WhatsUp Gold" software.
--Jeff Moden
Change is inevitable... Change for the better is not.
September 17, 2019 at 7:38 pm
Hmmm... interesting. I just found out that a 3rd party database on my system has this problem and the tables are NOT heaps. Multiple Clustered and Non-Clustered indexes have the same problem and none of the Clustered Indexes contain LOB columns nor any "Row Overflow" data.
sys.dm_db_index_physical_stats does NOT catch the issue, either. It reported the largest clustered index as only having ~3GB worth of pages, virtually zero logical fragmentation, and an average page density of >99% even though the "Unused" column from sp_spaceused for the table reported more than 26GB.
Rebuilding the indexes fixed that. DBCC CLEANTABLE had no effect.
I have no idea why the pages weren't auto-magically returned to an un-allocated status after the deletes that I know have been occurring. I did, however, find this little tidbit (see link below). It does seem to match what's happening but it also seems that this was meant for SQL Server 2000 or less but the article was updated as recently as April 2018.
Like I said before, I've never seen this issue before and previously thought it was impossible.
This sounds like the ghost cleanup process is disabled - or it is lagging way behind. Is it possible that someone disabled the ghost cleanup process on that system?
Jeffrey Williams
“We are all faced with a series of great opportunities brilliantly disguised as impossible situations.”
― Charles R. Swindoll
How to post questions to get better answers faster
Managing Transaction Logs
September 17, 2019 at 9:54 pm
This sounds like the ghost cleanup process is disabled - or it is lagging way behind. Is it possible that someone disabled the ghost cleanup process on that system?
Not on the system. All of the other databases, including some that also do deletes, are behaving as expected. To be honest, I didn't know you could disable the ghost cleanup process. It does sound like that's a possibility for this particular database. I'll have to first figure out if it's even possible to do such a thing.
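Trace flag 661 is the documented way to disable ghost record removal instance-wide, so one quick check is whether that flag is active (a sketch):

```sql
-- Lists all globally enabled trace flags; 661 disables ghost cleanup
DBCC TRACESTATUS (-1);

-- Or check that flag specifically across all sessions
DBCC TRACESTATUS (661, -1);
```

Since 661 is instance-wide rather than per-database, it wouldn't explain only one database misbehaving, which fits what you're describing.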
--Jeff Moden
Change is inevitable... Change for the better is not.
September 17, 2019 at 10:05 pm
Again, interesting read... I knew of the ghost process but never did a deep dive on it. I had no idea that it could be disabled, or that it would also not occur in a database where you couldn't take a shared lock (although I don't believe the latter is the problem in this case). Here's the link to the MS article I was reading on this subject...
--Jeff Moden
Change is inevitable... Change for the better is not.