Emptying data file on very big DB

  • After splitting the initial 5 TB database into two, we have fragmented data files in both DBs (a lot of unused space). As shrinking the whole database did not work for us, I started a DBCC SHRINKFILE ('myDataFile', EMPTYFILE) command (there are 32 data files in each DB). But releasing the free space is going very, very slowly: out of a 20 GB allocated file, only 1 GB has been returned to available space in 10 days.

    Does anybody have experience with emptying files on a big DB? Thanks
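
    For reference, the command I'm running and the query I use to check progress look roughly like this (the file name is from my post; the database name is a placeholder):

        USE myDB;  -- placeholder database name

        -- Move all data out of the file into the other files of the same
        -- filegroup; the file itself stays until it is removed explicitly.
        DBCC SHRINKFILE ('myDataFile', EMPTYFILE);

        -- Check allocated vs. used space per file (size is in 8 KB pages,
        -- so dividing by 128 gives MB).
        SELECT name,
               size / 128 AS allocated_mb,
               FILEPROPERTY(name, 'SpaceUsed') / 128 AS used_mb
        FROM   sys.database_files;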

  • A 1 TB database was running DBCC SHRINKDATABASE for over a day before I had to cancel it; a 100 GB file took about 10 hours with DBCC SHRINKFILE. And within 3 months the data had grown back, so in a way it was a waste of time. Why are you using EMPTYFILE? You can specify a target size to leave in the file, in MB, instead.
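
    For example, to shrink the file to a 5,000 MB target instead of emptying it (the number is purely illustrative):

        -- Shrink the file to a target size in MB rather than emptying it.
        DBCC SHRINKFILE ('myDataFile', 5000);

        -- If you only need to release unused space at the end of the file,
        -- without moving any pages, this variant is much faster:
        DBCC SHRINKFILE ('myDataFile', TRUNCATEONLY);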

  • We decided to experiment (on a development SQL Server, of course) with files/filegroups to save backup time. Our DB has 34 files (all in the PRIMARY filegroup) with a lot of available space (around 50% of 4.5 TB). Since the file-shrink attempt failed, my intention was to 1) empty some files, 2) create some filegroups (say 6, one per hard drive), and 3) move/recreate some files in the new filegroups, as sketched below. As a result we could back up by filegroup. Unfortunately, I am stuck at emptying the files. Thanks
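
    Roughly what I have in mind for steps 2) and 3) (all names and paths here are placeholders):

        -- 2) Create a new filegroup and add a file to it on its own drive.
        ALTER DATABASE myDB ADD FILEGROUP FG1;

        ALTER DATABASE myDB
        ADD FILE (NAME = 'myDataFile_FG1',
                  FILENAME = 'E:\Data\myDataFile_FG1.ndf',
                  SIZE = 100GB)
        TO FILEGROUP FG1;

        -- 3) After the tables/indexes are rebuilt on FG1, each filegroup
        -- can be backed up on its own:
        BACKUP DATABASE myDB
            FILEGROUP = 'FG1'
            TO DISK = 'E:\Backup\myDB_FG1.bak';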
