March 9, 2011 at 1:59 pm
Have a production server with a 1.2TB db drive. Got down to only about 10% free space available, so I have moved about 200GB of data off the databases.
Do I have to shrink the databases to reclaim this space? My data (.mdf) files have not gone down in size. Maybe just restart the SQL service?
I am testing the shrink in a test environment on a 560GB database (6% to gain from compression, according to the SQL GUI) and it's still going, plus the front-end system/app that runs on that db is completely frozen. My only window to work on this production system is Friday 11:30pm through 6am Saturday.
Suggestions, advice?
March 9, 2011 at 2:34 pm
Removing data from a database won't make the files shrink. You have to actually tell it to shrink them to get that to happen.
Keep in mind that shrinking database files can result in serious data fragmentation, and you'll probably need to do index maintenance after the shrink to handle that.
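To illustrate the index maintenance mentioned above, here is a minimal T-SQL sketch (table and threshold are hypothetical; `dbo.BigTable` stands in for whatever large tables the shrink touched):

```sql
-- Check which indexes the shrink fragmented (30% is a common rule of thumb)
SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name AS index_name,
       ips.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id AND i.index_id = ips.index_id
WHERE ips.avg_fragmentation_in_percent > 30;

-- Rebuild the badly fragmented ones, e.g.:
ALTER INDEX ALL ON dbo.BigTable REBUILD;
```

Note that rebuilding indexes needs free space to work in, which can grow the file right back — one more reason to think twice before shrinking.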
- Gus "GSquared", RSVP, OODA, MAP, NMVP, FAQ, SAT, SQL, DNA, RNA, UOI, IOU, AM, PM, AD, BC, BCE, USA, UN, CF, ROFL, LOL, ETC
Property of The Thread
"Nobody knows the age of the human race, but everyone agrees it's old enough to know better." - Anon
March 9, 2011 at 3:04 pm
Now that you have moved data out and have free space within the database, do you actually need to shrink the file?
If you do, try 'truncateonly' first and see if that gets back enough. That will not fragment your database. If it truncates too much, use ALTER DATABASE to grow it back to the size you really want in one chunk.
If you have to shrink, do it in a quiet time and shrink it in chunks so it is not as intrusive and does not run for ages. A shrink can be cancelled and any data movement done so far will not be lost; the file size won't change, though.
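A sketch of the steps above (logical file name `MyDB_Data` and the sizes are hypothetical — substitute your own from `sys.database_files`):

```sql
-- 1. Release only the free space at the end of the file.
--    No page movement, so no fragmentation.
DBCC SHRINKFILE (N'MyDB_Data', TRUNCATEONLY);

-- 2. If that isn't enough, shrink toward a target in small steps
--    (target sizes are in MB), e.g. 10GB at a time:
DBCC SHRINKFILE (N'MyDB_Data', 900000);  -- then 890000, 880000, ...

-- 3. If you truncated too much, grow back in one chunk:
ALTER DATABASE MyDB MODIFY FILE (NAME = N'MyDB_Data', SIZE = 800GB);
```

Shrinking in small chunks keeps each operation short, so cancelling one loses little work.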
---------------------------------------------------------------------
March 10, 2011 at 11:38 am
I shrunk the entire db and it took about 6 hours. However, it did not release any drive space from the database (.mdf) file. I also restarted SQL afterwards.
I dropped at least 150GB of tables yesterday, but have nothing to show for it.
I ran the shrink in a test environment as well on a different db, and the .mdf size did decrease?? My SQL 2005 (Enterprise) w/SP2 settings are the same on both boxes. Don't get it...
March 10, 2011 at 11:49 am
What command did you run? I don't like SHRINKDATABASE because you can't control it well. I prefer SHRINKFILE if I need to get back space and clear things up.
However, do you need the 150GB back? If you won't use this space in the next 3-6 months, then it makes sense, but if you might use it again with data growth, then you're just wasting resources messing with the mdf file size. You want, and need, free space in the database.
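Before deciding, it's worth seeing how much free space actually sits inside each file. A quick way (standard catalog views, nothing specific to this thread):

```sql
-- Free space inside each file of the current database.
-- size and FILEPROPERTY are in 8KB pages; divide by 128 for MB.
SELECT name,
       size / 128 AS size_mb,
       CAST(FILEPROPERTY(name, 'SpaceUsed') AS int) / 128 AS used_mb,
       (size - CAST(FILEPROPERTY(name, 'SpaceUsed') AS int)) / 128 AS free_mb
FROM sys.database_files;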
March 10, 2011 at 12:06 pm
Steve Jones - SSC Editor (3/10/2011)
What command did you run? I don't like shrinkdb because you can't control it well. I prefer shrinkfile if I need to get back space and clear things up.
However, do you need the 150GB back? If you won't use this space in the next 3-6 months, then it makes sense, but if you might use it again with data growth, then you're just wasting resources messing with the mdf file size. You want, and need, free space in the database.
I used the command: DBCC SHRINKDATABASE(N'DB')
Don't have much disk space free on my data partition. My disk drives are on an EMC SAN and linked by fiber. My disk queue length is pegged at 100% almost non-stop and the disk I/O at 90-98% usage.
My goal was to trim some fat off of the db by getting rid of old, unused excess tables and data. Seems like disk contention...
Thoughts?
March 10, 2011 at 12:56 pm
The amount of space in the MDF file has zero to do with the performance of the disks. If you have high I/O, then you either don't have enough spindles, or perhaps some firmware/HBA issue in the xfer with the servers.
Disk Q length ought to be a number, not a %.
March 10, 2011 at 2:49 pm
Steve Jones - SSC Editor (3/10/2011)
The amount of space in the MDF file has zero to do with the performance of the disks. If you have high I/O, then you either don't have enough spindles, or perhaps some firmware/HBA issue in the xfer with the servers.
Disk Q length ought to be a number, not a %.
So, if I have a one TB drive with 100GB available and can reduce my data file by 200GB - you're telling me that going from 10% free to 30% free space has nothing to do with performance of the disks? If not, please explain...
I have been monitoring disk queue length from PerfMon and see it being pegged at 100% - technically the average for the last 2 days is 30. I have also seen 4,800 IOs/sec at the highest.
Thanks
March 10, 2011 at 4:12 pm
For SQL Server, for your mdf/ldf disks, free space on the disk drive has nothing to do with the performance of the disks. Why would it? There's no swapping out there, and even if there is, it's not like your Windows pagefile. The free space doesn't matter.
If you are monitoring very high levels of disk activity, then you don't have the right I/O system performance. That's not dependent on disk space.
Disk Q Length is a number. It's the number of requests waiting for disk access. If you see 100, then you have a Q of 100, which may or may not be an issue, depending on your SAN. However, you need to get a SAN person to help you. If you see 100%, then you are monitoring something else. What is the exact counter?
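Alongside the PerfMon counter, SQL Server's own view of I/O stalls per file can show whether the database files are the ones waiting. A sketch using standard DMVs (available from SQL 2005 onward):

```sql
-- Average read/write latency per database file since the instance started.
-- High avg_write_ms / avg_read_ms (tens of ms or more) points at the I/O path.
SELECT DB_NAME(vfs.database_id) AS db,
       mf.physical_name,
       vfs.num_of_reads,
       vfs.num_of_writes,
       vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS avg_read_ms,
       vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
  ON mf.database_id = vfs.database_id AND mf.file_id = vfs.file_id
ORDER BY avg_write_ms DESC;
```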
March 10, 2011 at 4:50 pm
Steve Jones - SSC Editor (3/10/2011)
For SQL Server, for your mdf/ldf disks, free space on the disk drive has nothing to do with the performance of the disks. Why would it? There's no swapping out there, and even if there is, it's not like your Windows pagefile. The free space doesn't matter.
If you are monitoring very high levels of disk activity, then you don't have the right I/O system performance. That's not dependent on disk space.
Disk Q Length is a number. It's the number of requests waiting for disk access. If you see 100, then you have a Q of 100, which may or may not be an issue, depending on your SAN. However you need to get a SAN person to help you. If you see 100%, then you are monitoring something else. What is the exact counter?
The exact counter average is 30.00. The max right now is showing 69.00. My SAN guy is going to dig into the drives tonight.....hopefully he can come up with something.
March 10, 2011 at 8:31 pm
Best of luck there. 30 seems high, but it really depends on your SAN, the spindles, fiber speeds, etc.
March 11, 2011 at 5:26 am
My guy didn't come up with anything - except that the I/O's are crazy high even when we don't have heavy volume on the system.
March 11, 2011 at 8:22 am
SkyBox (3/10/2011)
The exact counter average is 30.00. The max right now is showing 69.00. My SAN guy is going to dig into the drives tonight.....hopefully he can come up with something.
Today the max disk q length hit 4668.00
March 11, 2011 at 9:45 am
I hope you're not using NTFS-compressed drives (LUNs)! This is one of the worst things you can do.
What's the page life expectancy for this instance?
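Page life expectancy can be read straight from the instance with a standard DMV query (the counter name is exact; the Buffer Manager filter avoids per-NUMA-node duplicates on newer builds):

```sql
-- PLE in seconds; very low values mean the buffer pool is churning,
-- which drives physical I/O up.
SELECT object_name, counter_name, cntr_value AS page_life_expectancy_sec
FROM sys.dm_os_performance_counters
WHERE counter_name = N'Page life expectancy'
  AND object_name LIKE N'%Buffer Manager%';
```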
Johan
Learn to play, play to learn !
Dont drive faster than your guardian angel can fly ...
but keeping both feet on the ground wont get you anywhere :w00t:
- How to post Performance Problems
- How to post data/code to get the best help
- How to prevent a sore throat after hours of presenting ppt
press F1 for solution, press shift+F1 for urgent solution 😀
Need a bit of Powershell? How about this
Who am I ? Sometimes this is me but most of the time this is me
March 11, 2011 at 9:46 am
Disk queue isn't really the best counter to use where SAN is concerned, IMO. Try these:
http://geeksandwich.blogspot.com/2010/12/my-top-10-sql-perfmon-counters.html
HTH,
-David.