December 28, 2007 at 10:03 am
I'm using SQL Server as a "warehouse" for old databases. Basically, I create an Access database of ~5 million rows from text files I get each week that provide a detailed look at inventory information for the company I work for. Each week we get new text files, as the system changes are drastic enough to warrant it. The previous week's files are migrated to SQL Server for storage and linked back into Access in case I need them. All my data is stored on a mirrored RAID array, so I have redundancy in the event of a drive failure.
This process has worked well for me until now. I'm running out of drive space (~1.3 GB per week will fill a 250 GB HD fast). So I looked through my tables and found two columns in each week's report that I can calculate from other data if I need them, so I removed them from each table. The problem is that I can't find the SQL Server equivalent of Access's "Compact and Repair" function. I know in Access that removing those two columns reduces the file size by ~200 MB, so this is worth doing. Can someone explain how to do the equivalent in SQL Server?
Regards,
Chris
January 2, 2008 at 9:34 am
Hi, a few questions 😉
How do you store your data in the SQL database: do you use char or varchar for the biggest fields? Do you use appropriate types for the given ranges of values?
What is the recovery model for your database? Do you create backups?
These are first things that come into my mind as far as disk space usage optimization is concerned.
Read Tony Rogerson's article about regaining space after dropping columns. Also look in BOL for "shrinking databases".
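To make the above concrete, here is a minimal T-SQL sketch of the steps, assuming a hypothetical database named `Warehouse` and weekly table `dbo.WeeklyInventory` (substitute your own names). `DBCC CLEANTABLE` reclaims space left by dropped variable-length columns, and `DBCC SHRINKDATABASE` returns unused pages to the operating system:

```sql
-- Reclaim the space freed by dropping variable-length columns
-- ('Warehouse' and 'dbo.WeeklyInventory' are placeholder names)
DBCC CLEANTABLE ('Warehouse', 'dbo.WeeklyInventory');

-- See how much space is reserved vs. actually used before shrinking
EXEC sp_spaceused;

-- Release unused pages back to the OS, leaving 10% free space
DBCC SHRINKDATABASE ('Warehouse', 10);

-- If the transaction log is what keeps growing, check the recovery model;
-- SIMPLE avoids an ever-growing log for a weekly load-and-archive workload
SELECT name, recovery_model_desc
FROM sys.databases
WHERE name = 'Warehouse';

ALTER DATABASE Warehouse SET RECOVERY SIMPLE;
```

Note that shrinking a database fragments its indexes, so rebuild or reorganize them afterwards if query performance matters on the archived tables.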
Are you aware that SQL Server Express database can grow up to 4GB only?
HTH
Piotr
...and your only reply is slàinte mhath