December 30, 2003 at 6:12 am
Greetings.
I currently have a server with 3 large DBs on it.
The two largest are currently using around 1-1.5 TB each.
I am running out of space at a rapid rate. Every night I use up around 20 GB, but I need around 200 GB free because of the log files.
What I want to do is look at the tables. These were designed long before I arrived.
When running
Select -- top 100
    O.Name,
    C.Name
From
    SysObjects O JOIN SysColumns C ON O.ID = C.ID
Where
    O.Type = 'U'
    AND C.XType = 167 -- varchar
    AND C.Length = 50 -- width
I get 3428 columns which are varchar(50) (EM Default)
Now, I know that SQL uses 8 KB (8192-byte) pages with a 96-byte header.
I am fairly sure that it will fit as many rows as it can into a page, provided (row size * number of rows) < (8192 - 96).
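To put rough numbers on it: a page leaves 8192 - 96 = 8096 bytes for rows, so if a row really occupied its full declared width of, say, 400 bytes, a page would hold around 8096 / 400 ≈ 20 rows, whereas at an actual 150 bytes it would hold around 53.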
If I have columns which are varchar(50) (defaulted by EM), would:
SQL use the max permissible row size based on the max column widths, or the actual size?
I know that most columns will NOT contain 50 characters but MUCH less (under 20-30).
Is SQL fitting fewer rows per page because of the max width, or is it using the actual width?
Assume SQL is using the max width. After resizing the column widths, would SQL automatically repack the rows? How do I check the number of rows per page?
If it's using the actual size, then I am running into a wall. 🙁 Not good.
Most of the tables have millions of rows, typically fewer than 20 million. Some of the partitioned views contain more than 300 million, hence the underlying tables are large.
Don't bother asking about scaling out. I have ordered more space, but I'm waiting on the company that supports the machine. Not gonna happen this week / month.
Any ideas how I can optimise the drive usage?
All the space used is by data files and not logs.
Thanks
Cheers,
Crispin
December 30, 2003 at 9:43 am
Lots of questions in there!
As far as I know, SQL Server uses only the actual size of the varchar data (with some extra bytes to store the real length and some other bytes for the NULL columns), so you may start running into the wall.
You can create a table with a declared row size larger than 8060 bytes, but you will get an error if a row ever really gets bigger than that.
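You can see it with a throwaway table (the names are made up). The CREATE succeeds with only a warning, but an INSERT that really goes over the limit fails:

CREATE TABLE dbo.WideTest (a varchar(8000), b varchar(8000)) -- warning: potential row size exceeds 8060
INSERT dbo.WideTest VALUES (REPLICATE('x', 8000), REPLICATE('x', 8000)) -- error: row too large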
If you are sure that your varchar columns will never get larger than 20-30 bytes, then I would suggest reducing the declared size of those columns.
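Something like this would do it for one column, once you have checked the real maximum length (the table and column names are only examples):

SELECT MAX(DATALENGTH(SomeCol)) FROM dbo.SomeTable -- make sure nothing is wider than the new size
ALTER TABLE dbo.SomeTable ALTER COLUMN SomeCol varchar(30) NULL -- keep the column's original nullability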
In your particular case I would check the following things, which may reduce the db size:
[*]Is it possible to archive/remove some old data?
[*]Normalise your db as much as possible to eliminate the redundancy
[*]Do not allow NULL because NULL markers are "eating" some extra space
[*]Do not use nchar, nvarchar, ntext as they take twice the space needed for a normal varchar, char, text
[*]Use smalldatetime instead of datetime if you don't need the precision
[*]If you have fixed-length data in the varchar columns, change it to the char datatype, as some extra bytes are used to store the real length of the column
[*]Use integer datatype for char/varchar columns representing exclusively integer data (I've seen it several times)
[*]And last but not least, think about buying some alternative storage like a SATA disk subsystem. It is a bit slower but definitely not nearly as expensive as an FC or SCSI box. It is well suited for online archive data.
Bye
Gabor
December 30, 2003 at 10:02 am
You may want to run
DBCC SHOWCONTIG WITH TABLERESULTS
It will answer most of your questions!
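For a single table it would be something like this (the table name is just an example):

DBCC SHOWCONTIG ('dbo.MyBigTable') WITH TABLERESULTS

The Rows and Pages columns give you rows per page, and AverageRecordSize / AveragePageDensity show how full the pages really are.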
* Noel
December 30, 2003 at 10:30 am
Gabor, thanks for the info.
quote:
You can create a table with a declared row size larger than 8060 bytes, but you will get an error if a row ever really gets bigger than that.
I knew that. Should have answered my own question.
quote:
Is it possible to archive/remove some old data?
No, I need to retain all the data for 4 years.
quote:
Normalise your db as much as possible to eliminate the redundancy
Again, no. It's a warehouse. If I normalise it more than it already is, the box gets too busy.
quote:
Do not allow NULL because NULL markers are "eating" some extra space
Will try.
quote:
Do not use nchar, nvarchar, ntext as they take twice the space needed for a normal varchar, char, text
Not used anywhere
quote:
Use smalldatetime instead of datetime if you don't need the precision

Gonna look into this. I know this is a problem. Many fields are datetime but contain only date.

quote:
If you have fixed-length data in the varchar columns, change it to the char datatype, as some extra bytes are used to store the real length of the column
This can be a "dangerous" one. A null varchar uses only a couple of bytes whereas a null char will use the entire width.
Many nullable fields. Yes, see above 🙂
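If you want to see the difference, a quick throwaway test along these lines (made-up table names) should show the char table coming out far bigger:

CREATE TABLE dbo.TestCharNull (c char(50) NULL)
CREATE TABLE dbo.TestVarcharNull (c varchar(50) NULL)
DECLARE @i int
SET @i = 0
WHILE @i < 10000
BEGIN
    INSERT dbo.TestCharNull VALUES (NULL)
    INSERT dbo.TestVarcharNull VALUES (NULL)
    SET @i = @i + 1
END
EXEC sp_spaceused 'TestCharNull'    -- the fixed width is allocated even when NULL
EXEC sp_spaceused 'TestVarcharNull' -- just the row overhead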
quote:
Use integer datatype for char/varchar columns representing exclusively integer data (I've seen it several times)
This was "apparently" attempted a while ago. Problem was, they were needed as the varchar type. The extra processor power required for converting back to varchar was not viable.
quote:
And last but not least, think about buying some alternative storage like a SATA disk subsystem. It is a bit slower but definitely not nearly as expensive as an FC or SCSI box. It is well suited for online archive data.
I can only make recommendations about the hardware. There is another company that supports it. Not under my control.
Unfortunately, ALL data for the entire period is required. Moving old data onto slower / cheaper disks would hinder performance globally, because the partitioned views still access the "old" data.
noeld:
Thanks but how would fragmentation consume space? I can understand a performance issue. But space?
Cheers,
Crispin
December 30, 2003 at 10:48 am
Fragmentation definitely uses more space (and therefore hinders performance).
If you want to insert a new row into a page and there is not enough free space to store it, there will be a page split, and some data will migrate from the old page to the new one along with the new row.
So through fragmentation you can end up with several times the size of your data.
I've seen it myself when I was called in by a customer who had never done any maintenance tasks. They were complaining about slow reaction times; a DBREINDEX helped them a lot.
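Something along these lines rebuilds all the indexes on a table and compacts the pages (the table name is an example; pick a fillfactor that suits your insert pattern):

DBCC DBREINDEX ('dbo.MyBigTable', '', 90)

Run DBCC SHOWCONTIG before and after and you should see the page density improve.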
Bye
Gabor