June 10, 2013 at 6:42 am
With sp_spaceused you get the 'gross' size of a table (used size for data).
Summing all fields with the DATALENGTH function also gives a size for the table, though of course this is without the overhead.
The used_size is about 2 to 3 times the calculated size.
So how can I calculate the size more accurately?
Thanks for your time and attention.
ben brugman
Extra info:
This was calculated for heaps and clustered tables (90 percent fill rate); both were similar.
No deleted rows.
From 30 to 75 columns per table, with a lot of NULLs.
Overhead was 3 to 5 bytes for each field.
(180 to 280 bytes of overhead for each row).
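For reference, Microsoft's published estimation formula for heaps (the one behind the links cited later in this thread) can be sketched in Python. The constants (4-byte row header, 2-byte slot entry, NULL bitmap, 2-byte offset per variable-length column) follow that documented formula; the column counts and byte sizes below are hypothetical:

```python
import math

def estimate_row_size(fixed_bytes, num_cols, num_var_cols, var_bytes):
    """Per-row size following SQL Server's documented heap estimate:
    row header (4 bytes) + fixed data + NULL bitmap + variable-length block."""
    null_bitmap = 2 + math.ceil(num_cols / 8)   # 2-byte count + 1 bit per column
    var_block = 0
    if num_var_cols > 0:
        # 2-byte column count + 2-byte offset per variable column + the data
        var_block = 2 + num_var_cols * 2 + var_bytes
    return 4 + fixed_bytes + null_bitmap + var_block

def estimate_pages(row_size, num_rows, fill=1.0):
    """Pages needed, given 8096 usable bytes per page and a 2-byte slot entry."""
    rows_per_page = math.floor(8096 * fill / (row_size + 2))
    return math.ceil(num_rows / rows_per_page)

# Hypothetical table: 50 columns, 30 of them variable-length, 200 bytes of
# fixed data and 150 bytes of variable data actually present per row.
row = estimate_row_size(fixed_bytes=200, num_cols=50, num_var_cols=30, var_bytes=150)
pages = estimate_pages(row, num_rows=1_000_000, fill=0.9)
```

Note that even before any data, the bitmap, offsets, and header add up quickly on wide tables, which is consistent with the 180-280 bytes of per-row overhead mentioned above.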
June 10, 2013 at 6:46 am
ben.brugman (6/10/2013)
So how can I calculate the size more accurately?
The actual space used by a table is what sp_spaceused reports.
_____________________________________
Pablo (Paul) Berzukov
Author of Understanding Database Administration available at Amazon and other bookstores.
Disclaimer: Advice is provided to the best of my knowledge, but no implicit or explicit warranties are given. Since the advisor explicitly encourages testing any and all suggestions in a non-production test environment, the advisor should not be held liable or responsible for any actions taken based on the given advice.
June 10, 2013 at 8:15 am
PaulB-TheOneAndOnly (6/10/2013)
ben.brugman (6/10/2013)
So how can I calculate the size more accurately?
The actual space used by a table is what sp_spaceused reports.
Thanks for your reply, but this does not answer my question.
The links:
http://msdn.microsoft.com/en-us/library/ms189124.aspx
http://msdn.microsoft.com/en-us/library/ms178085.aspx
They do help, but the calculated size does not even come close to the usage reported by sp_spaceused.
Thanks,
ben
June 10, 2013 at 8:47 am
Do you mean calculate more accurately the size before actually creating the table?
If so, at least some of the inaccuracy may be that you need to factor in the page size. If your row uses 4.5 KB and you expect to have 1,000,000 rows, you might calculate 4.5 GB, but because only one such row fits on an 8 KB page (4.5 × 2 is too much), you end up using 1,000,000 8 KB pages (about 8 GB).
For smaller rows I'm not sure why it would be so far out...maybe incorrect calculations?
Indexes are easier to guess at (fragmentation).
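The page-fitting effect described above can be illustrated with a short sketch. The 8,096 usable data bytes per 8 KB page and the 2-byte slot-array entry come from SQL Server's documented page layout; the row sizes are hypothetical:

```python
import math

PAGE_DATA_BYTES = 8096  # usable data bytes on an 8 KB SQL Server page

def pages_needed(row_bytes, num_rows):
    # Each row also needs a 2-byte entry in the page's slot array.
    rows_per_page = max(1, PAGE_DATA_BYTES // (row_bytes + 2))
    return math.ceil(num_rows / rows_per_page)

# A 4.5 KB row: only one fits per page, so 1,000,000 rows occupy
# 1,000,000 pages (~8 GB), even though the raw data is only ~4.5 GB.
print(pages_needed(4608, 1_000_000))
```

The rounding loss shrinks as rows get smaller, which is why the effect alone does not explain a 2-3x gap for narrow rows.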
June 10, 2013 at 9:11 am
ben.brugman (6/10/2013)
With sp_spaceused you get the 'gross' size of a table (used size for data).
Summing all fields with the DATALENGTH function also gives a size for the table, though of course this is without the overhead.
The used_size is about 2 to 3 times the calculated size.
So how can I calculate the size more accurately?
Thanks for your time and attention.
ben brugman
This discussion of the subject may be of help to you.
June 10, 2013 at 10:28 am
Dird (6/10/2013)
Do you mean calculate more accurately the size before actually creating the table?
At the moment I am trying to do the calculation on existing tables. This allows me to check the method, which builds understanding and will help me calculate the size of future tables.
Ben
June 10, 2013 at 11:09 am
bitbucket-25253 (6/10/2013)
This discussion of the subject may be of help to you.
As you can see in the third message in this thread, I already mentioned exactly that link. The referenced page is really only a very rough guide.
If everything seems to be going well, you have obviously overlooked something.
For me it's not obvious what I am overlooking, hence the question.
Because of the large number of NULLs, the calculation on that web page is not very accurate. I'll try to come up with a more accurate calculation. I have to count the actual number of NULLs in the variable-length fields; I'll need access to the table for this, and at the moment I do not have access.
That way I can see the amount of overhead (quite a lot) that I am missing.
Ben
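The per-row overhead ben wants to account for can be sketched as follows. This is a rough sketch only: it assumes every variable-length column that is present keeps its 2-byte offset entry, per SQL Server's documented row format, and the column counts are hypothetical:

```python
import math

def expected_overhead_per_row(num_cols, num_var_cols_present):
    """Fixed per-row overhead under SQL Server's documented row layout,
    before any actual column data is counted."""
    header = 4                                  # row header
    null_bitmap = 2 + math.ceil(num_cols / 8)   # 2-byte count + 1 bit per column
    var_overhead = 2 + 2 * num_var_cols_present if num_var_cols_present else 0
    slot_entry = 2                              # entry in the page slot array
    return header + null_bitmap + var_overhead + slot_entry

# Hypothetical wide table: 75 columns, 60 of them variable-length and present.
print(expected_overhead_per_row(75, 60))
```

On a wide table this overhead alone is well over 100 bytes per row, and page fill factor and unused space push the effective number higher still, toward the 180-280 bytes per row reported earlier in the thread.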
June 11, 2013 at 7:17 am
June 12, 2013 at 10:30 am
Sean Pearce (6/11/2013)
ben.brugman (6/10/2013)
If everything seems to be going well, you have obviously overlooked something.
For me it's not obvious what I am overlooking, hence the question.
Don't mistake the cheeky signature line as a rude comment.
Well, I sure missed that one.
(English being my third language, the line was above the name, and being technical also means I tend to take things more literally than 'right-brained' people.) I do like the quote though.
thanks,
ben