November 2, 2001 at 12:00 am
Comments posted to this topic are about the content posted at http://www.sqlservercentral.com/columnists/sjones/pagesize.asp
February 4, 2002 at 7:36 am
Thanks for the interesting information. I'm always interested in learning more about how it all fits together. Keep up the good work.
February 4, 2002 at 2:28 pm
This article was interesting to me. I have gotten the error when creating tables; however, when I create a table in SQL Server 7.0 it lets me, even though I exceed the max row size.
Robert Marda
Robert W. Marda
Billing and OSS Specialist - SQL Programmer
MCL Systems
February 4, 2002 at 5:48 pm
Access 97 was like that, but Access2K enforces the limit at design time. Not sure which method I like better!
Andy
February 5, 2002 at 11:32 am
You can create the table in SQL 2000 as well, but you will have issues if the data exceeds the size.
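To illustrate (just a sketch; the table and column names are made up, and the exact warning text varies by version):

create table dbo.WideRow (
    a varchar(5000) not null,
    b varchar(5000) not null
)
-- The table is created, but with a warning that its maximum row size exceeds the 8060-byte limit.

insert into dbo.WideRow (a, b)
values (replicate('x', 5000), replicate('y', 5000))
-- The failure only shows up at run time, once an actual row is pushed past the limit.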
Steve Jones
February 10, 2002 at 5:55 pm
I agree that MS doesn't do a very good job documenting the page size. While it's interesting to poke around the internals, it would be a lot easier if they either just quoted the number you came up with or made it clearer in the docs. I try to avoid getting anywhere NEAR 8000 bytes in width, but if you need to go for the max for whatever reason, it's nice to know what is really doable. Good article.
Andy
February 11, 2002 at 4:29 am
I found the article informative and quite helpful. I have long been curious about the size but had no time to test this myself. Good piece of work.
February 19, 2002 at 9:19 am
I haven't hit this problem yet, but this will certainly help me when I stumble upon things of this nature. Great article!
April 29, 2002 at 1:25 am
That's a great one indeed. I stumbled on this when I was guiding one of my friends' academic projects; we ended up blaming MS for poor documentation and split the columns. But we didn't think about researching the actual size, and we ended up commenting, "Hey, this thing is doing something more without telling us." Thanks a lot for the article.
~*~~*~~*~~*~~*~~*~~*~~*~~*~~*~~*~~*~~*~~*~~*~~*~~*~~*~~*~~*~
It's the music that matters for a soulful experience... not the instrument.
~*~~*~~*~~*~~*~~*~~*~~*~~*~~*~~*~~*~~*~~*~~*~~*~~*~~*~~*~~*~
December 5, 2003 at 3:01 pm
I first read it in the Best of... book, reread it now, and think it is undervalued and has not yet had the audience it deserves!
Frank
--
Frank Kalis
Microsoft SQL Server MVP
Webmaster: http://www.insidesql.org/blogs
My blog: http://www.insidesql.org/blogs/frankkalis/
December 5, 2003 at 6:57 pm
Thanks for the compliment!
Steve Jones
http://www.sqlservercentral.com/columnists/sjones
The Best of SQL Server Central.com 2002 - http://www.sqlservercentral.com/bestof/
October 5, 2004 at 3:38 pm
The article is really good. Well done.
raj
May 18, 2006 at 11:59 pm
Great article, Steve. You know, I think that the reason that they do not get too much into overhead is because it varies with the number of columns and so forth. For example, on SQL 2005 (which has a much better error message for this than its predecessors did!), look what happens with two create table statements, both of which would nominally require 8060 bytes of storage for the datatypes used, all of which are fixed-length and non-nullable:
First statement:

create table dbo.t1 (x char(60) not null, y char(8000) not null)

Msg 1701, Level 16, State 1, Server MINOTAUR, Line 1
Creating or altering table 't1' failed because the minimum row size would be 8067, including 7 bytes of internal overhead. This exceeds the maximum allowable table row size of 8060 bytes.

Second statement:

create table dbo.t1 (
    int1 bigint,
    int2 bigint,
    ... -- whole lot of column definitions omitted :o)
    int1006 bigint,
    int1007 bigint,
    int1008 int
)

Msg 1701, Level 16, State 1, Server MINOTAUR, Line 1
Creating or altering table 't1' failed because the minimum row size would be 8192, including 132 bytes of internal overhead. This exceeds the maximum allowable table row size of 8060 bytes.
The stated difference in overhead (7 bytes for the first statement and 132 for the second) is pretty significant. So it would appear that the answer to the question "what is the largest number of bytes of actual data that I can stuff into the base row (i.e., not using large datatypes)?" is (all together now...) "it depends."
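If you assume the commonly described fixed-row layout (4 bytes of status and length header, 2 bytes for the column count, plus a NULL bitmap of one bit per column rounded up to whole bytes), both overhead figures above fall out of simple arithmetic:

2 columns: 4 + 2 + ceiling(2/8) = 4 + 2 + 1 = 7 bytes
1008 columns: 4 + 2 + ceiling(1008/8) = 4 + 2 + 126 = 132 bytes

Or as a quick back-of-the-envelope query:

declare @cols int
set @cols = 1008
-- estimated per-row overhead for an all-fixed-length, non-nullable row with @cols columns
select 4 + 2 + cast(ceiling(@cols / 8.0) as int) as estimated_overhead_bytes -- returns 132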
You've definitely given me something to think about here - I'd never thought about the space used by that back pointer when calculating space usage. This might help me explain why a table I built with FILLFACTOR 60 (carefully calculated and highly predictable) keeps getting lots of page splits!
Thanks a lot for bringing all of this to our attention!
Cheers,
Chris