I've been using SQL 2005 on a part-time basis for a while now (those SQL
2000 servers work pretty well, you know!) and I'm still intrigued by the things I
find that have changed the rules of the game. It's pretty commonly known that
SQL pages are 8k and that the max row size is 8060 bytes (though Steve Jones
proved that isn't always the case in The Maximum Page Size in SQL Server 2000?).
But in SQL 2005, the 8060-byte max row size is even less true than it was before!
Assuming I'm not the only one that still has a SQL2K instance, create a test
table as follows:
CREATE TABLE [dbo].[TestTable](
    [LargeColumn1] [varchar](8000) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
    [LargeColumn2] [varchar](8000) COLLATE SQL_Latin1_General_CP1_CI_AS NULL
) ON [PRIMARY]
GO
You should get the following warning:
Warning: The table 'TestTable' has been created but its maximum row size
(16025) exceeds the maximum number of bytes per row (8060). INSERT or UPDATE of
a row in this table will fail if the resulting row length exceeds 8060 bytes.
The problem is that this table could work fine for a day, a week, or a
lifetime, and then boom, you get bitten by your bad table design. Just to prove
that really happens, let's inject a row and then make some changes:
insert into Testtable (LargeColumn1, LargeColumn2) values ('Blah1', 'Blah2')
update testtable set largecolumn1 = replicate ('0123456789', 800)
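If you want to confirm the sizes as you go, DATALENGTH reports the byte length stored in each column (just a quick check against our test table):
select datalength(largecolumn1) as Len1, datalength(largecolumn2) as Len2 from testtable
-- Len1 = 8000, Len2 = 5 ('Blah2')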
At this point LargeColumn1 is 8000 bytes, leaving us only 60 or so bytes for
LargeColumn2. Let's see what happens if we try to update it with 100
characters:
update testtable set largecolumn2 = replicate ('0123456789', 10)
We get this lovely error:
Msg 511, Level 16, State 1, Line 1
Cannot create a row of size 8113 which is greater than the allowable maximum
of 8060.
The statement has been terminated.
In fact, the most we can stuff into LargeColumn2 is an additional 47 bytes.
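If you want to see the boundary for yourself, a quick illustration on this test table (the exact number depends on your table's row overhead, so treat 47 as specific to this example) looks like this:
-- exactly 47 more characters still fits (row size = 8060)
update testtable set largecolumn2 = replicate ('0', 47)
-- one more byte pushes the row past 8060 and fails with error 511
update testtable set largecolumn2 = replicate ('0', 48)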
Now suppose we try our scenario on a SQL 2005 instance; what do you think might
happen? The first thing is that you get NO WARNING that you're exceeding the 8060-byte
limit when you run the CREATE TABLE statement. Why? Because it's not a real
limit anymore! Here is an excerpt from BOL (search on row-overflow to find the
entire detail):
Surpassing the 8,060-byte row-size limit might affect performance because
SQL Server 2005 Database Engine still maintains a limit of 8 KB per page. When a
combination of varchar, nvarchar, varbinary, sql_variant,
or CLR user-defined type columns exceeds this limit, the Database Engine moves
the record column with the largest width to another page in the
ROW_OVERFLOW_DATA allocation unit, while maintaining a 24-byte pointer on the
original page. Moving large records to another page occurs dynamically as
records are lengthened based on update operations. Update operations that
shorten records may cause records to be moved back to the original page in the IN_ROW_DATA allocation unit. Also, querying and performing other select
operations, such as sorts or joins on large records that contain row-overflow
data slows processing time, because these records are processed synchronously
instead of asynchronously.
We can fully populate both columns with no error message:
insert into Testtable (LargeColumn1, LargeColumn2) values ('Blah1', 'Blah2')
update testtable set largecolumn1 = replicate ('0123456789', 800)
update testtable set largecolumn2 = replicate ('0123456789', 800)
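If you're curious where the overflowed data ends up, one way to peek is to query the SQL 2005 catalog views (a quick sketch; the join below covers the in-row and row-overflow allocation units for our test table):
select au.type_desc, au.total_pages, au.used_pages
from sys.allocation_units au
join sys.partitions p on au.container_id = p.hobt_id
where p.object_id = object_id('dbo.TestTable')
After the second update you should see a ROW_OVERFLOW_DATA allocation unit with pages in use alongside the usual IN_ROW_DATA unit.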
This new behavior affects the entire record
rather than a single column, and in the worst case you could have columns
bouncing back and forth between in-row and out-of-row storage. Here's another
interesting twist: setting the compatibility level to 70 or 80 on a SQL 2005
instance does not change the behavior, so you could upgrade a SQL 2000 server,
set SQL2K compatibility, and still get the new behavior. One side effect of that
is that if you had designed a table in SQL 2000 where the total width exceeded
the maximum and always worried that you might get burned someday, this could save
you!
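If you want to try that yourself, a quick sketch (I'm assuming a scratch database named TestDB here - substitute your own) would be:
-- set SQL 2000 (80) compatibility on the test database
exec sp_dbcmptlevel 'TestDB', 80
GO
-- the wide updates still succeed; the row simply overflows
update testtable set largecolumn1 = replicate ('0123456789', 800)
update testtable set largecolumn2 = replicate ('0123456789', 800)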
I haven't decided whether I like the new behavior or not. My Zen side says that
there is a time and place for all options, but the more practical side says that
sometimes it's good to have some solid rules to work within. One thing I do
dislike is that the new behavior is silent. If you've been a DBA for a while, you
probably rely on that warning message to keep you from creating super-wide
tables by accident, but now you'll have to work harder and total up the bytes
yourself (I'll show one way to do that after the sample message below). I
think a new version of the warning message would have been nice, something like
this:
Feature Change Warning: The table 'TestTable' has been created but its maximum row size
(16025) exceeds the maximum number of bytes per row (8060). On instances of SQL
2000 and earlier, INSERT or UPDATE of
a row in this table will fail if the resulting row length exceeds 8060 bytes. On
SQL 2005 instances, using any compatibility level, there are cases where it is
possible to exceed the 8060-byte restriction. Please see row-overflow in BOL for
more information.
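In the meantime, here is one way I might total up the bytes myself and flag candidate tables (a rough sketch against the SQL 2005 catalog views; it only sums the declared column lengths, ignores row overhead, and counts (max) columns as a 16-byte pointer for the purpose of the estimate):
select t.name as TableName,
    sum(case when c.max_length = -1 then 16 else c.max_length end) as DeclaredBytes
from sys.tables t
join sys.columns c on c.object_id = t.object_id
group by t.name
having sum(case when c.max_length = -1 then 16 else c.max_length end) > 8060
Run against our test database, it should flag TestTable with 16,000 declared bytes - the warning SQL 2000 would have given us for free.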