August 19, 2013 at 7:10 am
In SQL Server 2008 R2, the maximum row size of 8,060 bytes does not apply to varchar(max), nvarchar(max), varbinary(max), text, image, or xml columns.
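As a quick illustration of that limit (a sketch only; the table and column names are made up), a row of ordinary variable-length columns that exceeds 8,060 bytes fails at insert time with Msg 511, while a (max) column can be pushed off-row and so is exempt:

```sql
-- Creates with a warning, but fails at INSERT with Msg 511:
-- both non-max values must fit in-row.
CREATE TABLE dbo.RowLimitDemo (a VARCHAR(8000), b VARCHAR(8000));
INSERT INTO dbo.RowLimitDemo
VALUES (REPLICATE('x', 8000), REPLICATE('y', 8000));

-- Succeeds: the (max) value is moved off-row when it doesn't fit,
-- leaving only a 16-byte pointer in the row.
CREATE TABLE dbo.MaxDemo (a VARCHAR(8000), b VARCHAR(MAX));
INSERT INTO dbo.MaxDemo
VALUES (REPLICATE('x', 8000), REPLICATE(CAST('y' AS VARCHAR(MAX)), 8000));
```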
I thought that was a statement of fact, but I'm running into difficulties testing an upgrade of a backup of one of our customer databases to the latest version of our schema.
The table in question has 23 columns, and we're adding 2 new columns. One column is an XML column; all of the rest are fixed-width columns (no TEXT, no IMAGE, no MAX types). I ran through the upgrade script for that table. It essentially drops the msrepl_tran_version column, adds the two new columns (CHAR(1) and BIGINT), then adds the msrepl_tran_version (UNIQUEIDENTIFIER) column back with a default constraint of NEWID(). (I know none of this is strictly necessary; it's something we have to do with our OEM version of the software, because the people we OEM to insist that the ordinal positions of the columns stay the same, with msrepl_tran_version always the last column.)
Anyway, whilst adding the column back I get the following error:
Msg 511, Level 16, State 1, Line 4
Cannot create a row of size 8063 which is greater than the allowable maximum row size of 8060.
I wrote a script to take a look at the data, since I assume this is a data issue (upgrading a blank version of this schema to the latest one works fine), but I can't see any problems. I run the script like this:
DECLARE @table VARCHAR(30) = 'TB_DIM_ITEM_PKEY_MAP', @idcol VARCHAR(20) = 'PKEY_MAP_URN', @sql NVARCHAR(MAX);
SET @sql = 'SELECT ' + @idcol + ', ROW_SIZE = (0';
SELECT @sql = @sql + ' + ISNULL(DATALENGTH(' + name + '), 1)'
FROM syscolumns
WHERE id = OBJECT_ID(@table) --AND name <> 'XML_DATA'
;
SET @sql = @sql + ') FROM ' + @table + ' ORDER BY ROW_SIZE DESC';
EXEC sp_executesql @sql;
The biggest row size is 130273; however, if I uncomment the XML_DATA filter to exclude the XML column, the biggest row size is 190.
So I honestly can't see the issue. I've set up a test that inserts the data from this table into a copy of the table with the new definitions, one row at a time, to see if I can find the "bad" data. But since there is a lot of data in the table, this will take some time. Has anyone else come across the same issue? Or does anyone know more about it than I do? 😉
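One caveat with the DATALENGTH approach: for an XML column it returns the full size of the XML blob, not the number of bytes actually stored in-row, so it can't tell you how close a row actually is to the 8,060-byte limit. Checking the real on-page record size via a DMV might be more telling (a sketch; the DETAILED mode scans the whole table, so run it somewhere it won't hurt):

```sql
-- Largest actual in-row record size, in bytes, per allocation unit.
SELECT index_id, alloc_unit_type_desc, max_record_size_in_bytes
FROM sys.dm_db_index_physical_stats(
         DB_ID(), OBJECT_ID('dbo.TB_DIM_ITEM_PKEY_MAP'), NULL, NULL, 'DETAILED')
WHERE alloc_unit_type_desc = 'IN_ROW_DATA';
```

If max_record_size_in_bytes is close to 8,060, some rows have little or no headroom for the new columns.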
August 19, 2013 at 7:25 am
Personally, I'd label this as a bug, and it's quite annoying, but I believe this happens because of the large-value-types in-row optimisation.
If the original data was small enough to fit in-row with the XML data also stored in-row, adding two new columns won't remap the XML column to a pointer and push it off-row. So you hit this error if, for example, your XML data only just fitted before and the row size was just under 8,060 bytes, and the new columns push it over the edge.
You should find that creating a new table with the new columns and migrating the data over works fine.
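If rebuilding the table isn't an option, another possible workaround (untested against this schema, so treat it as a sketch) is to force large value types off-row before adding the new columns. Note that the option only affects values as they are written, so existing rows need to be rewritten:

```sql
-- Store xml and (max) values off-row from now on.
EXEC sp_tableoption 'dbo.TB_DIM_ITEM_PKEY_MAP', 'large value types out of row', 1;

-- Rewrite existing rows so the in-row XML values move off-row,
-- freeing in-row space for the new fixed-width columns.
UPDATE dbo.TB_DIM_ITEM_PKEY_MAP SET XML_DATA = XML_DATA;
```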
August 19, 2013 at 7:48 am
HowardW (8/19/2013)
Personally, I'd label this as a bug, and it's quite annoying, but I believe this happens because of the large-value-types in-row optimisation. If the original data was small enough to fit in-row with the XML data also stored in-row, adding two new columns won't remap the XML column to a pointer and push it off-row. So you hit this error if, for example, your XML data only just fitted before and the row size was just under 8,060 bytes, and the new columns push it over the edge.
You should find that creating a new table with the new columns and migrating the data over works fine.
Sounds like a plausible explanation. I've started writing a migration script for the table, which I'm sure will solve the issue; I just wish I could prove that this is the cause. I tried grabbing a few of the rows with a DATALENGTH around 8,060 and inserting them into a new table, then adding the new columns to that table, but unfortunately it worked.
Thanks for the quick response Howard.