May 22, 2015 at 1:16 pm
I have a strange problem that I have never seen before. I am trying to alter a table to add a field, and I am greeted with an error saying I would exceed the maximum row size, when in fact I am not even remotely close.
Here is the statement:
IF NOT EXISTS (SELECT * FROM [addb15].[FSParallel].INFORMATION_SCHEMA.COLUMNS
               WHERE TABLE_NAME = 'AccountingPeriod'
                 AND COLUMN_NAME = 'ImportIdentity')
BEGIN
    EXEC [addb15].[FSParallel].sys.sp_executesql N'ALTER TABLE [dbo].[AccountingPeriod] ADD [ImportIdentity] int'
END
Here is the table that I want to alter:
CREATE TABLE [dbo].[AccountingPeriod](
[AccountingPeriodGUID] [uniqueidentifier] NOT NULL CONSTRAINT [DF_AccountingPeriod_AccountingPeriodGUID] DEFAULT (newsequentialid()),
[AccountingPeriodDate] [smalldatetime] NOT NULL CONSTRAINT [DF_AccountingPeriod_AccountingPeriodDate] DEFAULT (getdate()),
[WeekOfMonth] [tinyint] NOT NULL,
[BiWeeklyCycle] [tinyint] NOT NULL,
CONSTRAINT [PK_AccountingPeriod] PRIMARY KEY CLUSTERED
(
[AccountingPeriodGUID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
Here is my error message:
Msg 1701, Level 16, State 1, Line 1
Creating or altering table 'AccountingPeriod' failed because the minimum row size would be 8061, including 8035 bytes of internal overhead. This exceeds the maximum allowable table row size of 8060 bytes.
Does anyone have a clue? I am totally lost as to how this is happening.
Regards, Jeffery Williams
http://www.linkedin.com/in/jwilliamsoh
May 22, 2015 at 2:22 pm
I know this does not explain your problem, but have you tried creating a new table with the new definition, moving the data over (you pick the method; I like SSIS), and then dropping the old one?
Regarding the error itself, I am not sure, but I think the ALTER statement makes the change in place and on the fly, keeping the old data and adding the new during the process, then dropping the old once done. If that is correct, you may be exceeding the actual max size during the operation.
May 22, 2015 at 2:25 pm
I prefer to code it, and yes, I tried that; good thought.
I am a contractor and have been at this site for a few weeks. The table has several FK relationships, and while I can recreate those, I would rather not be that disruptive. That said, I would also like to know what is going on.
Regards, Jeffery Williams
http://www.linkedin.com/in/jwilliamsoh
May 22, 2015 at 2:46 pm
Jeffery Williams (5/22/2015)
Had to do some mods to the code you posted. Ran the following in a sandbox database with no issues.
CREATE TABLE [dbo].[AccountingPeriod](
[AccountingPeriodGUID] [uniqueidentifier] NOT NULL CONSTRAINT [DF_AccountingPeriod_AccountingPeriodGUID] DEFAULT (newsequentialid()),
[AccountingPeriodDate] [smalldatetime] NOT NULL CONSTRAINT [DF_AccountingPeriod_AccountingPeriodDate] DEFAULT (getdate()),
[WeekOfMonth] [tinyint] NOT NULL,
[BiWeeklyCycle] [tinyint] NOT NULL,
CONSTRAINT [PK_AccountingPeriod] PRIMARY KEY CLUSTERED
(
[AccountingPeriodGUID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO
IF NOT EXISTS (SELECT * FROM INFORMATION_SCHEMA.COLUMNS
               WHERE TABLE_NAME = 'AccountingPeriod'
                 AND COLUMN_NAME = 'ImportIdentity')
BEGIN
    EXEC sys.sp_executesql N'ALTER TABLE [dbo].[AccountingPeriod] ADD [ImportIdentity] int'
END
GO
Not sure, but it feels like we are missing something.
May 22, 2015 at 3:47 pm
The byte math in the error message makes sense.
AccountingPeriodGUID - 16
AccountingPeriodDate - 4
WeekOfMonth - 1
BiWeeklyCycle - 1
ImportIdentity - 4
Total - 26
26 bytes + 8035 bytes of internal overhead = 8061
What doesn't make sense is where the huge amount of internal overhead is coming from.
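One hedged way to see where that overhead is hiding: SQL Server keeps per-partition column metadata, including columns that were dropped but whose space was never reclaimed, in the internal view sys.system_internals_partition_columns. This view is undocumented and can change between versions, so treat the following only as a diagnostic sketch:

```sql
-- Sketch: list per-partition column metadata for the table, including
-- dropped columns. sys.system_internals_partition_columns is undocumented
-- and may differ across SQL Server versions.
SELECT p.partition_number,
       pc.partition_column_id,
       pc.max_inrow_length,   -- bytes this column can consume in-row
       pc.is_dropped          -- 1 = column dropped, space not yet reclaimed
FROM sys.partitions AS p
JOIN sys.system_internals_partition_columns AS pc
       ON pc.partition_id = p.partition_id
WHERE p.object_id = OBJECT_ID(N'dbo.AccountingPeriod');
```

Rows with is_dropped = 1 and large max_inrow_length values would account for "internal overhead" that the current table definition cannot explain.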
May 22, 2015 at 4:04 pm
Well, I created the table under a new name and added the column fine. Tried the original table; still the same problem.
Sooooooooo, I dropped the FK constraint, copied the data to a new holding table, dropped the table, recreated it, reinserted the data, and added the constraint back.
The column then added just fine.
So my issue is resolved, but I need to find out what happened.
Regards, Jeffery Williams
http://www.linkedin.com/in/jwilliamsoh
May 22, 2015 at 4:06 pm
And yes, the math works, but where in the heck is that overhead coming from?
Regards, Jeffery Williams
http://www.linkedin.com/in/jwilliamsoh
May 22, 2015 at 7:50 pm
As a total swag on my part, try running DBCC CLEANTABLE on the table giving you the problems and see if you still get an error.
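For reference, the documented shape of that command: the first argument is the database (0 means the current database), the second is the table, and the optional third is a batch size that limits how many rows are processed per transaction on large tables. A minimal sketch:

```sql
-- Reclaim space left behind by dropped variable-length columns.
-- 0 = current database; 5000 = optional rows-per-transaction batch size.
DBCC CLEANTABLE (0, 'dbo.AccountingPeriod', 5000);
```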
--Jeff Moden
Change is inevitable... Change for the better is not.
May 28, 2015 at 11:24 am
Well, the CLEANTABLE function is for varchar columns and SUPPOSEDLY does not do anything for ints.
Tried it... He shoots, he scores... It worked.
I had considered this, but knowing that it does nothing for ints, I passed. After seeing your post I figured, screw it, I have tried everything else!! And sure enough, it was the magic bullet.
Regards, Jeffery Williams
http://www.linkedin.com/in/jwilliamsoh
May 29, 2015 at 2:26 pm
Very interesting (peculiar) problem, Jeffery; I'm glad you got it fixed! I would have loved to see some DBCC results on that table.
-----
Knowledge is of two kinds. We know a subject ourselves or we know where we can find information upon it. --Samuel Johnson
July 14, 2015 at 11:56 am
I'd like to know why DBCC CLEANTABLE worked and what it actually cleaned up. It seems like there was some metadata kicking around for that table somewhere that caused the issue, and DBCC CLEANTABLE cleared it.
Jack Corbett
Consultant - Straight Path Solutions
Check out these links on how to get faster and more accurate answers:
Forum Etiquette: How to post data/code on a forum to get the best help
Need an Answer? Actually, No ... You Need a Question
July 14, 2015 at 12:59 pm
Jeffery Williams (5/28/2015)
I'd bet $ that at some point in that table's history, it had a large varchar column (e.g. varchar(8000)), and perhaps some other combination of varchar columns that were dropped, and this is the "leftovers" from those operations.
Steve (aka sgmunson) 🙂 🙂 🙂
Rent Servers for Income (picks and shovels strategy)
July 14, 2015 at 1:02 pm
sgmunson (7/14/2015)
That was my thought as well. Methinks I shall try to reproduce this behavior in my home lab tonight.
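A hedged sketch of such a repro (assumed scenario: wide varchar columns are populated and then dropped, leaving their in-row space reserved until CLEANTABLE or a rebuild reclaims it; whether Msg 1701 actually fires may vary by version):

```sql
-- Sketch of a possible repro; not guaranteed to reproduce on every build.
CREATE TABLE dbo.RowSizeRepro
(
    id   int NOT NULL PRIMARY KEY,
    big1 varchar(8000) NULL,
    big2 varchar(8000) NULL
);

-- Populate the wide columns so they actually occupy row space.
INSERT INTO dbo.RowSizeRepro (id, big1, big2)
VALUES (1, REPLICATE('x', 4000), REPLICATE('x', 4000));

-- Dropping a column only marks it dropped in metadata; the space it
-- used in existing rows is not reclaimed immediately.
ALTER TABLE dbo.RowSizeRepro DROP COLUMN big1, big2;

-- Depending on how the dropped columns are still counted, adding a
-- large fixed-length column here may raise Msg 1701 (row size > 8060).
ALTER TABLE dbo.RowSizeRepro ADD filler char(8000) NULL;

-- Reclaim the dropped columns' space, then retry the ALTER.
DBCC CLEANTABLE (0, 'dbo.RowSizeRepro');
```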
July 14, 2015 at 2:29 pm
Jeffery Williams (5/28/2015)
It works on DROPPED variable-length columns, so the fact that there are no variable-length columns in the table now is neither here nor there. If there used to be one or more variable-length columns, now dropped but still occupying space, DBCC CLEANTABLE makes that space unoccupied. Rebuilding the clustered index (if the table has one) will also do that, but it does more than just that and costs more.
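The rebuild alternative mentioned above can be sketched as follows; either statement rewrites every row and, as a side effect, releases space still reserved by dropped columns:

```sql
-- Rebuild the table (works for heaps and clustered tables alike).
ALTER TABLE dbo.AccountingPeriod REBUILD;

-- Or rebuild just the clustered index, which has the same reclaiming
-- side effect but also rebuilds the index structure itself.
ALTER INDEX PK_AccountingPeriod ON dbo.AccountingPeriod REBUILD;
```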
Tom