January 12, 2009 at 4:34 am
Hi,
I am using SQL Server 2005. I have created a 1000-column table with nvarchar(max) columns.
But when I try to insert data (only 10 characters per column), a row size limitation error comes up. It says that we cannot insert more than 8060 bytes into one row.
When I investigated this, I found I can insert 10 characters per column up to about the 360th column; above that, the message appears.
My question is: if MS allows us to create 1024 columns in one table, then why can I not insert such small (10 characters per column) data in one row?
Is there any setting for this, or something else? Please help me solve this problem.
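For reference, here is a minimal sketch that should reproduce the error (table and column names are made up, and 400 columns is enough to cross the limit):
-- Build the wide table and a matching INSERT dynamically.
DECLARE @ddl nvarchar(max), @ins nvarchar(max), @i int
SET @ddl = N''
SET @ins = N''
SET @i = 1
WHILE @i <= 400
BEGIN
    SET @ddl = @ddl + N'c' + CAST(@i AS nvarchar(10)) + N' nvarchar(max),'
    SET @ins = @ins + N'N''0123456789'','
    SET @i = @i + 1
END
-- Strip the trailing commas and run the generated statements.
SET @ddl = N'CREATE TABLE dbo.WideTable (' + LEFT(@ddl, LEN(@ddl) - 1) + N')'
SET @ins = N'INSERT dbo.WideTable VALUES (' + LEFT(@ins, LEN(@ins) - 1) + N')'
EXEC (@ddl)   -- creating the wide table succeeds
EXEC (@ins)   -- the INSERT fails with the 8060-byte row size error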
January 12, 2009 at 4:43 am
It is because you are using the nvarchar(max) datatype, which is effectively a Text datatype (a blob), and these have their own limits.
If you are only inserting 10 characters, then you should use nvarchar(10), or any nvarchar size less than (4000).
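A quick sketch of that suggestion (table and column names made up), though as the later replies note, 1000 such columns would still overflow the row limit:
-- Size the column to the data instead of declaring nvarchar(max).
CREATE TABLE dbo.Example ( code nvarchar(10) NOT NULL )
INSERT dbo.Example ( code ) VALUES ( N'0123456789' )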
January 12, 2009 at 4:59 am
I don't think the row-size limit holds for large objects (such as max, text, image, etc.), as this was one of the new features in SQL 2K5.
Here is a working example of it...
-- Build a 10,000-character value in a VARCHAR(MAX) column; DATALENGTH
-- confirms it comfortably exceeds the 8,060-byte row limit.
DECLARE @strSomeChar VARCHAR(MAX)
SELECT @strSomeChar = '0'

IF OBJECT_ID( 'tempdb..#BigTable' ) IS NOT NULL
    DROP TABLE #BigTable

CREATE TABLE #BigTable( ABigVarchar VARCHAR(MAX) NOT NULL )

INSERT #BigTable( ABigVarchar )
SELECT REPLICATE( @strSomeChar, 10000 )

SELECT ABigVarchar, DATALENGTH( ABigVarchar )
FROM #BigTable
--Ramesh
January 13, 2009 at 12:20 am
Hi,
I tried nvarchar(1000); it is also not working.
January 13, 2009 at 9:52 am
Well, SQL Server 2005 still must use an 8K data page. There are workarounds, such as using large datatypes, but you must realize how SQL Server 2005 handles them: it moves the value to an overflow page and stores a 24-byte pointer in the original row. So with the columns as you defined them originally:
24 bytes * 360 columns ≈ 8,640 bytes > 8K page size.
For your last attempt, 10 characters * 2 bytes * 1000 columns (plus overhead) is still well over the 8K page; you were most likely able to populate about 75% of the columns before you got the error.
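As a quick sanity check of that arithmetic (assuming the 24-byte overflow pointer and the 8,060-byte in-row limit):
-- 360 overflow pointers alone exceed the in-row limit, as do 1000
-- in-row nvarchar values of 10 characters (2 bytes each + 2 overhead).
SELECT 24 * 360            AS OverflowPointerBytes,   -- 8,640  > 8,060
       (2 * 10 + 2) * 1000 AS InRowNvarcharBytes      -- 22,000 > 8,060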
January 13, 2009 at 11:17 am
IIRC, an nvarchar(max) and the old text type store a 16-byte pointer per field; 1000 columns of those would roll past the row size too. Storage is 2 bytes x number of chars + 2, so with 1000 columns the CREATE should bark if you declare anything more than nvarchar(3), and DML will fail if the average field length gets bigger than 3.
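The 2-bytes-per-character claim is easy to verify with DATALENGTH:
-- Unicode storage: DATALENGTH reports bytes, not characters.
DECLARE @s nvarchar(10)
SET @s = N'abcde'
SELECT DATALENGTH(@s) AS BytesStored   -- 10 = 5 characters * 2 bytes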
Any chance to revisit your assumptions and design? Any table with 1000 columns sounds rather denormalized to me.
January 13, 2009 at 11:30 am
Any chance to revisit your assumptions and design? Any table with 1000 columns sounds rather denormalized to me.
You are being NICE ... any table with 1000 columns is almost certainly a HACK! 🙂
* Noel
January 13, 2009 at 11:31 am
Two things I am wondering about.
First, why do you have a row defined with 1000 nvarchar(max) columns?
Second, maybe I just can't find it, but I could not find anything in BOL about how SQL Server 2005 decides whether the data in a column should be stored in-row or on another page.
There has to be a decision process within SQL Server to determine when this occurs.
Also, as others pointed out, if every column were stored off-page, you wouldn't have enough space to hold the pointers for all 1000 columns.
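For what it's worth, BOL does document one knob that influences this decision: sp_tableoption's "large value types out of row" flag (the table name below is just an example):
-- Default (0): a max-type value stays in-row when the whole row fits,
-- otherwise it is pushed off-row. Set to 1, the value is always stored
-- off-row and only a 16-byte pointer remains in the row.
EXEC sp_tableoption N'dbo.BigTable', 'large value types out of row', 1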
January 13, 2009 at 12:39 pm
noeld (1/13/2009)
Any chance to revisit your assumptions and design? Any table with 1000 columns sounds rather denormalized to me.
You are being NICE ... any table with 1000 columns is almost certainly a HACK! 🙂
I try to be gentle. I'd hate to have someone say something disparaging. It is possible someone instructed him to store the data in one table. If so, I'd have pushed back, or used a vertical rather than a horizontal design. I couldn't imagine a proper design that called for a 1000-column table. I may just lack imagination.... 😉