Max row size in SQL Server

  • I need two varchar(8000) columns in a table, but I can't have them because the maximum row size is 8032 bytes. Is there a workaround?

    Thanks

  • A workaround would be vertical partitioning: split the table into two tables and relate them in a one-to-one relationship.

    Frank

    http://www.insidesql.de

    http://www.familienzirkus.de

    --
    Frank Kalis
    Microsoft SQL Server MVP
    Webmaster: http://www.insidesql.org/blogs
    My blog: http://www.insidesql.org/blogs/frankkalis/
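    Frank's vertical-partitioning suggestion could be sketched like this (the table and column names here are invented for illustration):

        -- Split the two wide columns across two tables related 1:1 by key
        create table Documents (
            DocID int not null primary key,
            Body1 varchar(8000) not null )
        go
        create table DocumentsExt (
            DocID int not null primary key
                references Documents (DocID),  -- 1:1 via the shared primary key
            Body2 varchar(8000) not null )
        go
        -- Reassemble with a join when both columns are needed:
        -- select d.DocID, d.Body1, e.Body2
        -- from Documents d join DocumentsExt e on e.DocID = d.DocID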

  • You can actually create a table with varchar columns that together exceed the maximum page size, but you won't be able to fill them beyond 8032 bytes.

    Workaround 1: previous answer (Frank Kalis)

    Workaround 2: use Oracle, DB2, ...

  • Would a text column fit the bill here? There may be performance implications to consider.

  • Hm... after posting my answer I thought that maybe you should review your table design once more. Is there really a need for two columns of that size?

    Frank


  • Frank's idea will work as long as you do not use any function that causes sorting (ORDER BY, GROUP BY), as these generate temp tables for the sorting process, and the temp table is also subject to the limit. I personally dislike the "the two columns just can't exceed the total between them" approach, because eventually they will. Text columns would be the better choice.

  • Another alternative would be to put all of the data in one varchar(8000) column, but add a second int column to distinguish the types of data. Thus, data of type 1 is really column 1 and data of type 2 is really column 2, although it's all stored in the same column.

    But I agree with Frank in that I find it difficult to believe a good design would require two varchar(8000) columns. Perhaps that data in the column itself might be split up and normalized?
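    The one-column-plus-discriminator idea might look like this (the names are invented for illustration):

        -- One wide column plus an int column saying which logical column a row holds
        create table DocumentParts (
            DocID    int not null,
            PartType int not null,             -- 1 = logical column 1, 2 = logical column 2
            Body     varchar(8000) not null,
            primary key (DocID, PartType) )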


  • Hi!

    Just an addition to your discussion: agreeing with Frank, I'd not suggest using VARCHAR columns that big at all. You should simply use the TEXT datatype (physically it is stored differently).

    If you need features like grouping and so on for those columns, I'd strongly suggest reviewing your DB model. For fast searching, with some limitations, full-text indexes are quite a good solution.

    Even if you want to store large binaries (e.g. Word documents), you can full-text index them if you store the file type!

    Best regards,

    chris.
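    For reference, on later SQL Server versions a full-text index can be set up roughly like this (the catalog, table, and index names are assumptions; SQL Server 2000 uses the sp_fulltext_* stored procedures instead):

        create fulltext catalog DocsCatalog
        go
        -- dbo.Docs(Body) is a hypothetical table with a text column, and
        -- PK_Docs is its unique key index, which full-text indexing requires
        create fulltext index on dbo.Docs (Body)
            key index PK_Docs
            on DocsCatalog
        go
        -- select * from dbo.Docs where contains(Body, 'workaround')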

  • Antares, do you mean if I join a bunch of tables together that collectively exceed the row limit & try to sort them it wouldn't work?

    This doesn't seem right. Or do you mean only if I first combined rows (String Concatenation)?

    Data: Easy to spill, hard to clean up!

  • quote:

        Antares, do you mean if I join a bunch of tables together that collectively exceed the row limit & try to sort them it wouldn't work?

        This doesn't seem right. Or do you mean only if I first combined rows (String Concatenation)?

    If you create a join of tables with a column list that produces a result-set row width greater than 8K, and the query has an ORDER BY, it will throw an error, but it seems to happen only when TempDB is used. For some ordering operations a temp table is created to do the sorting, and that temp table is subject to the 8K limit. I don't have a good example on hand to demonstrate.
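    A sketch of the failing pattern being described (untested, and the table and column names are invented): a join whose selected columns together exceed the row limit, plus an ORDER BY that forces a sort through TempDB.

        -- Wide1 and Wide2 together exceed the 8K row limit
        select a.ID, a.Wide1, b.Wide2
        from T1 a join T2 b on b.ID = a.ID
        order by a.ID       -- the sort may build a worktable in tempdb,
                            -- which is subject to the same row-size limit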

  • So it doesn't always create a temp table for ordering - does it depend on the execution plan?

    Thanks - It's good to keep in mind if a query that returns large row results is failing, it could be this issue.


  • As another alternative, why not just define the column as a text column? As long as your stored data size won't grow into the multi-megabyte range (it doesn't sound like it would, based on your usage), you shouldn't have any problem with basic operations. For example:

        use tempdb
        go
        create table test1 (
            C1 int,
            C2 text )
        go
        insert test1 values (1, 'this is a line of text.')
        select * from test1 where c2 like 'this%'
