November 29, 2007 at 1:16 am
Hi Brandie, I've been thinking again /* I really got to stop doing that 😉 */
Using the same example of having a varchar(8000) field and only storing 100 bytes in it, does the rest of the data page remain blank, just in case the field is updated to a value of more than 100 bytes?
If not, then when the field is updated to store more data, does the data get stored on the same data page, thus moving all the other data from that page onto another data page?
November 29, 2007 at 2:43 am
When SQL allocates a new extent to a table, it will put as many rows as possible on each data page, up to the limit given in the freespace values. There is no 'spare' space reserved to allow for growth of any varchar field.
If you update a varchar field and that row can no longer fit on the original page, it will be moved to a new page. A pointer is left on the original page to say where the row can now be found.
This behaviour obviously has a performance penalty. Therefore you should think carefully about defining any column as varchar (or varbinary, etc). For small fields, maybe up to 40 or 60 characters, you should prefer char. For larger fields the implications of having lots of unused disk space (e.g. where only 70 characters of a char(200) are used) outweigh the issues of row relocation and you should use varchar.
The considerations about using varchar(nnn) or varchar(max) have already been covered very well in this thread.
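For anyone who wants to watch this happen, here's a rough sketch (the table and column names are made up, and it assumes a heap, i.e. no clustered index, which is where the forwarding-pointer behaviour applies):

CREATE TABLE dbo.GrowthDemo (id int IDENTITY(1,1), notes varchar(8000));
GO
-- Load a few thousand rows with a short value
INSERT INTO dbo.GrowthDemo (notes)
SELECT REPLICATE('x', 100) FROM sys.all_objects;
GO
-- Grow some of the values so their rows no longer fit on the original pages
UPDATE dbo.GrowthDemo SET notes = REPLICATE('x', 5000) WHERE id % 3 = 0;
GO
-- forwarded_record_count shows how many rows were moved and left a pointer behind
SELECT forwarded_record_count, avg_page_space_used_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.GrowthDemo'), NULL, NULL, 'DETAILED');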
Original author: https://github.com/SQL-FineBuild/Common/wiki/ 1-click install and best practice configuration of SQL Server 2019, 2017, 2016, 2014, 2012, 2008 R2, 2008 and 2005.
When I give food to the poor they call me a saint. When I ask why they are poor they call me a communist - Archbishop Hélder Câmara
November 29, 2007 at 4:07 am
Ditto to Ed's reply with one clarification. @=)
The 2-byte overhead of VarXXXXX datatypes is basically a pointer (of sorts) that tells the data page that this field can grow if necessary. So, when you update your varchar(8000) field from 100 to 200 bytes, the page will squeeze the new data into the row. Now, this might not have immediate page-splitting consequences, but if you update from 100 to 8000, you automatically force a page split because that one column takes up a majority of the page space by itself.
And something further to think about, just because your field is "full" doesn't mean you've gotten rid of your overhead. The overhead for the datatype is still there so it can tell the page to shrink if the value of the field ever changes to something smaller.
Just a note, I don't know anyone who has a table with only 1 column that happens to be a varchar(8000). So when I talk about the auto page split above, I'm assuming you've got other columns in the table also.
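If you want to see that overhead for yourself, one rough way (the table names here are just examples) is to fill a char column and a varchar column with the same number of characters and compare the average record sizes:

CREATE TABLE dbo.FixedDemo (val char(200) NOT NULL);
CREATE TABLE dbo.VarDemo   (val varchar(200) NOT NULL);
GO
INSERT INTO dbo.FixedDemo VALUES (REPLICATE('x', 200));
INSERT INTO dbo.VarDemo   VALUES (REPLICATE('x', 200));   -- a completely "full" varchar
GO
SELECT OBJECT_NAME(object_id) AS table_name, avg_record_size_in_bytes
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'DETAILED')
WHERE object_id IN (OBJECT_ID('dbo.FixedDemo'), OBJECT_ID('dbo.VarDemo'));

The varchar row comes out a few bytes bigger even though both hold exactly 200 characters, which is the per-row overhead described above.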
November 29, 2007 at 7:02 am
Now, for the bad news. Varchar(MAX), NVarchar(Max) and VarBinary(Max) do NOT get stored on the data pages. They are always stored in a different type of page than the rest of your data.
That isn't what the documentation says. The default behavior is to store large value types in the data page if they fit, and use separate out-of-row storage if they don't (or if the "large value types out of row" option is turned on). The previous default behavior for text, ntext, and image was to always use a separate storage area unless the "text in row" option was turned on.
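If you do want the always-off-row behaviour for the MAX types, it has to be turned on per table with sp_tableoption; something like this (the table name is just an example):

-- Push varchar(max)/nvarchar(max)/varbinary(max) values off the data page
EXEC sp_tableoption 'dbo.MyTable', 'large value types out of row', 1;

-- Check the current setting
SELECT name, large_value_types_out_of_row FROM sys.tables WHERE name = 'MyTable';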
November 29, 2007 at 7:30 am
Microsoft seems to have their documentation mixed up. Here's where I got my information:
Books Online - Index Search - pages [SQL Server], about pages
Page Types:
Data: Data rows with all data, except text, ntext, image, nvarchar(max), varchar(max), varbinary(max), and xml data, when text in row is set to ON.
Index: Index entries.
Text/Image: Large object data types (text, ntext, image, nvarchar(max), varchar(max), varbinary(max), and xml data), and variable length columns when the data row exceeds 8 KB (varchar, nvarchar, varbinary, and sql_variant).
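One way to see which of those page types a particular table is actually using (the table name is just an example) is to look at its allocation units:

SELECT index_id, alloc_unit_type_desc, page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.MyTable'), NULL, NULL, 'DETAILED');
-- IN_ROW_DATA       = ordinary data and index pages
-- LOB_DATA          = text/image pages used by text, ntext, image, xml and the (max) types
-- ROW_OVERFLOW_DATA = pages holding variable length columns pushed off a row that exceeds 8 KB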
November 30, 2007 at 7:19 pm
I see VARCHAR(MAX) mainly for stored procedure/function parameters and local variables, not for database fields.
It is possible that a future version of SQL Server would allow page sizes greater than 8 KB. That is the real limit on VARCHAR, and how it differs from TEXT/IMAGE. In that case VARCHAR(MAX) would automatically let you store more text, but would never place it in a separate TEXT/IMAGE page, which can happen when you use the "text in row" option.
They could also make VARCHAR(MAX) the same as the TEXT type. They're halfway there with the data overflow pages.
David Lathrop
DBA
WA Dept of Health
September 22, 2009 at 6:39 am
Hi guys,
My reason for googling varchar(max) and varchar(8000) is the declaration of variables.
Is there any performance effect when using (max) in scalar functions or table-valued functions?
There have been server performance issues since we made some changes to our code.
We use functions extensively. Some of our functions have variables declared as varchar(max).
Could you please help me?
September 22, 2009 at 8:07 am
FYI: It's common courtesy to start a new thread when you have a question that is different from the intent of the original thread. Please don't piggyback.
You do understand the memory space and overhead requirements for the varchar datatypes, yes? If not, look it up in Books Online. There's quite a difference between varchar(8000) and varchar(MAX). And you shouldn't be using either if you can avoid it. Every byte of a variable length datatype that you use where you don't need it adds unnecessary performance hits to all functions, procs and ad hoc queries.
To prove the point, of course, you'll need to adjust your code and test extensively. I recommend using Profiler and examining your query execution plans so you can get a good idea of just how much of a performance hit you're getting.
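As a quick-and-dirty alternative to Profiler, you could also time two otherwise identical functions yourself. A rough sketch (the function names and bodies are just placeholders for your own code):

CREATE FUNCTION dbo.Clean_Max (@s varchar(max)) RETURNS varchar(max)
AS BEGIN RETURN LTRIM(RTRIM(@s)); END;
GO
CREATE FUNCTION dbo.Clean_8000 (@s varchar(8000)) RETURNS varchar(8000)
AS BEGIN RETURN LTRIM(RTRIM(@s)); END;
GO
SET STATISTICS TIME ON;
SELECT MAX(LEN(dbo.Clean_Max(name)))  FROM sys.all_columns;   -- varchar(max) variables
SELECT MAX(LEN(dbo.Clean_8000(name))) FROM sys.all_columns;   -- varchar(8000) variables
SET STATISTICS TIME OFF;

Run both against the same data and compare the CPU and elapsed times; if the (max) declarations are costing you anything, it should show up there, and the execution plans will confirm it.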