June 24, 2011 at 12:55 am
Poor performance, but due to row size (and resultant index depth). All other things being equal (index key, insert pattern), the varchar is more prone to page splits than the char.
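Gail's page-split point can be sketched as follows. This is a hypothetical demo (table and column names are mine, not from the thread): values are inserted short, then widened in place, which forces rows to move and pages to split. Fragmentation reported by sys.dm_db_index_physical_stats is the visible evidence.

```sql
-- Illustrative demo: widening varchar values in place causes page splits.
if object_id('dbo.SplitDemo') is not null drop table dbo.SplitDemo;
create table dbo.SplitDemo (id int primary key clustered, val varchar(100));

-- Fill the table with short (10-character) values; pages pack tightly.
insert into dbo.SplitDemo (id, val)
select top (5000)
       row_number() over (order by (select null)), replicate('a', 10)
from sys.all_columns a cross join sys.all_columns b;

-- Widen every value from 10 to 100 characters; rows no longer fit,
-- so SQL Server must split pages to make room.
update dbo.SplitDemo set val = replicate('a', 100);

-- Fragmentation at the leaf level jumps as a result of the splits.
select avg_fragmentation_in_percent, page_count
from sys.dm_db_index_physical_stats(
         db_id(), object_id('dbo.SplitDemo'), null, null, 'DETAILED')
where index_level = 0;
```

A char(100) column would have reserved the full width up front, so the same update would not have moved any rows.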
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
June 24, 2011 at 5:21 am
The point being made here is that when we store the exact same string data as char in one table and as varchar in the other, there is a difference in both the space reserved and the space used by the data.
As Gail rightly pointed out it would completely depend on the data being inserted in the general scenario.
June 24, 2011 at 7:34 am
Jayanth_Kurup (6/23/2011)
A rule of thumb: the data decides the datatype; performance and storage considerations are secondary. I.e., if the data is fixed length use char, else use varchar; if the data is Unicode, use nchar or nvarchar. Storage and memory usage can be compensated for in other ways. Don't let the least relevant aspect dictate the design of the column.
Performance and storage considerations certainly [font="Arial Black"]aren't[/font] the "least relevant aspect" of column design or it would be OK to define all character based columns as NVARCHAR(MAX). 😉
--Jeff Moden
Change is inevitable... Change for the better is not.
June 24, 2011 at 2:40 pm
GilaMonster (6/23/2011)
ankit.shukla1105 (6/23/2011)
The question was to practically demo how char uses up more space than varchar, so here's the demo. I created two tables, VarChar_Table and Char_Table, as below:
create table Char_Table (name char(100))
create table VarChar_Table (name varchar(100))
Now insert 100 rows into the 'name' column of Char_Table, and insert the SAME 100 values into VarChar_Table.
Now execute the below query:
sp_spaceused 'Char_Table'
go
sp_spaceused 'VarChar_Table'
Notice in the output that, for the same data in both tables, the space reserved for Char_Table is 24 KB while the space reserved for VarChar_Table is 16 KB.
That is the difference we are talking about. Open for corrections. :-D
Completely depends on what data you're inserting. If I insert 100 rows where each string contains exactly 100 characters, the varchar will be bigger. If I insert 100 rows with only 10 characters each, the char will be bigger.
Choose appropriate data types based on the data, with thought given to storage and performance. Schema is hard (near-impossible) to fix later; you do not want to get it wrong.
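The point above can be sketched with a small demo (table names here are illustrative, not from the thread): the same char(100)/varchar(100) pair flips which type is smaller depending on how full the values are relative to the declared length.

```sql
-- Illustrative: which type wins depends entirely on the data.
if object_id('dbo.Char_Demo') is not null drop table dbo.Char_Demo;
if object_id('dbo.VarChar_Demo') is not null drop table dbo.VarChar_Demo;
create table dbo.Char_Demo (name char(100));
create table dbo.VarChar_Demo (name varchar(100));

-- Case 1: 10-character values. char pads every value to 100 bytes;
-- varchar stores 10 bytes of data plus 2 bytes of length overhead.
insert into dbo.Char_Demo (name)
select top (100) replicate('x', 10) from sys.all_columns;
insert into dbo.VarChar_Demo (name)
select top (100) replicate('x', 10) from sys.all_columns;

exec sp_spaceused 'Char_Demo';     -- char should reserve more here
exec sp_spaceused 'VarChar_Demo';

-- Case 2: full 100-character values. varchar now pays the same 100 bytes
-- of data plus its 2-byte overhead, so char comes out slightly ahead.
truncate table dbo.Char_Demo;
truncate table dbo.VarChar_Demo;
insert into dbo.Char_Demo (name)
select top (100) replicate('x', 100) from sys.all_columns;
insert into dbo.VarChar_Demo (name)
select top (100) replicate('x', 100) from sys.all_columns;

exec sp_spaceused 'Char_Demo';
exec sp_spaceused 'VarChar_Demo';  -- varchar should reserve at least as much
```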
What's worth keeping in mind is that the varchar datatype, in order to support variable-length data, adds (basically) 2 bytes of overhead per value per row. It's complicated, but you can find details of how SQL Server stores datatypes within the page internally.
So the value 'XX' consumes more physical storage in a varchar(2) column than it does in a char(2). If you are storing something with a fixed width, such as a status code or a pair of initials, then the char datatype is more storage efficient. In the example below I'm inserting 10,000 rows into two tables; the table with the char(2) column reserves about a third less storage.
set nocount on;

if object_id('Char2_Table') is not null drop table Char2_Table;
create table Char2_Table (initial char(2) null);

if object_id('VarChar2_Table') is not null drop table VarChar2_Table;
create table VarChar2_Table (initial varchar(2) null);

declare @i int; select @i = 1;
while @i <= 10000
begin
    insert into Char2_Table (initial) values ('XX');
    insert into VarChar2_Table (initial) values ('XX');
    select @i = @i + 1;
end;

exec sp_spaceused 'Char2_Table';
exec sp_spaceused 'VarChar2_Table';
name rows reserved data index_size unused
------------- ------ ---------- -------- ------------- ---------
Char2_Table 10000 136 KB 120 KB 8 KB 8 KB
VarChar2_Table 10000 200 KB 160 KB 8 KB 32 KB
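The per-row overhead can also be observed directly. As a sketch (assuming the two tables from the demo above still exist), sys.dm_db_index_physical_stats in DETAILED mode reports the average record size, which should come out a few bytes larger for the varchar table:

```sql
-- Compare actual record sizes for the two demo tables.
-- DETAILED mode is required for the record-size columns to be populated.
select object_name(object_id) as table_name,
       avg_record_size_in_bytes
from sys.dm_db_index_physical_stats(
         db_id(), object_id('Char2_Table'), null, null, 'DETAILED')
union all
select object_name(object_id),
       avg_record_size_in_bytes
from sys.dm_db_index_physical_stats(
         db_id(), object_id('VarChar2_Table'), null, null, 'DETAILED');
```

The difference per record reflects the variable-length bookkeeping (column count and offset entries) that fixed-width char rows do not carry.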
"Do not seek to follow in the footsteps of the wise. Instead, seek what they sought." - Matsuo Basho