May 1, 2008 at 11:24 pm
Easy question.
When using dynamic SQL, does the string have to be declared as NVARCHAR(4000)?
I only ask this as I know NVARCHAR uses double the memory, and yet it seems to be common to use it with dynamic SQL.
Is there a performance impact?
May 2, 2008 at 4:04 am
It doesn't, but it's convenient. If you declare the string too short, your query will be silently truncated before execution.
declare @sql nvarchar(max)
set @sql = N'select * from sys.identity_columns'
exec sp_executesql @sql
go
--this is too short: the assignment silently truncates the statement
declare @sql nvarchar(20)
set @sql = N'select * from sys.identity_columns'
exec sp_executesql @sql
HTH
Piotr
...and your only reply is slàinte mhath
May 2, 2008 at 6:46 am
You're on 2005, right? Although I don't advocate using ad hoc queries, if you have to, simply use NVARCHAR(MAX).
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
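To illustrate the point above: sp_executesql requires its statement parameter to be a Unicode string, which is why NVARCHAR(MAX) is the usual choice on 2005. A minimal sketch (the filter value is arbitrary and just for illustration) that also shows the parameterized form, which avoids building values into the string:

```sql
-- The statement passed to sp_executesql must be NVARCHAR (or NCHAR/NTEXT),
-- so declare it as NVARCHAR(MAX) and prefix the literal with N.
DECLARE @sql NVARCHAR(MAX);
SET @sql = N'SELECT name FROM sys.objects WHERE object_id = @id';

-- Parameters are declared in a second Unicode string and passed by name.
EXEC sp_executesql @sql, N'@id INT', @id = 4;
```

Passing values as parameters rather than concatenating them into @sql also lets the plan be reused and sidesteps injection issues.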
May 2, 2008 at 7:56 am
Hi Paul,
whenever you choose a data type length, make sure you have entered 2^n.
this can also be one of the advantages for performance.
ex:
---
nvarchar(128)
varchar(256)
---
May 2, 2008 at 8:03 am
sqluser (5/2/2008)
Hi Paul, whenever you choose a data type length, make sure you have entered 2^n.
this can also be one of the advantages for performance.
ex:
---
nvarchar(128)
varchar(256)
---
Actually - no. You'd declare the data type lengths the same either way. It's just that each character takes twice as much space to store in an NVARCHAR construct (so - 2*n, not 2^n). BEHIND THE SCENES, NVARCHAR takes up twice as many bytes, but you don't declare the "useable length" any differently.
If you have 128 characters of data to store - you still declare the column as either nvarchar(128) or varchar(128).
----------------------------------------------------------------------------------
Your lack of planning does not constitute an emergency on my part...unless you're my manager...or a director and above...or a really loud-spoken end-user..All right - what was my emergency again?
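The byte-doubling described above is easy to see with DATALENGTH, which returns the storage size in bytes rather than the character count:

```sql
-- Same declared length, same 3-character value - but NVARCHAR
-- stores 2 bytes per character, VARCHAR stores 1.
DECLARE @v VARCHAR(128), @n NVARCHAR(128);
SET @v = 'abc';
SET @n = N'abc';

SELECT DATALENGTH(@v) AS varchar_bytes,   -- 3
       DATALENGTH(@n) AS nvarchar_bytes;  -- 6
```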
May 3, 2008 at 12:39 am
And, I gotta tell you, a 2^n size for CHAR, VARCHAR, and NVARCHAR doesn't make a bit of difference, because the row size limit isn't a power of 2 (i.e., 8060 bytes) and neither is the index key limit (900 bytes).
If you think it does, please produce the Microsoft link that says it does or provide the URL to a site that has a reproducible test that confirms that it does. 😉
--Jeff Moden
Change is inevitable... Change for the better is not.
May 3, 2008 at 12:47 am
Here's the proof...
IF OBJECT_ID('dbo.JBMTest') IS NOT NULL --Guard so the script reruns cleanly
   DROP TABLE dbo.JBMTest
go
 SELECT TOP 1000000
        RowNum   = IDENTITY(INT,1,1),
        SomeCSV1 = CAST('Part01,Part02,Part03,Part04,Part05,Part06,Part07,Part08,Part09,Part10' AS VARCHAR(80)),
        SomeCSV2 = CAST('Part01,Part02,Part03,Part04,Part05,Part06,Part07,Part08,Part09,Part10' AS VARCHAR(128))
   INTO dbo.JBMTest
   FROM Master.dbo.SysColumns t1,
        Master.dbo.SysColumns t2 --Lack of join criteria makes this a CROSS-JOIN
--===== A table is not properly formed unless a Primary Key has been assigned
 ALTER TABLE dbo.JBMTest
   ADD CONSTRAINT PK_JBMTest_RowNum PRIMARY KEY CLUSTERED (RowNum)

SET STATISTICS TIME ON

UPDATE dbo.JBMTest
   SET SomeCSV1 = REPLACE(SomeCSV1,'Part05','PartXX')

UPDATE dbo.JBMTest
   SET SomeCSV2 = REPLACE(SomeCSV2,'Part05','PartXX')

SET STATISTICS TIME OFF
--Jeff Moden
Change is inevitable... Change for the better is not.