August 29, 2018 at 1:57 pm
Hello Experts,
I have a requirement where I need to convert all varchar datatypes in all tables of a database to nvarchar.
Here is the common syntax for that:
ALTER TABLE [tbl_name] ALTER COLUMN [Column_name] nvarchar (20)
But some of the columns are NULL and some are NOT NULL, which I want to specify in the mass-update dynamic script I wrote below, and it's not working (see script):
Select 'ALTER TABLE [' + b.name + '] ALTER COLUMN [' + a.name + '] nvarchar (' + cast(a.max_length as varchar(100)) + '), CASE WHEN [' + cast(a.is_nullable as varchar(100)) + '] = 0 THEN ''NOT NULL'' ELSE ''NULL'' END' -- b.name,c.name,a.max_length,*
FROM sys.columns a JOIN sys.objects b ON a.object_id =b.object_id
JOIN sys.types c ON a.system_type_id = c.system_type_id
Where b.name like 'abc%' and c.name = 'varchar'
order by b.name
Desired Result: NOT NULL or NULL should be there at the end.
ALTER TABLE [tbl_name] ALTER COLUMN [Column_name] nvarchar (20) NOT NULL
Thanks a lot.
August 29, 2018 at 2:27 pm
dallas13 - Wednesday, August 29, 2018 1:57 PM
You need to lookup the definition of the sys views you've used. You'll find your answer there.
--Jeff Moden
Change is inevitable... Change for the better is not.
August 29, 2018 at 3:25 pm
It looks like you just lost track of which bits are supposed to be in the dynamic SQL (hence you're writing out the CASE expression as a literal instead of evaluating it):
Select 'ALTER TABLE [' + b.name + '] ALTER COLUMN [' + a.name + '] nvarchar (' + cast(a.max_length as varchar(100)) + ') ' + CASE WHEN a.is_nullable = 0 THEN 'NOT NULL' ELSE 'NULL' END -- b.name,c.name,a.max_length,*
FROM sys.columns a JOIN sys.objects b ON a.object_id =b.object_id
JOIN sys.types c ON a.system_type_id = c.system_type_id
Where b.name like 'abc%' and c.name = 'varchar'
order by b.name
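If you want the generated statements executed rather than copied and pasted, one option (a sketch, using the same catalog views and the same 'abc%' table filter as above) is to concatenate the rows into a single batch and run it with sp_executesql:

```sql
-- Sketch: build all ALTER statements into one batch.
-- Review the PRINTed output before uncommenting the EXEC line
-- on a production database.
DECLARE @sql NVARCHAR(MAX) = N'';

SELECT @sql = @sql
    + N'ALTER TABLE ' + QUOTENAME(b.name)
    + N' ALTER COLUMN ' + QUOTENAME(a.name)
    + N' nvarchar(' + CAST(a.max_length AS NVARCHAR(10)) + N') '
    + CASE WHEN a.is_nullable = 0 THEN N'NOT NULL' ELSE N'NULL' END
    + N';' + CHAR(13) + CHAR(10)
FROM sys.columns a
JOIN sys.objects b ON a.object_id = b.object_id
JOIN sys.types c ON a.system_type_id = c.system_type_id
WHERE b.name LIKE 'abc%' AND c.name = 'varchar';

PRINT @sql;                      -- inspect the batch first
-- EXEC sys.sp_executesql @sql;  -- then run it
```

Note that variable concatenation in a SELECT like this is a common idiom but its row order isn't guaranteed; for independent ALTER statements the order doesn't matter.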
August 29, 2018 at 3:55 pm
Does this do what you expect?
DECLARE @AlterSQL NVARCHAR(MAX) = 'ALTER TABLE ~tablename~ ALTER COLUMN ~columnname~ NVARCHAR(~maxlength~) ~nullstate~;'
, @SQLCmd NVARCHAR(MAX)
, @SQLParm NVARCHAR(MAX)
, @TableName NVARCHAR(128)
, @ColumnName NVARCHAR(128)
, @ColumnLen INT
, @NullState NVARCHAR(10);
DECLARE TableCursor CURSOR LOCAL FAST_FORWARD FORWARD_ONLY
FOR
SELECT
[tab].[name] AS TableName
, [col].[name] AS ColumnName
--, [typ].[name] AS DataType
, [col].[max_length] AS MaxLen
, CASE WHEN [col].[is_nullable] = 0 THEN N'NOT ' ELSE N'' END + N'NULL' AS NullState -- is_nullable = 0 means the column disallows NULLs
FROM
[sys].[tables] AS [tab]
INNER JOIN [sys].[columns] AS [col]
ON [col].[object_id] = [tab].[object_id]
INNER JOIN [sys].[types] AS [typ]
ON [typ].[system_type_id] = [col].[system_type_id]
AND [typ].[user_type_id] = [col].[user_type_id]
WHERE
[typ].[name] = N'VARCHAR';
OPEN [TableCursor];
WHILE 1 = 1
BEGIN
FETCH NEXT FROM [TableCursor]
INTO @TableName, @ColumnName, @ColumnLen, @NullState;
IF @@FETCH_STATUS <> 0
BREAK;
SET @SQLCmd = REPLACE(REPLACE(REPLACE(REPLACE(@AlterSQL,N'~tablename~',QUOTENAME(@TableName)),N'~columnname~',QUOTENAME(@ColumnName)),N'~maxlength~',CAST(@ColumnLen AS NVARCHAR(10))),N'~nullstate~',@NullState);
PRINT @SQLCmd;
END
CLOSE [TableCursor];
DEALLOCATE [TableCursor];
GO
August 29, 2018 at 4:17 pm
The 900-byte limitation on index key length comes into play too. Each of those modified varchar columns now consumes twice the space: nvarchar(500) = 1,000 bytes. You can define the index key to be longer than 900 bytes using varchar and nvarchar, but the actual data of any index row cannot be longer than 900 bytes. Indexes lose their usefulness at that size (unless enforcing a UNIQUE constraint that could likely use a little normalization), but they get thrown into databases anyway.
Also, unless row-level compression is already in place, all of the data stored in those varchar columns will be converted to nvarchar when you update the column definition. That will take time, blow up your data pages, and consume a lot of transaction log. Be sure to have log space and an approved maintenance window for the bigger tables. If SQL 2017 works like prior editions, the internal conversion steps taken are:
0. Begin transaction
1. Add the nvarchar definition for the column as a new hidden column in addition to the existing varchar column
2. Copy and convert each value to the new column, usually splitting pages like mad due to all the extra data being placed in them
3. Mark the varchar column as hidden in the table definition, mark the nvarchar column as visible
4. Commit
Note there isn't a step in there that cleans up the stale data from the old column. That will be cleaned on your next clustered index rebuild.
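To see in advance which index keys would be pushed past that 900-byte limit, a sketch along these lines (applying the doubling rule above to the key columns in sys.index_columns; adjust the filters for your schema) can help:

```sql
-- Sketch: estimate which index keys would exceed 900 bytes once
-- every varchar key column doubles in size as nvarchar.
-- max_length in sys.columns is already in bytes, so existing
-- nvarchar columns are counted as-is.
SELECT
    OBJECT_NAME(i.object_id) AS table_name,
    i.name                   AS index_name,
    SUM(CASE WHEN t.name = 'varchar'
             THEN c.max_length * 2   -- doubles after conversion
             ELSE c.max_length END)  AS projected_key_bytes
FROM sys.indexes i
JOIN sys.index_columns ic
    ON ic.object_id = i.object_id AND ic.index_id = i.index_id
JOIN sys.columns c
    ON c.object_id = ic.object_id AND c.column_id = ic.column_id
JOIN sys.types t
    ON t.user_type_id = c.user_type_id
WHERE ic.is_included_column = 0      -- key columns only
GROUP BY i.object_id, i.name
HAVING SUM(CASE WHEN t.name = 'varchar'
                THEN c.max_length * 2
                ELSE c.max_length END) > 900;
```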
-Eddie
Eddie Wuerch
MCM: SQL
August 29, 2018 at 4:33 pm
Eddie Wuerch - Wednesday, August 29, 2018 4:17 PM
Also beware: nvarchar takes twice the space as varchar (two bytes per character for nvarchar, one byte per character for varchar). As a result, the widest nvarchar column definition allowed, other than nvarchar(max), is nvarchar(4000). Varchar columns with a width greater than 4000 (except for varchar(max)) cannot be directly converted to nvarchar. For example, a varchar(5000) column cannot be converted to nvarchar(5000).
Definitely. Had the same issue where I am working. I had to fight to keep some of the columns varchar because they would never hold unicode data. Also, the code I provided doesn't test for the issue where a varchar column exceeds 4000 bytes.
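As a rough sketch of such a test, a query like this would list the columns that cannot be converted one-for-one (flagging only columns declared wider than 4000, since varchar(max), reported as max_length = -1 in sys.columns, can still go to nvarchar(max)):

```sql
-- Sketch: find varchar columns too wide for a direct nvarchar
-- conversion, i.e. declared between 4001 and 8000 characters.
SELECT
    OBJECT_NAME(c.object_id) AS table_name,
    c.name                   AS column_name,
    c.max_length
FROM sys.columns c
JOIN sys.types t
    ON t.user_type_id = c.user_type_id
WHERE t.name = 'varchar'
  AND c.max_length > 4000;   -- excludes varchar(max), which is -1
```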
August 30, 2018 at 7:00 am
Thanks a lot Everyone.
Special Thanks to Eddie. It was great to know before implementing this.