November 10, 2020 at 10:49 pm
Previously we had some issues with deadlocks caused by transactions that complete so quickly that the current timestamp column is not precise enough to prevent a collision. To prevent this happening again, we need to change the data type used by the timestamp column to something more precise, both for existing tables and going forward.
How do I programmatically change the column from the datetime to the datetime2 data type for every table in the database? I currently have the following:
declare @schema datetime2
declare @table datetime2
declare @col datetime2
declare @dtype datetime2
declare @sql datetime
declare maxcols cursor for
select
c.TABLE_SCHEMA,
c.TABLE_NAME,
c.COLUMN_NAME,
c.DATA_TYPE
from
INFORMATION_SCHEMA.COLUMNS c
inner join INFORMATION_SCHEMA.TABLES t on
c.TABLE_CATALOG = t.TABLE_CATALOG
and c.TABLE_SCHEMA = t.TABLE_SCHEMA
and c.TABLE_NAME = t.TABLE_NAME
and t.TABLE_TYPE = 'BASE TABLE'
where
c.DATA_TYPE like '%datetime%'
and c.CHARACTER_MAXIMUM_LENGTH = -1
open maxcols
fetch next from maxcols into @schema, @table, @col, @dtype
while @@FETCH_STATUS = 0
begin
set @sql = 'alter table [' + @schema + '].[' + @table +
'] alter column [' + @col + '] ' + @dtype + 'datetime2'
exec sp_executesql @sql
fetch next from maxcols into @schema, @table, @col, @dtype
end
close maxcols
deallocate maxcols
But I get the error "The data types varchar and datetime2 are incompatible in the add operator."
Kris
November 10, 2020 at 11:06 pm
This should generate alter statements without using a cursor.
You will probably need to filter out system tables. Print to see the results before executing. If you can do it reliably in the query, execute it. Otherwise, run the alter statements manually.
DECLARE @sql NVARCHAR(MAX) = N'';

SELECT @sql = @sql
    + N'alter table [' + t.TABLE_SCHEMA + N'].[' + t.TABLE_NAME
    + N'] alter column [' + c.COLUMN_NAME + N'] datetime2 '
    + CASE c.IS_NULLABLE WHEN 'NO' THEN N'NOT ' ELSE N'' END
    + N'NULL;' + NCHAR(13) + NCHAR(10)
FROM INFORMATION_SCHEMA.COLUMNS c
INNER JOIN INFORMATION_SCHEMA.TABLES t
    ON  c.TABLE_CATALOG = t.TABLE_CATALOG
    AND c.TABLE_SCHEMA  = t.TABLE_SCHEMA
    AND c.TABLE_NAME    = t.TABLE_NAME
WHERE t.TABLE_TYPE = 'BASE TABLE'
  AND c.DATA_TYPE  = 'datetime';

PRINT @sql;
EXEC sp_executesql @sql;
If there are indexes that reference this column, you will have to drop them before altering, and then recreate the indexes afterward.
If there are foreign keys that reference this column, you will have to drop them, change the data type of the referencing column as well, and then recreate the foreign key(s) after altering.
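Not part of the script above, but as a rough sketch (reading only the standard catalog views), something like this would list the indexes and foreign keys that involve datetime columns, so you know what has to be dropped and recreated:
-- Indexes (including PK/unique constraints) that include a datetime column
SELECT s.name AS schema_name, t.name AS table_name, i.name AS index_name, c.name AS column_name
FROM sys.indexes i
JOIN sys.index_columns ic ON ic.object_id = i.object_id AND ic.index_id = i.index_id
JOIN sys.columns c ON c.object_id = ic.object_id AND c.column_id = ic.column_id
JOIN sys.types ty ON ty.user_type_id = c.user_type_id
JOIN sys.tables t ON t.object_id = i.object_id
JOIN sys.schemas s ON s.schema_id = t.schema_id
WHERE ty.name = 'datetime';

-- Foreign keys whose referencing column is datetime
SELECT fk.name AS foreign_key_name,
       OBJECT_NAME(fkc.parent_object_id) AS referencing_table,
       COL_NAME(fkc.parent_object_id, fkc.parent_column_id) AS referencing_column,
       OBJECT_NAME(fkc.referenced_object_id) AS referenced_table,
       COL_NAME(fkc.referenced_object_id, fkc.referenced_column_id) AS referenced_column
FROM sys.foreign_keys fk
JOIN sys.foreign_key_columns fkc ON fkc.constraint_object_id = fk.object_id
JOIN sys.columns c ON c.object_id = fkc.parent_object_id AND c.column_id = fkc.parent_column_id
JOIN sys.types ty ON ty.user_type_id = c.user_type_id
WHERE ty.name = 'datetime';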
Be aware that you will be locking the table and blocking transactions (or vice versa) while the table is altered.
You used the following in your where clause -- I think you want equality, not LIKE. You don't need to alter columns that are already datetime2, and presumably you wouldn't want to change datetimeoffset to datetime2. If you do want to change datetimeoffset as well, use IN ('datetime', 'datetimeoffset').
c.DATA_TYPE like '%datetime%'
And the following shouldn't be necessary having filtered for datetime datatype:
and c.CHARACTER_MAXIMUM_LENGTH = -1
(Updated after seeing your example)
November 11, 2020 at 5:13 am
I need to change every table with the datetime data type to datetime2, not just one or two tables, so that script won't work, and I can't drop any tables.
Kris
November 11, 2020 at 10:27 am
You shouldn't declare your variables as datetime2 -- you should declare them as strings.
TableName, ColumnName, DataType, etc. from the information schema are strings, not datetime2 values, so you cannot put the value 'TableABC' into a datetime2 variable, hence the data type conversion error you are getting.
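For example (a minimal sketch of just the declarations; sysname is equivalent to nvarchar(128)):
declare @schema sysname
declare @table sysname
declare @col sysname
declare @dtype sysname
declare @sql nvarchar(max)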
November 11, 2020 at 3:07 pm
I think you need something like this (below). As mentioned by Ratbak, you'll need to drop indexes and foreign keys before making the change, and you should definitely run it with the "exec sp_executesql" statement replaced by a "print" statement first to make sure you're only changing what you have to. Also consider the precision you'll need on the datetime2: the default is 7, but if you only need 4 to prevent collisions, you can save a byte per affected row (more if the column is part of an index) -- there's a quick check of this after the script.
declare @schema nvarchar(128)
declare @table nvarchar(128)
declare @col nvarchar(128)
declare @nullable nvarchar(3)
declare @sql nvarchar(max) -- the generated alter statement can be longer than 128 characters
declare maxcols cursor for
    select c.TABLE_SCHEMA, c.TABLE_NAME, c.COLUMN_NAME, c.IS_NULLABLE
    from INFORMATION_SCHEMA.COLUMNS c
    inner join INFORMATION_SCHEMA.TABLES t
        on c.TABLE_CATALOG = t.TABLE_CATALOG
        and c.TABLE_SCHEMA = t.TABLE_SCHEMA
        and c.TABLE_NAME = t.TABLE_NAME
        and t.TABLE_TYPE = 'BASE TABLE'
    where c.DATA_TYPE = 'datetime' -- No need to change the column if it is already datetime2
open maxcols
fetch next from maxcols into @schema, @table, @col, @nullable
while @@FETCH_STATUS = 0
begin
    -- Preserve nullability: ALTER COLUMN makes the column nullable if NULL/NOT NULL is omitted
    set @sql = 'alter table [' + @schema + '].[' + @table + '] alter column [' + @col + '] datetime2 '
        + case @nullable when 'NO' then 'NOT NULL' else 'NULL' end
    exec sp_executesql @sql -- swap in "print @sql" on a first pass to review what will run
    fetch next from maxcols into @schema, @table, @col, @nullable
end
close maxcols
deallocate maxcols
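If you want to verify that storage-size difference, a quick check like this (just an illustration) shows the bytes used per value at each precision:
select datalength(cast(sysdatetime() as datetime2(7))) as bytes_precision_7, -- 8 bytes
       datalength(cast(sysdatetime() as datetime2(4))) as bytes_precision_4  -- 7 bytes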
November 11, 2020 at 9:24 pm
You are also going to need to change every stored procedure and function that declares a variable or parameter as datetime. With that said - I don't think you should do this across every table in your database, nor perform this change all at once across the whole database.
Instead, identify a table that has an issue - plan the change for that table and any procedures, functions, views and indexes that utilize that table and column and update that group of objects. Repeat as needed for any other tables where you have identified an issue with the definition being datetime.
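As a rough sketch of that planning step (dbo.YourTable is just a placeholder name, not something from your database), you could list the objects that reference a given table, plus any procedures or functions that take datetime parameters:
-- Procedures, functions, views and triggers that reference a specific table
select referencing_schema_name, referencing_entity_name
from sys.dm_sql_referencing_entities('dbo.YourTable', 'OBJECT');

-- Procedures and functions with parameters declared as datetime
select object_schema_name(p.object_id) as schema_name,
       object_name(p.object_id) as routine_name,
       p.name as parameter_name
from sys.parameters p
join sys.types ty on ty.user_type_id = p.user_type_id
where ty.name = 'datetime';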
You can then look at related tables and see if changing those columns are required. In general, this shouldn't be needed as you would not normally have tables related on a datetime column because those columns would not store the exact same time.
There is no reason to change a table from datetime to datetime2 unless there is a requirement for a higher precision of time.
And finally, this change will require updating every row in the table - but if you used datetime2(3) it might be performed as an in-place update. If it cannot be performed as an in-place update, then you will probably see a large number of page splits and fragmentation, which will require the indexes to be rebuilt after the change is completed.
It might be faster and more efficient to disable all non-clustered indexes, perform the change - and rebuild all indexes.
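A sketch of that index handling for one table (again, dbo.YourTable is a placeholder; review the generated statements before running anything):
-- Check fragmentation after the change
select object_name(ips.object_id) as table_name,
       i.name as index_name,
       ips.avg_fragmentation_in_percent
from sys.dm_db_index_physical_stats(db_id(), object_id('dbo.YourTable'), null, null, 'LIMITED') ips
join sys.indexes i on i.object_id = ips.object_id and i.index_id = ips.index_id;

-- Generate statements to disable the nonclustered indexes (leave the clustered index enabled)
select 'alter index [' + i.name + '] on [' + s.name + '].[' + t.name + '] disable;'
from sys.indexes i
join sys.tables t on t.object_id = i.object_id
join sys.schemas s on s.schema_id = t.schema_id
where s.name = 'dbo' and t.name = 'YourTable'
  and i.type_desc = 'NONCLUSTERED';

-- After the alter column, rebuild everything (rebuilding also re-enables disabled indexes)
-- alter index all on [dbo].[YourTable] rebuild;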
Jeffrey Williams
“We are all faced with a series of great opportunities brilliantly disguised as impossible situations.”
― Charles R. Swindoll
How to post questions to get better answers faster
Managing Transaction Logs
November 12, 2020 at 11:21 pm
I get an error similar to
Msg 5074, Level 16, State 1, Line 1
The object 'SAT_DV' is dependent on column 'DV_DATE_TIME'.
Msg 5074, Level 16, State 1, Line 1
The object 'PK_SAT_NAME' is dependent on column 'DV_DATE_TIME'.
Msg 4922, Level 16, State 9, Line 1
ALTER TABLE ALTER COLUMN DV_DATE_TIME failed because one or more objects access this column.
That column is in almost every table. How do I know which table it's talking about and how do I exclude it?
Kris
November 13, 2020 at 4:36 am
You can alter the main while loop and wrap the execution step in a TRY...CATCH block, so that when a statement fails the CATCH block runs and you can see the current table details at that point.
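For example, the middle of the loop in the earlier script could be wrapped like this (a sketch only; it assumes the same @schema, @table, @col and @sql variables):
begin try
    exec sp_executesql @sql
end try
begin catch
    -- report which column failed and why, then carry on (or add break here to stop the loop)
    print 'Failed on [' + @schema + '].[' + @table + '].[' + @col + ']: ' + ERROR_MESSAGE()
end catch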