September 5, 2014 at 8:31 am
Hi, I'm importing dates into a table with BULK INSERT and
SET DATEFORMAT DMY
It works with dates such as "14/01/2009".
However, sometimes I get dates in the format
"Fri 14/01/2009"
What is the best way to convert these?
The only approach I can think of is putting them in a staging table with all date fields as varchars, then updating those varchar fields with
LTRIM(REPLACE(dtField, 'Mon', ''))
LTRIM(REPLACE(dtField, 'Tues', ''))
for all 7 days of the week and hoping no foreign dates come in (very unlikely).
Is there anything better or more robust?
September 5, 2014 at 9:05 am
Try this:
set dateformat dmy;
declare @Date varchar(max) = 'Fri 14/01/2009';
-- Keep the last 10 characters (the dd/mm/yyyy portion).
declare @processed date = reverse(left(reverse(@Date), 10));
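A caveat worth noting (my addition, not part of the original reply): the REVERSE/LEFT trick assumes the date always occupies exactly the last 10 characters, i.e. zero-padded dd/mm/yyyy:

```sql
SET DATEFORMAT dmy;
-- Grab the last 10 characters: works for any prefix as long as the
-- date portion is always zero-padded dd/mm/yyyy.
DECLARE @Date varchar(max) = 'Fri 14/01/2009';
SELECT CAST(REVERSE(LEFT(REVERSE(@Date), 10)) AS date);
-- A mixed-width value such as 'Fri 14/1/9' is itself only 10 characters,
-- so the same expression keeps the day name and the CAST fails.
```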
How to post a question to get the most help http://www.sqlservercentral.com/articles/Best+Practices/61537
September 5, 2014 at 9:05 am
terry999 (9/5/2014)
Hi I'm importing dates into a table with Bulk insert
SET DATEFORMAT DMY
it works with dates e.g. "14/01/2009"
However sometimes I get dates in the format
"Fri 14/01/2009"
What is the best way to convert these.
I can only think of putting them in a staging table with all date fields as varchars
Then updating these varchar fields
LTRIM(REPLACE(dtField, 'Mon', ''))
LTRIM(REPLACE(dtField, 'Tues', ''))
for all 7 days of the week and hope no foreign dates come in (very unlikely)
Anything better more robust?
To be honest, I almost never bulk load data directly into the final table. I usually load into a staging table to do validations and data cleansing to avoid the type of unexpected problems that you've experienced. The only time I may "go direct" is when loading large reference tables from guaranteed sources that have been previously negotiated with the data provider. Even then, I'll sometimes still go the staging table route just because no one is perfect and mistakes can and do happen.
To wit, for Bulk Imports, I consider staging table/validation route to be the more "robust" solution.
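A minimal sketch of the staging route described above (my addition; the table and file names are made up, and TRY_CONVERT requires SQL Server 2012 or later):

```sql
SET DATEFORMAT dmy;

-- Hypothetical staging table: everything lands as varchar first.
CREATE TABLE dbo.StageImport
(
    RowID   int IDENTITY(1,1) PRIMARY KEY,
    dtField varchar(30) NULL
);

-- BULK INSERT dbo.StageImport FROM 'C:\import\data.csv' ...;

-- Validate before moving rows to the final table:
-- TRY_CONVERT returns NULL instead of raising an error.
SELECT RowID, dtField
FROM   dbo.StageImport
WHERE  dtField IS NOT NULL
  AND  TRY_CONVERT(date, dtField) IS NULL;  -- these rows need cleansing
```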
--Jeff Moden
Change is inevitable... Change for the better is not.
September 5, 2014 at 9:07 am
Are you using a format file? How would you handle invalid dates?
A staging table is a good idea for any data load process. That way you don't mess with already validated data. You could use a computed column to validate the input based on the string date.
I'd need more information to give you better advice.
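One way to read the computed-column suggestion (my sketch; the names are hypothetical). Using style 103 (dd/mm/yyyy) keeps the conversion independent of the session's DATEFORMAT setting; the column can't be persisted or indexed unless the expression is deterministic:

```sql
-- Hypothetical staging table: GoodDate is NULL whenever the
-- stripped string still isn't a valid dd/mm/yyyy date.
CREATE TABLE dbo.StageDates
(
    StringDate varchar(30) NULL,
    GoodDate AS TRY_CONVERT(date,
                    SUBSTRING(StringDate,
                              PATINDEX('%[0-9]%', StringDate + '0'), 30),
                    103)
);

-- Bad rows are then easy to list:
-- SELECT StringDate FROM dbo.StageDates
-- WHERE StringDate IS NOT NULL AND GoodDate IS NULL;
```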
September 5, 2014 at 9:12 am
You could skip all non-numeric chars at the front of the string, viz:
SELECT
date_input,
SUBSTRING(date_input, PATINDEX('%[0-9]%', date_input + '0'), 30) AS adjusted_date_input
FROM (
SELECT 'Fri 14/01/2009' AS date_input UNION ALL
SELECT '23/12/2008' UNION ALL
SELECT 'Wednesday06/03/2014'
) AS test_data
SQL DBA,SQL Server MVP(07, 08, 09) "It's a dog-eat-dog world, and I'm wearing Milk-Bone underwear." "Norm", on "Cheers". Also from "Cheers", from "Carla": "You need to know 3 things about Tortelli men: Tortelli men draw women like flies; Tortelli men treat women like flies; Tortelli men's brains are in their flies".
September 5, 2014 at 9:15 am
Point taken. Up until now, all CSVs have been generated by running queries on remote SQL Servers.
The stuff I'm dealing with now, I'm not actually sure how it is produced, so it will go through staging. That way I will have a complete, easy-to-read list of what failed to import.
September 5, 2014 at 9:21 am
Thanks
My function will look something like this:
set dateformat dmy;
declare @Date varchar(max) = ' Fri 14/01/2009';
set @Date = ltrim(@Date); -- trim leading spaces first, or charindex finds the wrong space
declare @processed date;
declare @iPos int = charindex(' ', @Date, 1);
SET @processed = RIGHT(@Date, LEN(@Date) - @iPos);
select @processed;
My fault, I didn't check before. I've since looked at more data, and I have to deal with:
'Fri 14/01/2009'
'14/1/2009'
'14/1/9'
September 5, 2014 at 9:37 am
Nice one.
Why date_input + '0'?
I think it works without concatenating '0' at the end.
September 5, 2014 at 9:40 am
terry999 (9/5/2014)
Nice one..
Why date_input + '0'
I think it works without concatenating '0' at the end?
That's in case you get an entry like:
'Saturday'
Without the " + '0'", the entire string gets returned; with it, an empty string, which is more valid in this context.
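To make the sentinel concrete (my addition): PATINDEX returns 0 when no digit exists, and SUBSTRING starting at position 0 effectively starts at 1, so the whole string comes back; appending '0' forces a match one position past the end of the real string instead:

```sql
SELECT
    PATINDEX('%[0-9]%', 'Saturday')       AS no_sentinel,    -- 0 (no digit found)
    PATINDEX('%[0-9]%', 'Saturday' + '0') AS with_sentinel,  -- 9 (matches the appended '0')
    SUBSTRING('Saturday', 0, 30)          AS without_zero,   -- 'Saturday' (whole string)
    SUBSTRING('Saturday', 9, 30)          AS with_zero;      -- '' (empty string)
```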
September 5, 2014 at 9:50 am
terry999 (9/5/2014)
Thanks
My func will look like something like this
Please tell me that you're not creating a scalar function for this. That'll cause performance problems.
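For anyone following along, the usual fix is an inline table-valued function rather than a scalar one: the optimizer expands an iTVF into the calling plan instead of invoking it row by row. A sketch (the function name is made up):

```sql
-- Inline TVF: no BEGIN/END body, just a single RETURN(SELECT ...),
-- so the optimizer can inline it into the calling query.
CREATE FUNCTION dbo.itvfStripDayName (@s varchar(30))
RETURNS TABLE WITH SCHEMABINDING
AS
RETURN
(
    SELECT SUBSTRING(@s, PATINDEX('%[0-9]%', @s + '0'), 30) AS DateString
);
GO
-- Used with CROSS APPLY rather than called once per row:
-- SELECT f.DateString
-- FROM dbo.SomeTable t
-- CROSS APPLY dbo.itvfStripDayName(t.dtField) f;
```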
September 5, 2014 at 12:26 pm
Nice function, Scott, but I would still go with a staging table...always.
OP: Everyone makes mistakes and some people actually still do things manually in this day and age. Specific to this instance, there are so many ways to screw with dates that it almost makes my head spin. There may be a case where I didn't feel it necessary to use a staging table, but I haven't found it yet.
September 5, 2014 at 2:39 pm
Okay, I had to play with this for a bit. Try this out in an empty sandbox database, but before you do you may need to create the DelimitedSplit8K function which I have also posted (be sure to read all the comments in the code).
set dateformat dmy;
if object_id('tempdb..#TestData') is not null drop table #TestData;
create table #TestData(tdid int identity(1,1), StringDate varchar(30));
insert into #TestData(StringDate)
SELECT 'Fri 14/01/2009' AS date_input UNION ALL
SELECT '23/12/2008' UNION ALL
SELECT 'Wednesday06/03/2014' UNION ALL
SELECT '14/1/9' UNION ALL
SELECT 'Wednesday';
go
select * from #TestData;
go
if object_id('dbo.fnGetDateStr') is not null drop function dbo.fnGetDateStr;
go
create function dbo.fnGetDateStr(@pDateString varchar(30))
returns table with schemabinding
as
return(select substring(@pDateString,patindex('%[0-9]%',@pDateString + '0'),len(@pDateString)) DateString);
go
select
*
from
#TestData
cross apply dbo.fnGetDateStr(StringDate) gts
cross apply (select stuff((select right('0' + nullif(ds8k.Item,''), case ca1.dpart when 'y' then 4 else 2 end)
from
dbo.DelimitedSplit8K(gts.DateString,'/') ds8k
inner join (
select
substring(er.date_format,ca.n,1) dpart,
ca.n
from
sys.dm_exec_requests er
cross apply (select n from (values(1),(2),(3))dt(n)) ca
where
er.session_id = @@SPID) ca1
on (ds8k.ItemNumber = ca1.n)
order by
ca1.dpart desc
for xml path(''),TYPE).value('.','varchar(8)'),1,0,'') StringDate) ca;
go
with BaseData as (
select
td.tdid,
td.StringDate OriginalStringDate,
gts.DateString,
ca.StringDate
from
#TestData td
cross apply dbo.fnGetDateStr(StringDate) gts
cross apply (select stuff((select right('0' + nullif(ds8k.Item,''), case ca1.dpart when 'y' then 4 else 2 end)
from
dbo.DelimitedSplit8K(gts.DateString,'/') ds8k
inner join (
select
substring(er.date_format,ca.n,1) dpart,
ca.n
from
sys.dm_exec_requests er
cross apply (select n from (values(1),(2),(3))dt(n)) ca
where
er.session_id = @@SPID) ca1
on (ds8k.ItemNumber = ca1.n)
order by
ca1.dpart desc
for xml path(''),TYPE).value('.','varchar(8)'),1,0,'') StringDate) ca
)
select
*,
cast(StringDate as date) DateValue
from
BaseData;
go
DelimitedSplit8K function:
USE [Sandbox]
GO
/****** Object: UserDefinedFunction [dbo].[DelimitedSplit8K] Script Date: 9/6/2014 00:02:18 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE FUNCTION [dbo].[DelimitedSplit8K]
/**********************************************************************************************************************
Purpose:
Split a given string at a given delimiter and return a list of the split elements (items).
Notes:
1. Leading and trailing delimiters are treated as if an empty string element were present.
2. Consecutive delimiters are treated as if an empty string element were present between them.
3. Except when spaces are used as a delimiter, all spaces present in each element are preserved.
Returns:
iTVF containing the following:
ItemNumber = Element position of Item as a BIGINT (not converted to INT to eliminate a CAST)
Item = Element value as a VARCHAR(8000)
Statistics on this function may be found at the following URL:
http://www.sqlservercentral.com/Forums/Topic1101315-203-4.aspx
CROSS APPLY Usage Examples and Tests:
--=====================================================================================================================
-- TEST 1:
-- This tests for various possible conditions in a string using a comma as the delimiter. The expected results are
-- laid out in the comments
--=====================================================================================================================
--===== Conditionally drop the test tables to make reruns easier for testing.
-- (this is NOT a part of the solution)
IF OBJECT_ID('tempdb..#JBMTest') IS NOT NULL DROP TABLE #JBMTest
;
--===== Create and populate a test table on the fly (this is NOT a part of the solution).
-- In the following comments, "b" is a blank and "E" is an element in the left to right order.
-- Double Quotes are used to encapsulate the output of "Item" so that you can see that all blanks
-- are preserved no matter where they may appear.
SELECT *
INTO #JBMTest
FROM ( --# & type of Return Row(s)
SELECT 0, NULL UNION ALL --1 NULL
SELECT 1, SPACE(0) UNION ALL --1 b (Empty String)
SELECT 2, SPACE(1) UNION ALL --1 b (1 space)
SELECT 3, SPACE(5) UNION ALL --1 b (5 spaces)
SELECT 4, ',' UNION ALL --2 b b (both are empty strings)
SELECT 5, '55555' UNION ALL --1 E
SELECT 6, ',55555' UNION ALL --2 b E
SELECT 7, ',55555,' UNION ALL --3 b E b
SELECT 8, '55555,' UNION ALL --2 E b
SELECT 9, '55555,1' UNION ALL --2 E E
SELECT 10, '1,55555' UNION ALL --2 E E
SELECT 11, '55555,4444,333,22,1' UNION ALL --5 E E E E E
SELECT 12, '55555,4444,,333,22,1' UNION ALL --6 E E b E E E
SELECT 13, ',55555,4444,,333,22,1,' UNION ALL --8 b E E b E E E b
SELECT 14, ',55555,4444,,,333,22,1,' UNION ALL --9 b E E b b E E E b
SELECT 15, ' 4444,55555 ' UNION ALL --2 E (w/Leading Space) E (w/Trailing Space)
SELECT 16, 'This,is,a,test.' --E E E E
) d (SomeID, SomeValue)
;
--===== Split the CSV column for the whole table using CROSS APPLY (this is the solution)
SELECT test.SomeID, test.SomeValue, split.ItemNumber, Item = QUOTENAME(split.Item,'"')
FROM #JBMTest test
CROSS APPLY dbo.DelimitedSplit8K(test.SomeValue,',') split
;
--=====================================================================================================================
-- TEST 2:
-- This tests for various "alpha" splits and COLLATION using all ASCII characters from 0 to 255 as a delimiter against
-- a given string. Note that not all of the delimiters will be visible and some will show up as tiny squares because
-- they are "control" characters. More specifically, this test will show you what happens to various non-accented
-- letters for your given collation depending on the delimiter you chose.
--=====================================================================================================================
WITH
cteBuildAllCharacters (String,Delimiter) AS
(
SELECT TOP 256
'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789',
CHAR(ROW_NUMBER() OVER (ORDER BY (SELECT NULL))-1)
FROM master.sys.all_columns
)
SELECT ASCII_Value = ASCII(c.Delimiter), c.Delimiter, split.ItemNumber, Item = QUOTENAME(split.Item,'"')
FROM cteBuildAllCharacters c
CROSS APPLY dbo.DelimitedSplit8K(c.String,c.Delimiter) split
ORDER BY ASCII_Value, split.ItemNumber
;
-----------------------------------------------------------------------------------------------------------------------
Other Notes:
1. Optimized for VARCHAR(8000) or less. No testing or error reporting for truncation at 8000 characters is done.
2. Optimized for single character delimiter. Multi-character delimiters should be resolved externally from this
function.
3. Optimized for use with CROSS APPLY.
4. Does not "trim" elements just in case leading or trailing blanks are intended.
5. If you don't know how a Tally table can be used to replace loops, please see the following...
http://www.sqlservercentral.com/articles/T-SQL/62867/
6. Changing this function to use NVARCHAR(MAX) will cause it to run twice as slow. It's just the nature of
VARCHAR(MAX) whether it fits in-row or not.
7. Multi-machine testing for the method of using UNPIVOT instead of 10 SELECT/UNION ALLs shows that the UNPIVOT method
is quite machine dependent and can slow things down quite a bit.
-----------------------------------------------------------------------------------------------------------------------
Credits:
This code is the product of many people's efforts including but not limited to the following:
cteTally concept originally by Itzik Ben-Gan and "decimalized" by Lynn Pettis (and others) for a bit of extra speed
and finally redacted by Jeff Moden for a different slant on readability and compactness. Hat's off to Paul White for
his simple explanations of CROSS APPLY and for his detailed testing efforts. Last but not least, thanks to
Ron "BitBucket" McCullough and Wayne Sheffield for their extreme performance testing across multiple machines and
versions of SQL Server. The latest improvement brought an additional 15-20% improvement over Rev 05. Special thanks
to "Nadrek" and "peter-757102" (aka Peter de Heer) for bringing such improvements to light. Nadrek's original
improvement brought about a 10% performance gain and Peter followed that up with the content of Rev 07.
I also thank whoever wrote the first article I ever saw on "numbers tables" which is located at the following URL
and to Adam Machanic for leading me to it many years ago.
-----------------------------------------------------------------------------------------------------------------------
Revision History:
Rev 00 - 20 Jan 2010 - Concept for inline cteTally: Lynn Pettis and others.
Redaction/Implementation: Jeff Moden
- Base 10 redaction and reduction for CTE. (Total rewrite)
Rev 01 - 13 Mar 2010 - Jeff Moden
- Removed one additional concatenation and one subtraction from the SUBSTRING in the SELECT List for that tiny
bit of extra speed.
Rev 02 - 14 Apr 2010 - Jeff Moden
- No code changes. Added CROSS APPLY usage example to the header, some additional credits, and extra
documentation.
Rev 03 - 18 Apr 2010 - Jeff Moden
- No code changes. Added notes 7, 8, and 9 about certain "optimizations" that don't actually work for this
type of function.
Rev 04 - 29 Jun 2010 - Jeff Moden
- Added WITH SCHEMABINDING thanks to a note by Paul White. This prevents an unnecessary "Table Spool" when the
function is used in an UPDATE statement even though the function makes no external references.
Rev 05 - 02 Apr 2011 - Jeff Moden
- Rewritten for extreme performance improvement especially for larger strings approaching the 8K boundary and
for strings that have wider elements. The redaction of this code involved removing ALL concatenation of
delimiters, optimization of the maximum "N" value by using TOP instead of including it in the WHERE clause,
and the reduction of all previous calculations (thanks to the switch to a "zero based" cteTally) to just one
instance of one add and one instance of a subtract. The length calculation for the final element (not
followed by a delimiter) in the string to be split has been greatly simplified by using the ISNULL/NULLIF
combination to determine when the CHARINDEX returned a 0 which indicates there are no more delimiters to be
had or to start with. Depending on the width of the elements, this code is between 4 and 8 times faster on a
single CPU box than the original code especially near the 8K boundary.
- Modified comments to include more sanity checks on the usage example, etc.
- Removed "other" notes 8 and 9 as they were no longer applicable.
Rev 06 - 12 Apr 2011 - Jeff Moden
- Based on a suggestion by Ron "Bitbucket" McCullough, additional test rows were added to the sample code and
the code was changed to encapsulate the output in pipes so that spaces and empty strings could be perceived
in the output. The first "Notes" section was added. Finally, an extra test was added to the comments above.
Rev 07 - 06 May 2011 - Peter de Heer, a further 15-20% performance enhancement has been discovered and incorporated
into this code which also eliminated the need for a "zero" position in the cteTally table.
**********************************************************************************************************************/
--===== Define I/O parameters
(@pString VARCHAR(8000), @pDelimiter CHAR(1))
RETURNS TABLE WITH SCHEMABINDING AS
RETURN
--===== "Inline" CTE Driven "Tally Table" produces values from 0 up to 10,000...
-- enough to cover NVARCHAR(4000)
WITH E1(N) AS (
SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL
SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL
SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1
), --10E+1 or 10 rows
E2(N) AS (SELECT 1 FROM E1 a, E1 b), --10E+2 or 100 rows
E4(N) AS (SELECT 1 FROM E2 a, E2 b), --10E+4 or 10,000 rows max
cteTally(N) AS (--==== This provides the "base" CTE and limits the number of rows right up front
-- for both a performance gain and prevention of accidental "overruns"
SELECT TOP (ISNULL(DATALENGTH(@pString),0)) ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) FROM E4
),
cteStart(N1) AS (--==== This returns N+1 (starting position of each "element" just once for each delimiter)
SELECT 1 UNION ALL
SELECT t.N+1 FROM cteTally t WHERE SUBSTRING(@pString,t.N,1) = @pDelimiter
),
cteLen(N1,L1) AS(--==== Return start and length (for use in substring)
SELECT s.N1,
ISNULL(NULLIF(CHARINDEX(@pDelimiter,@pString,s.N1),0)-s.N1,8000)
FROM cteStart s
)
--===== Do the actual split. The ISNULL/NULLIF combo handles the length for the final element when no delimiter is found.
SELECT ItemNumber = ROW_NUMBER() OVER(ORDER BY l.N1),
Item = SUBSTRING(@pString, l.N1, l.L1)
FROM cteLen l;
GO
September 6, 2014 at 4:33 am
Thank you, Lynn.
(I have a directory called "UseFullSQL"; it has made it into there.)
I've had a quick go at it and it does what I want. I've imported my data using the previous technique, and I'm now validating/fixing before I push the whole DB live.
That split function is precisely what I was looking for, for something unrelated.
To my annoyance, I have to deal with many DBs that have non-atomic fields. The better ones are delimited, and that split function is a marvel.
Before I use the split, I have to get my head around it.
Thank you to all who posted.
September 7, 2014 at 1:52 pm
To help you with the DelimitedSplit8K function, here are two articles to get you started.
1. http://www.sqlservercentral.com/articles/T-SQL/62867/
2. http://www.sqlservercentral.com/articles/Tally+Table/72993/
Start with the first one to get a handle on the Tally table and then move on to the second one. Before you begin, be forewarned that they might very well change the way you look at and think about data. They cover more than just string splitting. Some call them the basis of "SQL Black Ops" but I think they're plain cool. You will find the performance to be impressive. Enjoy and have fun with the testing and playing that comes afterwards.
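As a taste of what those articles cover, here is a minimal inline tally table (my sketch): cross-joining small row sets generates a sequence of numbers without any loop, which is the engine behind DelimitedSplit8K:

```sql
-- 10 rows x 10 rows = 100; 100 x 100 = 10,000; TOP caps the output.
WITH E1(N) AS (SELECT 1 FROM (VALUES(1),(1),(1),(1),(1),(1),(1),(1),(1),(1)) v(N)),
     E2(N) AS (SELECT 1 FROM E1 a CROSS JOIN E1 b),    -- 100 rows
     Tally(N) AS (SELECT TOP (8000)
                         ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
                  FROM E2 a CROSS JOIN E2 b)           -- 10,000 rows, capped at 8,000
SELECT N FROM Tally ORDER BY N;
```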