November 18, 2015 at 8:51 am
Fernando Margueirat-273899 (11/18/2015)
Thanks for this great tip! One small change that I would make, to avoid the "DR.EndDate+1" part and still be able to easily link to dates in other tables, is to create a new column that adds end of day (i.e. 23:59:59.997) to the EndDate of periods that have already ended, and GETDATE() for current periods (e.g. last month has ended, but month to date is current). This way you can have something like
SELECT b.Period, sales = SUM(a.trn_amt)
FROM T_TRANSACTION a
JOIN V_DATE_RANGE b ON a.trn_dt BETWEEN b.BeginDate AND b.EODEndDate
GROUP BY b.Period
FM
And what about those people who are using DATETIME2?
The absence of evidence is not evidence of absence.
Martin Rees
You can lead a horse to water, but a pencil must be lead.
Stan Laurel
November 18, 2015 at 9:02 am
Phil Parkin (11/18/2015)
Fernando Margueirat-273899 (11/18/2015)
Thanks for this great tip! One small change that I would make, to avoid the "DR.EndDate+1" part and still be able to easily link to dates in other tables, is to create a new column that adds end of day (i.e. 23:59:59.997) to the EndDate of periods that have already ended, and GETDATE() for current periods (e.g. last month has ended, but month to date is current). This way you can have something like
SELECT b.Period, sales = SUM(a.trn_amt)
FROM T_TRANSACTION a
JOIN V_DATE_RANGE b ON a.trn_dt BETWEEN b.BeginDate AND b.EODEndDate
GROUP BY b.Period
FM
And what about those people who are using DATETIME2?
Good point, Phil.
I haven't used DATETIME2 yet, so I'm not familiar with its inner workings. Would adding ' 23:59:59.9999999' instead work?
Adding 1 day to the new column will work if you use < and >= instead of BETWEEN. Functionally it's still the same as using 'DR.EndDate+1', but I would prefer it for clarity. Also, I personally believe it would be less error prone; people wouldn't have to remember to add the +1 to all queries.
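Something like this, for instance (a rough sketch; the extra column name is just illustrative):
-- Assumes the view exposes a column holding EndDate + 1 day
SELECT b.Period, sales = SUM(a.trn_amt)
FROM T_TRANSACTION a
JOIN V_DATE_RANGE b
  ON a.trn_dt >= b.BeginDate
 AND a.trn_dt <  b.NextDayEndDate
GROUP BY b.Period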
FM
November 18, 2015 at 9:16 am
Fernando Margueirat-273899 (11/18/2015)
Thanks for this great tip! One small change that I would make, to avoid the "DR.EndDate+1" part and still be able to easily link to dates in other tables, is to create a new column that adds end of day (i.e. 23:59:59.997) to the EndDate of periods that have already ended, and GETDATE() for current periods (e.g. last month has ended, but month to date is current). This way you can have something like
SELECT b.Period, sales = SUM(a.trn_amt)
FROM T_TRANSACTION a
JOIN V_DATE_RANGE b ON a.trn_dt BETWEEN b.BeginDate AND b.EODEndDate
GROUP BY b.Period
FM
This is another great idea! Thanks for sharing.
John.
November 18, 2015 at 10:02 am
Nice, except the Fiscal Year To Date will not return correct results when the fiscal year does not begin in the same calendar year as the current date. If the current date were 01/01/2016, you would get a date range of 2016-10-01 to 2016-01-01, which is not valid. You would want to change it to something like this:
select
    Convert(varchar(100), 'Fiscal Year To Date') as [Period]
    ,case when datepart(month, getdate()) < 10
          then convert(datetime, '10/1/' + cast(year(dateadd(year, -1, getdate())) as varchar(4)))
          else convert(datetime, '10/1/' + cast(year(getdate()) as varchar(4)))
     end as BeginDate
    ,convert(datetime, convert(date, getdate())) as EndDate
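To sanity-check the CASE logic, you can pin the date by substituting a variable for getdate() (a quick test sketch, not part of the original suggestion):
-- Reproduce the 01/01/2016 scenario with a pinned "current date"
declare @Today datetime = '20160101';
select case when datepart(month, @Today) < 10
            then convert(datetime, '10/1/' + cast(year(dateadd(year, -1, @Today)) as varchar(4)))
            else convert(datetime, '10/1/' + cast(year(@Today) as varchar(4)))
       end as BeginDate;   -- returns 2015-10-01, the correct fiscal year start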
November 18, 2015 at 10:34 am
rick.foster (11/18/2015)
Nice, except the Fiscal Year To Date will not return correct results when the fiscal year does not begin in the same calendar year as the current date. If the current date were 01/01/2016, you would get a date range of 2016-10-01 to 2016-01-01, which is not valid. You would want to change it to something like this:
select
    Convert(varchar(100), 'Fiscal Year To Date') as [Period]
    ,case when datepart(month, getdate()) < 10
          then convert(datetime, '10/1/' + cast(year(dateadd(year, -1, getdate())) as varchar(4)))
          else convert(datetime, '10/1/' + cast(year(getdate()) as varchar(4)))
     end as BeginDate
    ,convert(datetime, convert(date, getdate())) as EndDate
Thanks! I will look into this and will submit a revision once I have resolved both this and the previously identified ISO date issue.
November 18, 2015 at 10:45 am
I remember reading an article that tested this with many thousands of records and there was a material impact. Since then I have always used the more efficient approach as a matter of course.
In this instance, though, I think the cost would be immaterial. It's just not called often enough to matter.
EDIT: I was referring to the performance cost of the date calculations.
November 18, 2015 at 12:34 pm
Great article; I cannot wait to use the concept! I often need to mix and match different ranges of financial data (monthly, quarterly, etc.) against the same numbers from last year, though not in SSRS but in a proprietary reporting engine.
What I think I might do is implement it as a TVF rather than a View, so that I can allow the user to enter a specific seed date rather than getdate().
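Roughly like this, perhaps (a minimal sketch of the idea; the function name and the single period shown are illustrative, and the real thing would UNION ALL every period from the article's view):
-- Inline TVF version of the date-range view, parameterised on a seed date
CREATE FUNCTION dbo.tfn_DateRange (@SeedDate datetime)
RETURNS TABLE
AS
RETURN
    SELECT Convert(varchar(100), 'Month To Date') AS [Period]
         , DATEADD(month, DATEDIFF(month, 0, @SeedDate), 0) AS BeginDate
         , Convert(datetime, Convert(date, @SeedDate)) AS EndDate;
    -- ...UNION ALL the remaining periods, each computed from @SeedDate
GO
-- Usage: SELECT * FROM dbo.tfn_DateRange('20151118');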
Thanks for sharing your idea!
November 18, 2015 at 4:21 pm
I just want to chime in with thanks for the great original idea, and to everybody else with ideas to make it even better. I can imagine using this some day and look forward to an improved version of the code to steal. 🙂 On that note, I guess my one suggestion would be: for the sample script, consider including a header with attribution and maybe a URL or something. I try to respectfully leave a note about the original author in there if I "borrow" code to use elsewhere.
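For example, something along these lines at the top of the script (purely illustrative; fill in the real author, URL, and dates):
/*------------------------------------------------------------------
   V_DATE_RANGE - relative date-range view
   Original author: <author>, SQLServerCentral article, November 2015
   Source:          <URL of the original article>
   Modified by:     <your name>, <date> - <summary of changes>
------------------------------------------------------------------*/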
November 19, 2015 at 8:13 am
Awesome article, thank you very much for sharing this.
November 19, 2015 at 11:38 am
Thanks. This is very adaptable to a broad range of applications, locales, and industries.
November 19, 2015 at 11:59 am
Jack (11/18/2015)
I remember reading an article that tested this with many thousands of records and there was a material impact. Since then I have always used the more efficient approach as a matter of course. In this instance, though, I think the cost would be immaterial. It's just not called often enough to matter.
EDIT: I was referring to the performance cost of the date calculations.
I've been a SQL Server developer for a long time, and I've learned over that time that no harm ever comes from making every bit of code you write run as quickly as reasonably possible.
There's also a Zen-like satisfaction to be had after writing optimum code, regardless of its immediate impact on performance.
No one ever got told off because their code ran too fast.
The absence of evidence is not evidence of absence.
Martin Rees
You can lead a horse to water, but a pencil must be lead.
Stan Laurel
November 19, 2015 at 12:12 pm
Phil Parkin (11/19/2015)
Jack (11/18/2015)
I remember reading an article that tested this with many thousands of records and there was a material impact. Since then I have always used the more efficient approach as a matter of course. In this instance, though, I think the cost would be immaterial. It's just not called often enough to matter.
EDIT: I was referring to the performance cost of the date calculations.
I've been a SQL Server developer for a long time, and I've learned over that time that no harm ever comes from making every bit of code you write run as quickly as reasonably possible.
There's also a Zen-like satisfaction to be had after writing optimum code, regardless of its immediate impact on performance.
No one ever got told off because their code ran too fast.
True, but IMHO, unless the improvement in performance is significant, code maintainability takes priority over performance. In most cases, every minuscule saving in hardware cost, network bandwidth, user time, etc. goes in the garbage when you have to pay a coder an extra week to find and fix a problem, or make an enhancement, in messy code.
FM
November 19, 2015 at 12:26 pm
Fernando Margueirat-273899 (11/19/2015)
Phil Parkin (11/19/2015)
Jack (11/18/2015)
I remember reading an article that tested this with many thousands of records and there was a material impact. Since then I have always used the more efficient approach as a matter of course. In this instance, though, I think the cost would be immaterial. It's just not called often enough to matter.
EDIT: I was referring to the performance cost of the date calculations.
I've been a SQL Server developer for a long time, and I've learned over that time that no harm ever comes from making every bit of code you write run as quickly as reasonably possible.
There's also a Zen-like satisfaction to be had after writing optimum code, regardless of its immediate impact on performance.
No one ever got told off because their code ran too fast.
True, but IMHO, unless the improvement in performance is significant, code maintainability takes priority over performance. In most cases, every minuscule saving in hardware cost, network bandwidth, user time, etc. goes in the garbage when you have to pay a coder an extra week to find and fix a problem, or make an enhancement, in messy code.
FM
Hence my use of the phrase 'reasonably possible'. Not 'at all costs'.
And if you think that my proposed amendments to the code in this thread are more complex or difficult to maintain than the originals, I'm afraid I disagree.
The absence of evidence is not evidence of absence.
Martin Rees
You can lead a horse to water, but a pencil must be lead.
Stan Laurel
November 20, 2015 at 7:30 am
One advantage of the varchar solution is that (if it were converted to use ISO date formats) the end dates could have 'T23:59:59.997' added to them, so the date ranges would actually include the end date rather than excluding it. Of course, that fractional second assumes datetime, which should be a big no-no for any columns created in a version that supports datetime2; with datetime2 you have to match the number of 9s after the decimal point to the precision of the datetime2 column, which is another issue entirely.
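A quick illustration of the rounding behaviour being described (a sketch using only built-in types):
-- datetime is accurate only to about 3 ms, so .999 rounds UP to the next day
declare @d datetime = '2015-12-31T23:59:59.999';
select @d as rounded_datetime;     -- 2016-01-01 00:00:00.000

-- datetime2(3) holds .999 exactly, so the value stays on December 31st
declare @d2 datetime2(3) = '2015-12-31T23:59:59.999';
select @d2 as exact_datetime2;     -- 2015-12-31 23:59:59.999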
November 20, 2015 at 7:39 am
John Meyer-239467 (11/20/2015)
One advantage of the varchar solution is that (if it were converted to use ISO date formats) the end dates could have 'T23:59:59.997' added to them, so the date ranges would actually include the end date rather than excluding it. Of course, that fractional second assumes datetime, which should be a big no-no for any columns created in a version that supports datetime2; with datetime2 you have to match the number of 9s after the decimal point to the precision of the datetime2 column, which is another issue entirely.
My preference is to make the end date 'midnight on the day after the final day of the period' and then use < in any comparisons.
For example, for 2015:
declare @StartDate datetime = '20150101';
declare @EndDate   datetime = '20160101';

select *
from SomeTable   -- table and column names are illustrative
where SomeDate >= @StartDate
  and SomeDate <  @EndDate;
Works with all date data types and keeps things clean.
The absence of evidence is not evidence of absence.
Martin Rees
You can lead a horse to water, but a pencil must be lead.
Stan Laurel