March 18, 2014 at 12:04 am
Comments posted to this topic are about the item Creating a Date Range from Multiple Rows Based on a Single Date
My thought question: Have you ever been told that your query runs too fast?
My advice:
INDEXing a poor-performing query is like putting sugar on cat food. Yeah, it probably tastes better but are you sure you want to eat it?
The path of least resistance can be a slippery slope. Take care that fixing your fixes of fixes doesn't snowball and end up costing you more than fixing the root cause would have in the first place.
Need to UNPIVOT? Why not CROSS APPLY VALUES instead?
Since random numbers are too important to be left to chance, let's generate some!
Learn to understand recursive CTEs by example.
March 18, 2014 at 3:16 am
Nice article, Dwain!
:smooooth:
Johan
Learn to play, play to learn !
Don't drive faster than your guardian angel can fly ...
but keeping both feet on the ground won't get you anywhere :w00t:
- How to post Performance Problems
- How to post data/code to get the best help
- How to prevent a sore throat after hours of presenting ppt
press F1 for solution, press shift+F1 for urgent solution 😀
Need a bit of Powershell? How about this
Who am I? Sometimes this is me but most of the time this is me
March 18, 2014 at 4:19 am
Johan,
Thanks and glad you like it.
As a little additional background, one of the things that led me to write this (besides people on the forums always asking the question) is that I've seen systems where master data tables have a column called "IsActive." When something is about to be deactivated, they create a new record (as inactive), and on the day the master data row changes they flip the flag settings.
Now, think about doing that when you've got to activate 100 price lists on the same day. With effective dates there's no need for a script: just set up the new price lists in advance to be active on the effective date.
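Roughly speaking (a minimal sketch only, reusing the dbo.ProductPrices table discussed below, with ProductID, EffectiveStartDT and ProductPrice columns), the price in effect "now" falls out of the effective dates alone, so nothing has to flip a flag on the changeover day:
DECLARE @Now DATETIME = GETDATE();
-- Sketch only: the latest effective date at or before "now" wins; no IsActive flag needed.
SELECT pp.ProductID, pp.ProductPrice
FROM dbo.ProductPrices AS pp
WHERE pp.EffectiveStartDT =
      (SELECT MAX(pp2.EffectiveStartDT)
       FROM dbo.ProductPrices AS pp2
       WHERE pp2.ProductID = pp.ProductID
         AND pp2.EffectiveStartDT <= @Now);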
My thought question: Have you ever been told that your query runs too fast?
My advice:
INDEXing a poor-performing query is like putting sugar on cat food. Yeah, it probably tastes better but are you sure you want to eat it?
The path of least resistance can be a slippery slope. Take care that fixing your fixes of fixes doesn't snowball and end up costing you more than fixing the root cause would have in the first place.
Need to UNPIVOT? Why not CROSS APPLY VALUES instead?
Since random numbers are too important to be left to chance, let's generate some!
Learn to understand recursive CTEs by example.
March 18, 2014 at 7:47 am
Dwain,
I liked your analysis of the different options for building a result set that included both an in-effect and an out-of-effect date, BUT I think you missed the boat on the final goal of showing the price for each product as of a given date and time. You are essentially looking for the record that has the greatest in-effect date that is less than or equal to the given date and time, and I believe the following query will do what you want in a simpler manner:
DECLARE @DateOfInterest DATETIME = '2013-07-31 15:22';
SELECT [p].[ProductID], [p].[ProductDescription], [w].[ProductPrice]
FROM [dbo].[Products] AS p
INNER JOIN (
    SELECT [ProductID], [EffectiveStartDT], [ProductPrice],
           ROW_NUMBER() OVER (PARTITION BY [ProductID] ORDER BY [ProductID], [EffectiveStartDT] DESC) AS [RowNo]
    FROM [dbo].[ProductPrices]
    WHERE [EffectiveStartDT] <= @DateOfInterest
) AS w
    ON [w].[ProductID] = [p].[ProductID]
    AND [w].[RowNo] = 1
ORDER BY [w].[ProductID];
I do not believe the out-of-effect date you worked so hard to determine provides any value in finding the correct price. It appears that you have fallen into the trap that many developers fall into (including myself!) of making a query more complex than it needs to be to solve the business need because we think we have found a really cool technique and want to use it. I can't tell you how many times I have gone back to look at code I wrote a year or two ago and wondered why I used complex code to do something that could have been done in a simpler and more straightforward way.
Still, it was a very interesting and good article!
March 18, 2014 at 8:09 am
Nice query, Paul!
You just do not need to include ProductID in the ORDER BY of the OVER clause, since you are already partitioning by ProductID 🙂
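For clarity, here is how Paul's derived table reads with that change (same table and variable as his query above):
DECLARE @DateOfInterest DATETIME = '2013-07-31 15:22';
-- ProductID dropped from the ORDER BY; it is constant within each partition.
SELECT [ProductID], [EffectiveStartDT], [ProductPrice],
       ROW_NUMBER() OVER (PARTITION BY [ProductID] ORDER BY [EffectiveStartDT] DESC) AS [RowNo]
FROM [dbo].[ProductPrices]
WHERE [EffectiveStartDT] <= @DateOfInterest;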
Dwain, thanks for your article; in any case, you've provided a nice comparison between the new window functions and the traditional approaches 🙂
March 18, 2014 at 8:15 am
Something similar to this, with a couple of added complications, came up the other day. It took two of us a couple of hours to do with a self-join. Wish we were on 2012, as this would have been a lot easier to solve.
Nice article.
March 18, 2014 at 9:58 am
We have always just used an Ending Date column that is NULL only on the last record. It's nice to see other solutions to this common problem. Great job on the analysis! I have been looking for a compelling reason to upgrade our production server to 2012, and will add this to the pros list. Thanks!
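For anyone unfamiliar with that pattern, a minimal sketch (the end-date column name here is assumed, not taken from that poster's system): the current row is simply the one whose end date is NULL, and an as-of lookup checks both ends of the range.
-- Sketch only: assumes an EffectiveEndDT column that is NULL on the latest row per product.
DECLARE @DateOfInterest DATETIME = '2013-07-31 15:22';
SELECT pp.ProductID, pp.ProductPrice
FROM dbo.ProductPrices AS pp
WHERE pp.EffectiveStartDT <= @DateOfInterest
  AND (pp.EffectiveEndDT > @DateOfInterest OR pp.EffectiveEndDT IS NULL);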
March 18, 2014 at 10:48 am
Nice demo. Thanks for taking the time and effort to publish this.
I used to use the ROW_NUMBER method to historically build type-2 slowly changing dimensions in an environment that had reliable audit tables. It appears as though LEAD/LAG can improve the performance given the right circumstances.
An even BIGGER CHALLENGE arises when more than one audit table (with effective/change dates) exists as a source for a dimension--in which case I sometimes vertically stack all effective dates from the multiple source tables on top of one another.
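For reference, the LEAD form of that end-date derivation is only a few lines (SQL Server 2012+); it is sketched here against the thread's ProductPrices table rather than an audit table:
SELECT ProductID,
       EffectiveStartDT,
       -- The next row's start date becomes this row's end date (NULL on the last row).
       LEAD(EffectiveStartDT) OVER (PARTITION BY ProductID ORDER BY EffectiveStartDT) AS EffectiveEndDT,
       ProductPrice
FROM dbo.ProductPrices;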
March 18, 2014 at 5:47 pm
paul.s.lach (3/18/2014)
Dwain, I liked your analysis of the different options for building a result set that included both an in-effect and an out-of-effect date, BUT I think you missed the boat on the final goal of showing the price for each product as of a given date and time. You are essentially looking for the record that has the greatest in-effect date that is less than or equal to the given date and time, and I believe the following query will do what you want in a simpler manner:
DECLARE @DateOfInterest DATETIME = '2013-07-31 15:22';
SELECT [p].[ProductID], [p].[ProductDescription], [w].[ProductPrice]
FROM [dbo].[Products] AS p
INNER JOIN (
    SELECT [ProductID], [EffectiveStartDT], [ProductPrice],
           ROW_NUMBER() OVER (PARTITION BY [ProductID] ORDER BY [ProductID], [EffectiveStartDT] DESC) AS [RowNo]
    FROM [dbo].[ProductPrices]
    WHERE [EffectiveStartDT] <= @DateOfInterest
) AS w
    ON [w].[ProductID] = [p].[ProductID]
    AND [w].[RowNo] = 1
ORDER BY [w].[ProductID];
I do not believe the out-of-effect date you worked so hard to determine provides any value in finding the correct price. It appears that you have fallen into the trap that many developers fall into (including myself!) of making a query more complex than it needs to be to solve the business need because we think we have found a really cool technique and want to use it. I can't tell you how many times I have gone back to look at code I wrote a year or two ago and wondered why I used complex code to do something that could have been done in a simpler and more straightforward way.
Still, it was a very interesting and good article!
In the article, I said:
"We have four ways we can solve this in SQL Server 2012."
I meant to say:
"We have at least four ways we can solve this in SQL Server 2012."
In the end, my article is all about the performance of the different methods, so I'm curious how yours stacks up against the alternatives for the three indexing scenarios. I don't have a lot of time available to try it at the moment, but I will, and I'll report back after I do.
Thanks for the contribution!
My thought question: Have you ever been told that your query runs too fast?
My advice:
INDEXing a poor-performing query is like putting sugar on cat food. Yeah, it probably tastes better but are you sure you want to eat it?
The path of least resistance can be a slippery slope. Take care that fixing your fixes of fixes doesn't snowball and end up costing you more than fixing the root cause would have in the first place.
Need to UNPIVOT? Why not CROSS APPLY VALUES instead?
Since random numbers are too important to be left to chance, let's generate some!
Learn to understand recursive CTEs by example.
March 18, 2014 at 5:50 pm
funreal7/brendan woulfe/Kenneth J. Moore/sneumersky,
Thanks to all for taking the time to read and comment. What a great community we have here when everyone takes a moment to contribute.
My thought question: Have you ever been told that your query runs too fast?
My advice:
INDEXing a poor-performing query is like putting sugar on cat food. Yeah, it probably tastes better but are you sure you want to eat it?
The path of least resistance can be a slippery slope. Take care that fixing your fixes of fixes doesn't snowball and end up costing you more than fixing the root cause would have in the first place.
Need to UNPIVOT? Why not CROSS APPLY VALUES instead?
Since random numbers are too important to be left to chance, let's generate some!
Learn to understand recursive CTEs by example.
March 19, 2014 at 9:07 am
Nice article, but I have a few remarks. If you want to have more fun, look into (bi)temporal databases.
Adding a column EffectiveEndDT (and filling it) is very easy.
Don't make the end date nullable. Put COALESCE around it and give it a date like 9999-12-31 (see the sketch after these remarks).
If the end date is not nullable, CROSS APPLY can be used.
Have you checked out the database option DATE_CORRELATION_OPTIMIZATION?
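Putting those remarks together, a small sketch (derived with LEAD against the thread's ProductPrices table; a stored, filled EffectiveEndDT column would work the same way):
SELECT ProductID,
       EffectiveStartDT,
       -- COALESCE swaps the open-ended NULL for a far-future sentinel date.
       COALESCE(LEAD(EffectiveStartDT) OVER (PARTITION BY ProductID ORDER BY EffectiveStartDT),
                '9999-12-31') AS EffectiveEndDT,
       ProductPrice
FROM dbo.ProductPrices;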
March 19, 2014 at 10:03 am
Yes, as a BI practitioner, I have a special place in my heart for DATE_CORRELATION_OPTIMIZATION. 🙂
March 24, 2015 at 6:29 pm
Thanks for this article. I had created views to do something similar for several tables. But your solutions are simpler than what I did.
Now I'm hoping to get time to change some of those views and see if I can improve the speed.
March 24, 2015 at 8:00 pm
marcia.j.wilson (3/24/2015)
Thanks for this article. I had created views to do something similar for several tables. But your solutions are simpler than what I did. Now I'm hoping to get time to change some of those views and see if I can improve the speed.
I hope it ends up being helpful to you, Marcia!
You may also want to check this out as it is on a similar topic:
Identifying Start Dates not Aligned with End Dates on a Prior Row
My thought question: Have you ever been told that your query runs too fast?
My advice:
INDEXing a poor-performing query is like putting sugar on cat food. Yeah, it probably tastes better but are you sure you want to eat it?
The path of least resistance can be a slippery slope. Take care that fixing your fixes of fixes doesn't snowball and end up costing you more than fixing the root cause would have in the first place.
Need to UNPIVOT? Why not CROSS APPLY VALUES instead?
Since random numbers are too important to be left to chance, let's generate some!
Learn to understand recursive CTEs by example.