January 22, 2016 at 11:52 am
I have a condition 'where year = 2015'. Now I want to change it to a rolling 12 months, so I am adding something like "year = dateadd(mm, datediff(mm, 0, getdate()) - 12, 0)", which is not working. I guess I am not doing it properly and need some help here!
If I don't have the year condition, I am getting rows from both 2015 and 2016. An example of the data follows:
Month Yearmm release
Jan 201401 1
Feb 201402 2
so on....
Jan 201501 1
Feb 201502 2
Mar 201503 1
so on....
Jan 201601 1
January 22, 2016 at 11:56 am
Hi
Please read the following on how to give us relevant details, so we can respond with an answer that meets your expectations:
thanks
http://spaghettidba.com/2015/04/24/how-to-post-a-t-sql-question-on-a-public-forum/
________________________________________________________________
you can lead a user to data....but you cannot make them think
and remember....every day is a school day
January 22, 2016 at 12:01 pm
If you have dates in your table, you could use something like:
SELECT *
FROM Sales.Orders
WHERE Sales.Orders.OrderDate BETWEEN DATEADD(year,-1,GETDATE()) AND GETDATE();
You probably want to add a day to the "one year ago" end of the range...
BETWEEN DATEADD(day,1,DATEADD(year,-1,GETDATE())) AND GETDATE()
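If the extra DATEADD for the boundary feels fiddly, here is a sketch of another way to express the same rolling year as an open/closed range (same Sales.Orders example; the boundary behaviour differs slightly if OrderDate carries a time portion):
SELECT *
FROM Sales.Orders
WHERE Sales.Orders.OrderDate > DATEADD(year,-1,GETDATE())
AND Sales.Orders.OrderDate <= GETDATE();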
January 26, 2016 at 11:57 am
It helps if you put a table definition in your question. Assuming Yearmm is an int, this will work:
declare @endDate datetime=getdate()
declare @startDate datetime
set @startDate = dateadd(mm, -12, @endDate)
-- validate calculations
select
datepart(yyyy, @startDate) * 100 + datepart(mm, @startDate) as StartDate
,datepart(yyyy, @endDate) * 100 + datepart(mm, @endDate) as EndDate
-- select results
select *
from tableX
where Yearmm > datepart(yyyy, @startDate) * 100 + datepart(mm, @startDate)
and Yearmm <= datepart(yyyy, @endDate) * 100 + datepart(mm, @endDate)
January 27, 2016 at 2:01 pm
Thanks for the suggestions
I figured this out finally....
Yearmo IN (select [yyyymm] from table where datediff(month, clndr_dt, getdate())<=11)
🙂
January 27, 2016 at 4:29 pm
Again, it is hard to assist here because you don't list a table structure. Your solution is using columns that don't exist in your sample data. You also seem to be selecting one column and then using that to filter a different column; I am thinking that is an oversight.
I will point out two issues with your solution. First, it performs a calculation on a column and then filters on that result. On a large table this is very inefficient: if you have 10 million rows in your table, SQL will have to perform your calculation 10 million times and then filter the result. Second, if you have a million rows that pass the filter, you will wind up with a million values in your IN clause, many of them duplicates.
It is far more efficient to perform the date calculations once on your input parameters to determine the desired min and max values for your table column, and then filter on those. That way you calculate once and filter, instead of calculating 10 million times and then filtering. Your approach would probably also require two complete table scans: one to do the calculation and build the IN clause, and another to use that IN clause to filter the table.
If you are using the YYYYMM column as a way of grouping large amounts of data into accounting periods, it would make sense to make it the first part of a clustered index, which would allow the system to read only the range of data you are looking for rather than everything in the table. Two table scans become one seek.
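For example (a sketch only; tableX is a placeholder, not your actual table):
-- make the reporting period the leading key of the clustered index
-- so a one-month report becomes a range seek instead of a full scan
create clustered index CIX_tableX_Yearmm
on tableX (Yearmm);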
January 27, 2016 at 7:49 pm
What Michael posted about the calculation on the column (which is called a non-SARGable predicate) is spot on. This is inefficient with a table of any size, but you'll really feel the pain as your table gets more rows. Calculate the single value using the parameter once (and with a proper datatype to match Yearmo) and then perform the comparison. I would suggest that you make this change no matter what performance is like so you or someone else won't have to revisit it in the future.
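As a rough sketch of that change, reusing the boundary calculation Michael posted earlier (assuming Yearmo is an int and tableX is a placeholder for your table):
declare @endDate datetime = getdate()
declare @startDate datetime = dateadd(mm, -12, @endDate)
declare @startYearmo int = datepart(yyyy, @startDate) * 100 + datepart(mm, @startDate)
declare @endYearmo int = datepart(yyyy, @endDate) * 100 + datepart(mm, @endDate)
select *
from tableX
where Yearmo > @startYearmo -- boundaries calculated once, so the predicate stays SARGable
and Yearmo <= @endYearmo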
I have to caution you, though, about changing the clustered index. This can have serious repercussions on your table. Assuming your table already has a clustered index (which most tables should), take a look at creating a nonclustered index on Yearmo.
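Something along these lines, as a sketch (the table name is a placeholder):
create nonclustered index IX_tableX_Yearmo
on tableX (Yearmo);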
January 28, 2016 at 9:47 am
I would agree that the clustered index should be chosen carefully. Too many people just make it the primary key identity column. On an OLTP system this is probably a good choice.
The thing is that a YYYYMM type of column usually indicates a dimensional key used in more of a data warehouse / reporting type of table. If that is the case and you have 1,000,000 records a month and 10 years' worth of data, it makes sense to use the YYYYMM column as the first part of a clustered index. If you try to generate a report on one month's worth of data, a nonclustered index probably won't help: the system would need to do either a table scan or 1,000,000 lookups, and at some point the optimizer will choose the table scan. I have often heard questions along the lines of "why is SQL doing a table scan when I have an index on the column?" It is all a matter of weighing large numbers of lookups against a table scan.
Again this makes sense if:
1. The table is primarily used for generating report type information.
2. The YYYYMM column is being used to split the data into accounting or reporting periods.
3. The reports will typically be filtered based on the YYYYMM column.
Again, I am making quite a few assumptions based on the very sketchy information provided. Anyone using SQL Server for a data warehouse type of application should also look into columnstore indexes.
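For reference, a nonclustered columnstore index looks roughly like this (a sketch only; the names are placeholders, and in versions before SQL Server 2016 a nonclustered columnstore index makes the table read-only):
create nonclustered columnstore index NCCI_tableX
on tableX (Yearmm, [release]);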
January 28, 2016 at 10:10 am
That's right!!! This table is used primarily for generating reports, and the reports are filtered based on the YYYYMM column.
I really appreciate the responses and suggestions, which make me think and broaden my knowledge.
Looks like this is a good place to learn more about SQL-related issues 🙂
Thanks again
January 28, 2016 at 10:20 am
No problem