February 26, 2019 at 9:34 am
Hello,
I was just wondering if anyone has a good idea on how to implement a set of filters that we are always applying in ALL of our queries. Every year they come up with different values or add more to the same types of fields. I was wondering if implementing a table with all these fields and values would be the best way to go about it, or if there are any other ways I can use so that I do NOT have to go into every stored procedure and change the WHERE clause to update it with new values.
Example of the WHERE Clause we have on every query:
WHERE rm.RetrievalMethodDescription NOT IN ('Value1', 'Value2') --Take out Self Retrieval Methods
AND (om.RetrievalTeam = 'Value1' OR om.RetrievalTeam = 'Value2') -- REMOVE SELF RETRIEVAL TEAMS
AND ((c.PNPCodeId <> 51 AND c.PNPCodeId <> 52 AND c.PNPCodeId <> 53) OR c.PNPCodeId IS NULL) -- REMOVE SELF RETRIEVAL CODES
AND pt.ProjectTypeDescription NOT LIKE 'Value1%' --Not part of Chart Review, take it out.
AND pt.ProjectTypeDescription NOT LIKE 'Value1%' --Not part of Chart Review, take it out.
AND LOWER(pt.ProjectTypeDescription) NOT LIKE '%Value1%' --Remove Self Retrieval Project Types
AND LOWER(pt.ProjectTypeDescription) NOT LIKE '%Value1%' -- Mailroom projects are for third party intake charts - not actual retrievals by Ciox
AND LOWER(pt.ProjectTypeDescription) NOT LIKE '%Value1%' --not part of chart review - processed differently
AND LOWER(pt.ProjectTypeDescription) NOT LIKE '%Value1%' --For projects that were scheduled before the file loaded. Does not happen any more, but take it out.
AND pt.ProjectTypeDescription <> 'Optum Loaded' --Closed way back(probably 2016), take it out.
AND cs.ChartStatusDescription in ('Value1','Value2','Value3','Value4','Value5')
AND pt.ProjectTypeDescription <> 'Value1' --Coding only project. Take out.
AND pt.ProjectTypeDescription <> 'Value1' --Self Retrieval Project
I was thinking about creating a table with these field names as columns, adding all the values that keep getting updated to that table, and then doing a LEFT OUTER JOIN to it and excluding from there, so we can just maintain that table and not have to update every query individually.
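Roughly something like this is what I'm picturing (the table and column names here are made up for illustration, not our real schema):

CREATE TABLE dbo.ExclusionList
(
    FieldName varchar(128) NOT NULL, -- which column the exclusion applies to, e.g. 'ProjectTypeDescription'
    Pattern   varchar(256) NOT NULL, -- an exact value or a LIKE pattern to exclude
    CONSTRAINT PK_ExclusionList PRIMARY KEY (FieldName, Pattern)
);

INSERT INTO dbo.ExclusionList (FieldName, Pattern)
VALUES ('ProjectTypeDescription', 'Optum Loaded'),
       ('ProjectTypeDescription', '%Value1%');

-- Then in each query, instead of the long WHERE clause:
SELECT c.ChartId
FROM dbo.Chart AS c                      -- placeholder for the real tables
LEFT OUTER JOIN dbo.ExclusionList AS ex
       ON ex.FieldName = 'ProjectTypeDescription'
      AND c.ProjectTypeDescription LIKE ex.Pattern  -- LIKE covers both exact values and patterns
WHERE ex.Pattern IS NULL;                -- keep only rows that matched no exclusion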
Any other ideas are much appreciated.
Thanks for your help!
February 26, 2019 at 9:54 am
I'm probably going to get grief for saying this, but have you considered creating a view containing the filters you always use and then querying that? I freely admit that it's a tradeoff. It encapsulates (hides) some of the complexity, but on the other hand, it makes the querying on this a LOT easier.
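Something along these lines, just to show the shape of it (I'm guessing at your table and column names, so treat them as placeholders):

CREATE VIEW dbo.vw_StandardRetrievals
AS
SELECT c.ChartId, c.PNPCodeId, pt.ProjectTypeDescription, cs.ChartStatusDescription
FROM dbo.Chart AS c
JOIN dbo.ProjectType AS pt ON pt.ProjectTypeId = c.ProjectTypeId
JOIN dbo.ChartStatus AS cs ON cs.ChartStatusId = c.ChartStatusId
WHERE pt.ProjectTypeDescription NOT LIKE 'Value1%'
  AND (c.PNPCodeId NOT IN (51, 52, 53) OR c.PNPCodeId IS NULL)
  -- ...and so on for the rest of the standard filters
;

-- Every report then just queries the view:
-- SELECT ... FROM dbo.vw_StandardRetrievals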
February 26, 2019 at 10:01 am
pietlinden - Tuesday, February 26, 2019 9:54 AM
I'm probably going to get grief for saying this, but have you considered creating a view containing the filters you always use and then querying that? I freely admit that it's a tradeoff. It encapsulates (hides) some of the complexity, but on the other hand, it makes the querying on this a LOT easier.
Hmm, I'd agree; however, that WHERE is far from SARGable. If that clause needs to be in every query, then no query is ever going to be able to use an index on the OP's database. I'd suggest that a VIEW could be the right idea, but that WHERE needs to be made SARGable first; otherwise it is going to have some major performance concerns. Personally, I think there is a "larger" problem hiding under the surface here.
Also, this is with the caveat that none of the objects in the FROM are a VIEW either.
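To illustrate what I mean by SARGable, using made-up names and a made-up pattern:

-- Not SARGable: wrapping the column in LOWER() stops SQL Server from seeking an index on it.
SELECT pt.ProjectTypeId
FROM dbo.ProjectType AS pt
WHERE LOWER(pt.ProjectTypeDescription) LIKE 'abc-%';

-- SARGable, and logically the same on a case-insensitive collation (the SQL Server default):
SELECT pt.ProjectTypeId
FROM dbo.ProjectType AS pt
WHERE pt.ProjectTypeDescription LIKE 'abc-%';

The '%...%' leading-wildcard patterns are a separate problem; no index seek is possible for those no matter how they are written.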
Thom~
Excuse my typos and sometimes awful grammar. My fingers work faster than my brain does.
Larnu.uk
February 26, 2019 at 10:12 am
Thom A - Tuesday, February 26, 2019 10:01 AM
pietlinden - Tuesday, February 26, 2019 9:54 AM
I'm probably going to get grief for saying this, but have you considered creating a view containing the filters you always use and then querying that? I freely admit that it's a tradeoff. It encapsulates (hides) some of the complexity, but on the other hand, it makes the querying on this a LOT easier.
Hmm, I'd agree; however, that WHERE is far from SARGable. If that clause needs to be in every query, then no query is ever going to be able to use an index on the OP's database. I'd suggest that a VIEW could be the right idea, but that WHERE needs to be made SARGable first; otherwise it is going to have some major performance concerns. Personally, I think there is a "larger" problem hiding under the surface here.
Also, this is with the caveat that none of the objects in the FROM are a VIEW either.
An indexed view could help, but using them comes with several considerations that should be accounted for.
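For example, something like this (the table names are guesses; note that indexed views require SCHEMABINDING, two-part names, and no outer joins, among other restrictions):

CREATE VIEW dbo.vw_ReportableCharts
WITH SCHEMABINDING
AS
SELECT c.ChartId, c.ProjectTypeId, c.ChartStatusId
FROM dbo.Chart AS c
JOIN dbo.ProjectType AS pt ON pt.ProjectTypeId = c.ProjectTypeId
WHERE pt.ProjectTypeDescription NOT LIKE '%Value1%';
GO

-- Materialises the filtered rows so queries against the view don't re-evaluate the NOT LIKEs.
CREATE UNIQUE CLUSTERED INDEX IX_vw_ReportableCharts ON dbo.vw_ReportableCharts (ChartId);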
February 26, 2019 at 10:19 am
(Maybe I should have read more carefully!)
Those NOT LIKEs and <>s are going to cause table scans. Performance will be brutal.
February 26, 2019 at 12:37 pm
pietlinden - Tuesday, February 26, 2019 9:54 AM
I'm probably going to get grief for saying this, but have you considered creating a view containing the filters you always use and then querying that? I freely admit that it's a tradeoff. It encapsulates (hides) some of the complexity, but on the other hand, it makes the querying on this a LOT easier.
Thanks for your reply.
So basically instead of creating a physical table you recommend creating a view of all the exclusions I have and then using that view to join to the main query and exclude everything in the view?
Thanks again.
February 26, 2019 at 12:42 pm
What I'm trying to get at is this: Is there another way of doing this search?
All the NOT LIKE and NOT IN clauses are going to kill performance. Think about it this way. Look in the phone book and find everybody with the last name of "Smith" (easy enough, because you can use an index), and then, of those, keep only the ones whose first name doesn't contain an "E". Basically, you have to read every single "Smith" record. And that's going to kill performance. Is there another way of expressing your filter?
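In index terms, the phone book example looks like this (dbo.PhoneBook is obviously an imaginary table):

-- Can seek an index on (LastName, FirstName): the equality describes one contiguous chunk of the index.
SELECT LastName, FirstName
FROM dbo.PhoneBook
WHERE LastName = 'Smith';

-- The NOT LIKE with wildcards can't seek anything; every 'Smith' row still has to be read and tested.
SELECT LastName, FirstName
FROM dbo.PhoneBook
WHERE LastName = 'Smith'
  AND FirstName NOT LIKE '%E%';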
February 26, 2019 at 12:51 pm
Have you tried creating persisted computed columns that would serve as flags to identify valid rows? Just one column per table could give you the functionality and won't have a big impact on storage if you make them a bit data type.
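A rough sketch of what I mean (the column name and the expression are just examples, using the values from your post):

-- Flags the rows the standard reports should keep; the expression mirrors the existing exclusions.
ALTER TABLE dbo.ProjectType
ADD IsReportable AS
    CAST(CASE WHEN ProjectTypeDescription LIKE '%Value1%'
               OR ProjectTypeDescription = 'Optum Loaded'
              THEN 0 ELSE 1 END AS bit) PERSISTED;

-- The persisted column can be indexed, and every query's filter collapses to WHERE pt.IsReportable = 1.
CREATE INDEX IX_ProjectType_IsReportable ON dbo.ProjectType (IsReportable);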
February 26, 2019 at 12:52 pm
What about a calculated field (or a couple of calculated fields) that can be indexed and used as a more efficient filter?
SNAP. Luis beat me to it.
February 27, 2019 at 10:47 am
pietlinden - Tuesday, February 26, 2019 12:42 PM
What I'm trying to get at is this: Is there another way of doing this search?
All the NOT LIKE and NOT IN clauses are going to kill performance. Think about it this way. Look in the phone book and find everybody with the last name of "Smith" (easy enough, because you can use an index), and then, of those, keep only the ones whose first name doesn't contain an "E". Basically, you have to read every single "Smith" record. And that's going to kill performance. Is there another way of expressing your filter?
Unfortunately, I inherited this code. The reason we started using LIKE and NOT LIKE is that some of the projects we don't want to include start with names like 'ABC-%'. We are only talking about fewer than a hundred excluded projects overall, but the problem is that we continuously keep adding more projects and codes that need to be excluded from our counts.
Having said that, I was looking for a more efficient way to store these exclusions somewhere and then just use a LEFT OUTER JOIN to replace the WHERE clause in all of our queries, so that we can add or remove projects in that permanent view or table. That would be easier to maintain than having to change all the queries every time.
Unfortunately, we don't have the ability to add an Active column to the DW that we could use to filter on.
Thank you so much for your help on this. 🙂
February 28, 2019 at 2:50 am
This may well not apply in your case, but it may be possible to set an additional flag on each record that says whether it is to be considered or not.
This would be set initially (or whenever the criteria change) in a batch run, and then maintained when a record is inserted or updated, possibly in triggers.
So you do the hard work once and have a simple test for each retrieval.
In the event that you wanted to sometimes test some criteria and not others, you could set a bit for each condition (or related set of conditions), and then either test for <> 0 to apply every exclusion, or use a bit-wise AND (&) to test only selected criteria.
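For instance (the column and the bit assignments below are only an illustration):

-- Say a batch job / triggers maintain an ExclusionFlags column where each rule owns one bit:
-- 1 = self-retrieval method, 2 = self-retrieval team, 4 = excluded project type, and so on.

-- Exclude rows that hit any rule at all:
SELECT ChartId
FROM dbo.Chart
WHERE ExclusionFlags = 0;

-- Or ignore only the project-type rule, using a bit-wise AND:
SELECT ChartId
FROM dbo.Chart
WHERE ExclusionFlags & 4 = 0;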
February 28, 2019 at 2:25 pm
Seems like it might be worth considering dumping the filtered data into its own tables as part of a daily process or something, and then querying those tables.
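Something as simple as this as a nightly step, say (dbo.Chart_Reporting is an invented name, and so are the other objects):

TRUNCATE TABLE dbo.Chart_Reporting;

INSERT INTO dbo.Chart_Reporting (ChartId, ProjectTypeId, ChartStatusId)
SELECT c.ChartId, c.ProjectTypeId, c.ChartStatusId
FROM dbo.Chart AS c
JOIN dbo.ProjectType AS pt ON pt.ProjectTypeId = c.ProjectTypeId
WHERE pt.ProjectTypeDescription NOT LIKE '%Value1%';  -- plus the rest of the standard filters

-- Reports then read dbo.Chart_Reporting and never see the exclusion logic.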
February 28, 2019 at 2:39 pm
Could create an exclusion table containing the project IDs (id/name/etc.) to ignore and use a NOT EXISTS clause on the project ID.
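Along these lines (names invented for the example):

CREATE TABLE dbo.ProjectExclusion
(
    ProjectId int          NOT NULL PRIMARY KEY,
    Reason    varchar(200) NULL     -- why it's excluded, purely for documentation
);

-- In the queries:
SELECT c.ChartId
FROM dbo.Chart AS c
WHERE NOT EXISTS (SELECT 1
                  FROM dbo.ProjectExclusion AS pe
                  WHERE pe.ProjectId = c.ProjectId);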
March 4, 2019 at 4:20 pm
Rob Buecker - Thursday, February 28, 2019 2:39 PM
Could create an exclusion table containing the project IDs (id/name/etc.) to ignore and use a NOT EXISTS clause on the project ID.
That is exactly what I had in mind, but I wanted to see whether creating a view of all these exclusions and doing a LEFT OUTER JOIN to it would be better for performance and ease of maintenance, since I would only have to change the view rather than an actual table (which I could still index, etc.).
March 5, 2019 at 1:29 pm
Thom A - Tuesday, February 26, 2019 10:01 AM
pietlinden - Tuesday, February 26, 2019 9:54 AM
I'm probably going to get grief for saying this, but have you considered creating a view containing the filters you always use and then querying that? I freely admit that it's a tradeoff. It encapsulates (hides) some of the complexity, but on the other hand, it makes the querying on this a LOT easier.
Hmm, I'd agree; however, that WHERE is far from SARGable. If that clause needs to be in every query, then no query is ever going to be able to use an index on the OP's database. I'd suggest that a VIEW could be the right idea, but that WHERE needs to be made SARGable first; otherwise it is going to have some major performance concerns. Personally, I think there is a "larger" problem hiding under the surface here.
Also, this is with the caveat that none of the objects in the FROM are a VIEW either.
Looks like "cs.ChartStatusDescription in ('Value1','Value2','Value3','Value4,'Value5')" is SARGable. Chances are there's going to be a predicate that can use an index. Its a good point to make sure indexes get used, but just wanted to make sure that folks don't start thinking one bad predicate ruins the entire query.