Guided Search Method

  • Not so far as I know.

    [font="Times New Roman"]-- RBarryYoung[/font], [font="Times New Roman"] (302)375-0451[/font] blog: MovingSQL.com, Twitter: @RBarryYoung[font="Arial Black"]
    Proactive Performance Solutions, Inc.
    [/font]
    [font="Verdana"] "Performance is our middle name."[/font]

  • You can simulate an indexed view... just make a preaggregated table as suggested before (a rough sketch follows at the end of this post).

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)
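
    For what it's worth, a minimal sketch of the preaggregated-table idea mentioned above (the CarCounts name is invented here, and the columns are borrowed from the CarTable query that appears later in this thread). The summary table is rebuilt on whatever schedule suits the data, and the searches then hit the small table instead of the base table:

    IF OBJECT_ID('dbo.CarCounts', 'U') IS NOT NULL
        DROP TABLE dbo.CarCounts;

    -- One row per combination of the searchable columns, with its count.
    SELECT [Year], Make, Model, FuelType, COUNT(*) AS RowCounts
    INTO dbo.CarCounts
    FROM dbo.CarTable
    GROUP BY [Year], Make, Model, FuelType;

    -- Index to support the most common search predicates.
    CREATE CLUSTERED INDEX IX_CarCounts
        ON dbo.CarCounts ([Year], Make, Model, FuelType);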

  • Agreed.

    [font="Times New Roman"]-- RBarryYoung[/font], [font="Times New Roman"] (302)375-0451[/font] blog: MovingSQL.com, Twitter: @RBarryYoung[font="Arial Black"]
    Proactive Performance Solutions, Inc.
    [/font]
    [font="Verdana"] "Performance is our middle name."[/font]

  • I've taken some time to experiment with rbarryyoung's 3 recommendations. Here's what I have come up with so far.

    1. Yes, this is the approach we're using, but it doesn't seem efficient.

    2 & 3. We're performing a search based on about 15 different parameters, each of which could have various values. Pre-populating a table with all the permutations this produces would result in a table with several orders of magnitude more records than the original data table. We seem to be going backwards here.

    I have tried some variations of CUBE and ROLLUP queries, but after applying the grouping across all the variables, the totals for each variable are diluted away. Here is an example of what I was doing:

    SELECT CASE WHEN GROUPING(CarTable.[Year]) = 1 THEN 999999
                ELSE CarTable.[Year]
           END AS Year,
           CASE WHEN GROUPING(CarTable.Make) = 1 THEN 'All Makes'
                ELSE CarTable.Make
           END AS Make,
           CASE WHEN GROUPING(CarTable.Model) = 1 THEN 'All Models'
                ELSE CarTable.Model
           END AS Model,
           CASE WHEN GROUPING(CarTable.FuelType) = 1 THEN 'All FuelTypes'
                ELSE CarTable.FuelType
           END AS FuelType,
           COUNT(ID) AS RowCounts
    FROM CarTable
    WHERE (Year > 1999)
      AND (Year < 2008)
    GROUP BY Year, Make, Model, FuelType
    WITH ROLLUP

    Any recommendations on how to do it differently?
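
    For contrast, and only as a sketch assuming SQL Server 2008 or later (GROUPING SETS is not available before that), here is what a per-column variant of the same query could look like. Unlike ROLLUP, which only subtotals along the GROUP BY hierarchy, each grouping set below computes an independent total for every value of one column:

    SELECT CarTable.[Year],
           CarTable.Make,
           CarTable.Model,
           CarTable.FuelType,
           COUNT(ID) AS RowCounts
    FROM CarTable
    WHERE (Year > 1999)
      AND (Year < 2008)
    GROUP BY GROUPING SETS ( (CarTable.[Year]), (CarTable.Make), (CarTable.Model), (CarTable.FuelType) )

    Columns that are not part of the current grouping set come back as NULL, so each output row is "one value of one variable, plus its count" rather than a rolled-up combination.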

  • I think that what you may really need here is something called a dynamic dimensional model, or a dynamically updated cube. However, the details and statistics are really important for something like this. What you have described so far would have 15 dimensions; can you tell us how many values each of these 15 dimensions has (on average)? Also, you still haven't answered my previous questions about your load profile:

    So it would be helpful if you could fill in some of this information: How many rows are in the source table (#CarData) and how much space is it occupying? Is changing this table an option? How many transactions per minute do you get, or are you expecting, and how frequently is this specific procedure executed (during peak hours, for both)?

    [font="Times New Roman"]-- RBarryYoung[/font], [font="Times New Roman"] (302)375-0451[/font] blog: MovingSQL.com, Twitter: @RBarryYoung[font="Arial Black"]
    Proactive Performance Solutions, Inc.
    [/font]
    [font="Verdana"] "Performance is our middle name."[/font]

  • I just had another idea... maybe it can lead you on a new path...

    How about using temp tables to build the result set, but with a twist:

    First: analyse all the possible filters and figure out which ones are the most selective, and index those on the base tables (keep in mind that you can use covering indexes there to cover multiple filters at once).

    Then do this next part in that order, starting with the most selective:

    INSERT INTO #temp (Columns) SELECT Columns FROM BaseTable WHERE CriterionOne

    Now go back to #temp and delete the non-required data from there (having built the correct indexes on it first).

    Using this approach might give you the chance to work on an intermediate dataset 100 times smaller than the base table(s).

    I'm well aware that much work could go into this... but it might get the job done with awesome performance. A rough sketch of the idea follows below.
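
    A rough sketch of that flow (the parameter values, column names, and the choice of "most selective" filter are all invented here; the real ones would come from your own selectivity analysis):

    DECLARE @Model varchar(50),
            @YearFrom int,
            @YearTo int,
            @FuelType varchar(20);
    SET @Model    = 'SomeModel';   -- assumed to be the most selective filter
    SET @YearFrom = 2000;
    SET @YearTo   = 2007;
    SET @FuelType = NULL;          -- NULL = not filtered on

    -- 1. Seed the temp table using only the most selective, indexed predicate.
    SELECT ID, [Year], Make, Model, FuelType
    INTO #Candidates
    FROM dbo.CarTable
    WHERE Model = @Model;

    CREATE CLUSTERED INDEX IX_Candidates ON #Candidates (ID);

    -- 2. Whittle the much smaller intermediate set down with the remaining filters.
    DELETE FROM #Candidates
    WHERE ([Year] NOT BETWEEN @YearFrom AND @YearTo)
       OR (@FuelType IS NOT NULL AND FuelType <> @FuelType);

    -- 3. Run the grouped counts against the reduced set.
    SELECT Make, COUNT(*) AS RowCounts
    FROM #Candidates
    GROUP BY Make;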

  • Here's another wild idea. Is there really any reason to use a temp table? How about sticking the results into a real table with a unique ID for that particular result set and then re-querying for the grouped results? A final DELETE statement could be executed at the end to clean up the table. Would transaction logs be handled any differently?
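
    A minimal sketch of that pattern (the SearchResults table, its columns, and the GUID key are all invented here for illustration):

    CREATE TABLE dbo.SearchResults
    (
        ResultSetID uniqueidentifier NOT NULL,
        ID          int              NOT NULL,
        [Year]      int              NULL,
        Make        varchar(50)      NULL,
        Model       varchar(50)      NULL,
        FuelType    varchar(20)      NULL
    );

    DECLARE @ResultSetID uniqueidentifier;
    SET @ResultSetID = NEWID();

    -- Tag this run's rows with their own result-set ID.
    INSERT INTO dbo.SearchResults (ResultSetID, ID, [Year], Make, Model, FuelType)
    SELECT @ResultSetID, ID, [Year], Make, Model, FuelType
    FROM dbo.CarTable
    WHERE (Year > 1999) AND (Year < 2008);

    -- Re-query the grouped results for just this run.
    SELECT Make, COUNT(*) AS RowCounts
    FROM dbo.SearchResults
    WHERE ResultSetID = @ResultSetID
    GROUP BY Make;

    -- Final cleanup of this run's rows only.
    DELETE FROM dbo.SearchResults
    WHERE ResultSetID = @ResultSetID;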

  • I'm just guessing here... but aren't #temp table transactions NOT logged?

    One way to use your idea would be to create a database just to hold those results and set its recovery model to SIMPLE, so that the log is not an issue.
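
    That part at least is plain T-SQL (the database name is made up here):

    -- Hypothetical scratch database for intermediate search results; SIMPLE
    -- recovery keeps the log from growing between checkpoints.
    CREATE DATABASE SearchScratch;
    ALTER DATABASE SearchScratch SET RECOVERY SIMPLE;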
