Unnecessary SORT operation

  • OK, I will try, but here's my problem (and I'm hoping you can suggest a solution!).

    The reason I use views is that the views bring back values that are used to select the rows that a user is allowed to view. For example, TestUser logs in, runs "select * from MyView", and only a subset of data will be returned. Let's say that TestUser can view data from supplier S1 with permissions P1 and P2.

    At the same time if SecondTestUser logs in and runs "select * from MyView" they will get a very different subset of data since they have different permissions.

    However, these values can be changed by a security admin, who will use an application to set them. So, in order to preserve the dynamic nature of the security permissions, I'm using views (a sketch of the general pattern follows below).
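    A minimal sketch of that pattern, assuming hypothetical object names (MyData for the base data, UserPermissions for the admin-maintained security table):

        -- Hypothetical names throughout; the view filters rows by the
        -- caller's entries in the security table, so permission changes
        -- take effect on the very next SELECT.
        CREATE VIEW dbo.MyView
        AS
        SELECT  d.*
        FROM    dbo.MyData AS d
        JOIN    dbo.UserPermissions AS up
                ON  up.SupplierID   = d.SupplierID
                AND up.PermissionID = d.PermissionID
        WHERE   up.LoginName = SUSER_SNAME();  -- current login, e.g. TestUser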

  • There's no magic here. This needs to be rewritten to NOT use 1 trillion tables (if possible).

    Performance is not magic; it's doing as little work as possible (server work, not human dev work).

  • Didn't realise 1 table and 2 views = 1 trillion tables. Looks like I'll just have to stick with what I have.

  • feersum_endjinn (11/14/2011)


    Didn't realise 1 table and 2 views = 1 trillion tables. Looks like I'll just have to stick with what I have.

    Just imagine what's happening.

    You don't have 1 view & 3 tables; you have at LEAST 9 tables being touched. That's why it takes so much time.

    You have no fewer than 9 table & CI scans.

    The less you do, the faster it can be done. As simple as that.

  • Ninja - I tested your theory by temporarily creating two tables to hold the data that the views returned (roughly as sketched below) and, as you said, it solved the problem. The execution plan now has just 10 steps and the query executes in ~50 seconds. This I can live with!

    However, I still don't understand why the optimiser decided to do a sort on 1.8 million records when I used the views, when in essence the views returned the same number of rows as the two tables I set up.
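    Concretely, the test amounted to something like this (a sketch; the snapshot table and second view names are hypothetical):

        -- Materialise each view's result into a real table, then point
        -- the big query at the tables instead of the views.
        SELECT * INTO dbo.MyView_Snapshot  FROM dbo.MyView;
        SELECT * INTO dbo.MyView2_Snapshot FROM dbo.MyView2;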

  • feersum_endjinn (11/16/2011)


    Ninja - I tested your theory by temporarily creating two tables to hold the data that the views returned and, as you said, it solved the problem. The execution plan now has just 10 steps and the query executes in ~50 seconds. This I can live with!

    However, I still don't understand why the optimiser decided to do a sort on 1.8 million records when I used the views, when in essence the views returned the same number of rows as the two tables I set up.

    When's the last time you did an update stats with fullscan?

    Bad plans are usually because of stale stats, a very complex query, or bad parameter sniffing.
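    For reference, forcing a full-scan statistics update looks like this (table names are placeholders for whatever sits behind the views):

        -- Rebuild every statistic on the table from a full scan of the
        -- data rather than a sample.
        UPDATE STATISTICS dbo.MyData WITH FULLSCAN;
        UPDATE STATISTICS dbo.UserPermissions WITH FULLSCAN;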

  • Data is imported into these tables via an SSIS package and I have an index rebuild on all the tables as the penultimate step - would this not update the stats as well?

  • feersum_endjinn (11/16/2011)


    Data is imported into these tables via an SSIS package and I have an index rebuild on all the tables as the penultimate step - would this not update the stats as well?

    Only the stats on the indexed columns.

    However, if you truncate + reload, then the first query hitting that table will update the stats. I prefer to force it if I have the time (see the sketch below).
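    In other words, something along these lines as the final SSIS steps (hypothetical table name):

        ALTER INDEX ALL ON dbo.MyData REBUILD;       -- refreshes index stats only
        UPDATE STATISTICS dbo.MyData WITH FULLSCAN;  -- refreshes column stats too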

  • Oh okay. Well, the database option "Auto Update Statistics" is set to "true" so I'm hoping it will update all index-based stats. Anyway, now I have a possible solution to the problem. I just need to figure out how to actually replace the views with tables and not lose the functionality - I think a LOGON trigger or an UPDATE trigger on the security tables is the way to go (roughly sketched below).

    Thanks for your help.
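    If one went the trigger route, the UPDATE-trigger variant might look roughly like this (purely a sketch with hypothetical names; note the next reply argues against duplicating the data at all):

        -- Whenever the security admin changes permissions, rebuild the
        -- snapshot table that replaced the view.
        CREATE TRIGGER dbo.trg_RefreshSnapshot
        ON dbo.UserPermissions
        AFTER INSERT, UPDATE, DELETE
        AS
        BEGIN
            SET NOCOUNT ON;
            DELETE FROM dbo.MyView_Snapshot;
            INSERT INTO dbo.MyView_Snapshot
            SELECT * FROM dbo.MyView;  -- same SELECT the view used
        END;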

  • feersum_endjinn (11/16/2011)


    Oh okay. Well, the database option "Auto Update Statistics" is set to "true" so I'm hoping it will update all index-based stats. Anyway, now I have a possible solution to the problem. I just need to figure out how to actually replace the views with tables and not lose the functionality - I think a LOGON trigger or an UPDATE trigger on the security tables is the way to go.

    Thanks for your help.

    No need to duplicate the data. Rewrite 1 query that does specifically what needs to be done here. Don't use any views, only base tables.

    If it continues to be slow, divide and conquer (smaller steps with results in temp tables - see the sketch below).

    I have reports that run on 50 tables here and they don't have any issue perf-wise. They just have to be split into smaller steps.
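    A minimal sketch of that divide-and-conquer shape, with hypothetical names: stage the small, heavily filtered piece first, then join the staged result against the base table.

        -- Step 1: resolve the current user's permissions into a temp table.
        SELECT  up.SupplierID, up.PermissionID
        INTO    #MyPerms
        FROM    dbo.UserPermissions AS up
        WHERE   up.LoginName = SUSER_SNAME();

        -- Step 2: join the base table against the small staged set.
        SELECT  d.*
        FROM    dbo.MyData AS d
        JOIN    #MyPerms AS p
                ON  p.SupplierID   = d.SupplierID
                AND p.PermissionID = d.PermissionID;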
