Slow Queries

  • Well it looks like I have resolved the slow select. The problem was caused by the following lines:

    where FT.FT_FISCAL_YEAR=FISCAL_YYYY

    and FT.FT_FISCAL_MONTH=FISCAL_MM

    in the first half of the union and the corresponding lines in the second half:

    where OP.OF_FISCAL_YEAR=FISCAL_YYYY

    and OP.OF_FISCAL_MONTH=FISCAL_MM

    The fields FISCAL_YYYY and FISCAL_MM are calculated fields returned by the view vfsdates. The problem is that the fields returned were integer fields, whereas the fields FT.FT_FISCAL_YEAR and OP.OF_FISCAL_YEAR are smallint and FT.FT_FISCAL_MONTH and OP.OF_FISCAL_MONTH are tinyint. These fields are the high-order columns in clustered indexes on the FT and OP tables. Apparently because of the type mismatch, the query optimizer decided on a full index scan instead of an index seek on the clustered indexes. I tried various methods to correct the problem, and every method which provided a type-matched clause reduced the execution time by a factor of 100. Specifying a literal value (e.g. FT.FT_FISCAL_YEAR=2003) also resulted in fast execution. Any method which retained the type mismatch resulted in slow execution. I finally decided that the best solution was a change to the vfsdates view which does an explicit convert from integer to smallint for FISCAL_YYYY and to tinyint for FISCAL_MM.
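    For reference, the change to vfsdates is roughly along these lines (a simplified sketch; the real view derives the fiscal period from the parameter tables rather than from getdate(), so the expressions here are only stand-ins, the point being the explicit converts):

    create view vfsdates
    as
    select convert(smallint, year(getdate()))  as FISCAL_YYYY,  -- was int; now matches the smallint year columns
           convert(tinyint,  month(getdate())) as FISCAL_MM     -- was int; now matches the tinyint month columns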

    I would have to guess that the query optimizer will not convert the integer to smallint and tinyint for the query, but insists on converting the smallint and tinyint from each row of the large table to int before comparing it to the int value. This method would ensure that values too large for the smallint or tinyint fields were properly rejected. However, this seems to me to be an invalid analysis. The query optimizer should be able to use index seeks for values which fit in the smaller fields and suppress lookup altogether for values which are too large for the fields.
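    Put another way, here are the two forms of the clauses side by side; only the second lets the optimizer seek (this is the "type matched clause" approach mentioned above, as opposed to the view change I settled on):

    -- type mismatch: FISCAL_YYYY/FISCAL_MM come back as int, so the smallint/tinyint
    -- columns are converted up to int for every row and the clustered index is scanned
    where FT.FT_FISCAL_YEAR=FISCAL_YYYY
    and FT.FT_FISCAL_MONTH=FISCAL_MM

    -- types matched: convert the view fields down instead and the optimizer
    -- can seek on the high-order keys of the clustered index
    where FT.FT_FISCAL_YEAR=convert(smallint, FISCAL_YYYY)
    and FT.FT_FISCAL_MONTH=convert(tinyint, FISCAL_MM)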

    Many thanks to Antares and Scorpion (interesting that Antares is the "Heart" of the constellation Scorpius) for their efforts on my behalf. I'd be interested in some final thoughts on this topic.

  • LMAO.....That is quite interesting.

    I didn't have the time this last weekend to pursue this as I had hoped. I apologize. I VERY much believe that you should re-work those views, even in light of the conversion fix, which it sounds like brought your performance up to acceptable levels. It may not be worth the effort today, but I bet it comes up again if you don't.

    If you're still interested in some improvements, post the table schema, or e-mail it, and I'll spend some time on it in the evenings. It's easy enough to create the tables and fields to work with, but the datatypes are anybody's guess in some of these fields. I don't mind at all, and you would be amazed at the increases you can realize in this case.

  • The system is quite complicated and I am not sure it is worth sending copies of the tables out for further analysis (though I certainly don't mind if you are interested). I originally thought there was a problem because of the large discrepancy between the production and (new) development servers, not clueing in to the minor change that accidentally created a problem for the query optimizer.

    I will say that I am a fan of complex queries, and that I have some that are quite a bit more complex than this. I feel this way because I think it is more important to deliver the data the users want than to worry about overloading the server. Another reason is that I trust the query optimizer to do a good job if I can properly specify my query. In my experience, MSSQL can deliver very good performance on extremely complex queries provided it has a reasonable database design.

    As to the query in question, it is the basis of a "User Defined Reporting" system for financial information. The two main tables in the join provide balance forward and current month data respectively. The large number of items listed in the view is simply to give the users a choice of items to report. The actual user query would filter, sort, and report on a smaller number of items (usually under 10) selected from those available. I trust the query optimizer to optimize out the references to unused items, and I think it does well at it. The views underlying the main view (which include cross joins, etc.) should not be an issue. They are based on two parameter tables which have 1 row each and are used to simplify the selection of the transactions by fiscal period for reporting. These views are an example of where a quite complex set of views can simplify the user processing without adding any significant overhead (as long as I'm careful).
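    To make the parameter-table pattern concrete, here is a minimal sketch (the table and view names are invented just for this example; the real tables and views differ):

    -- one-row parameter table holding the fiscal period to report on
    create table rpt_period
    (
        FISCAL_YYYY smallint not null,   -- same types as the FT/OP key columns
        FISCAL_MM   tinyint  not null
    )

    insert rpt_period values (2003, 6)   -- the single row; values here are just examples
    go

    -- cross joining the one-row table into a view means every report picks up
    -- the selected period without repeating literals in the user queries
    create view v_ft_current_period
    as
    select FT.*
    from FT
    cross join rpt_period P
    where FT.FT_FISCAL_YEAR=P.FISCAL_YYYY
    and FT.FT_FISCAL_MONTH=P.FISCAL_MM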

  • Well-done guys, interesting and worthwhile to read!

  • If you're happy, then good enough. I have yet to find the query optimizer perform as well as I can myself, though. Try not to place too much faith in it, and keep in mind that it is generic and designed for the masses. I am not saying it doesn't do a good job, as in most cases it's OK, but it has a long way to go before it is as good as a well-trained DBA. Just as an example, your very issue concerning the conversions wasn't handled by the query optimizer.

    I find that a lot of people rely on the index wizard as well, and again, in my opinion, too much faith shouldn't be placed there either. I look at it as similar to the difference between a moped and a Jaguar. Sure, the moped will get you there, but wouldn't you rather drive a Jag......(grin) (Of course, the Jag does have a much higher maintenance cost.....)

  • Yes, sorry, I didn't see enough detail myself to offer good offhand improvements either. And I agree with Scorpion: even though the tools do a good job at the mass level, you should strive to understand what the output means, why it does what it does, and what affects it most. I don't like the Index Wizard, as I tend to have more detail on what affects the system overall, and throwing indexes at a problem doesn't always mean you actually fixed the problem, just that you may have extended the time for it to show up again. Yes, good indexes are essential, but so is good coding and reviewing the poorest-performing code to try to improve it.
