Drill Through after recreate tables

  • Hi,

    I have a cube based on tables on a SQL Server database with the drill through enabled.

    The drill through works fine as long as I do not recreate the tables (the tables are recreated daily for performance reasons: drop, select into, create indexes). After the tables are recreated I need to:

    • Disable drill through.
    • Save cube.
    • Enable drill through.
    • Save cube.

    Does anyone have a better solution?

    The problem is that this process takes a long time to execute, whether done manually or applied through DSO.

    Thanks in advance.

  • How about: drop indexes, truncate table, insert into, create indexes?

    Doing this should be just as quick and won't affect your drill through.
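
    A minimal sketch of that reload cycle in T-SQL (the table, column, and index names here are hypothetical; the point is that the table itself is never dropped, so the cube's drill-through binding stays intact):

    ```sql
    -- Drop nonclustered indexes before the load so inserts are not slowed
    -- by index maintenance (SQL Server 2000 DROP INDEX syntax).
    DROP INDEX dbo.FactSales.IX_FactSales_SaleDate;

    -- TRUNCATE deallocates the pages and is minimally logged,
    -- unlike DELETE, which logs every row.
    TRUNCATE TABLE dbo.FactSales;

    -- Reload from the source; the table definition is unchanged.
    INSERT INTO dbo.FactSales (SaleDate, ProductID, Amount)
    SELECT SaleDate, ProductID, Amount
    FROM dbo.StagingSales;

    -- Rebuild the indexes once, after all rows are in place.
    CREATE INDEX IX_FactSales_SaleDate ON dbo.FactSales (SaleDate);
    ```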

    Question: do you reprocess the cube daily? The reason I ask is that drill through looks directly at the base table, so if you update the table but not the cube, the drill-through data could contradict what the cube reports.

  • The problem is that after 12 months (the cube holds information for one fiscal year) the fact table can grow to over 20 million rows. INSERT INTO is fully logged and would use a huge log, while SELECT INTO is minimally logged.

    Do you know whether enabling drill through and saving the cube is slower when there are aggregations in the cube?

  • You could try a partitioning strategy, i.e. partition the tables by month and create partitions on the cube that look at those tables; this would also improve cube response time. Another option could be to re-evaluate your recovery model: could you change it to simple? Or even keep the fact table in a separate database that uses the simple recovery model.
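
    If moving the fact table to its own database is not an option, one possible compromise (a sketch; the database and path names are made up) is to switch the recovery model only for the load window, so the bulk load and index builds are minimally logged:

    ```sql
    -- Minimal logging for SELECT INTO / CREATE INDEX during the load.
    ALTER DATABASE SalesDW SET RECOVERY BULK_LOGGED;

    -- ... run the daily reload here ...

    ALTER DATABASE SalesDW SET RECOVERY FULL;

    -- Back up the log afterwards: a log backup that contains bulk-logged
    -- operations cannot be used for point-in-time restore, so this closes
    -- the exposure window.
    BACKUP LOG SalesDW TO DISK = 'D:\Backups\SalesDW_log.bak';
    ```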

    I'm not too sure about your question on the drill through; could you provide more detail?

  • The cube has 12 partitions with data slices to improve retrieval response time. However, the relational table is just one, because in July I can still receive data for April.

    The question regarding aggregations having an impact on drill through is, I think, indirect. Saving the cube with aggregations is what is slow, and that has nothing to do with drill through. But since I have to save the cube twice (disable drill through, save; enable drill through, save), having aggregations delays the process.

  • A couple of questions:

    You've mentioned that for performance reasons you drop and recreate the tables (and indexes) each day, and that INSERT INTO (as per wildh) is not an option. Have you tried or considered a mid-point: dropping your indexes, doing the inserts, and then reapplying the indexes? What is your average daily record load?

    Re: your 'data in July for April': depending on the latency the business allows your data, you could still partition the base table (partitioned views) so that each month's data goes to a different table. This could make your inserts faster in the database and make processing faster for the cube (i.e. just process the appropriate cube partition, or partitions). If you check the timings on your data, you could find that processing the last three months' partitions gives you almost perfect results, and then maybe do a full process of all partitions at the weekend or at month end.
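
    A SQL Server 2000 style partitioned view along those lines might look like this (a sketch with invented table and column names; each month's rows live in their own table, and the CHECK constraint on the month key lets the optimizer touch only the relevant member table):

    ```sql
    -- One member table per month; the CHECK constraint defines the partition.
    CREATE TABLE dbo.FactSales_200404 (
        MonthKey int NOT NULL CHECK (MonthKey = 200404),
        SaleDate datetime NOT NULL,
        Amount   money    NOT NULL
    );
    CREATE TABLE dbo.FactSales_200405 (
        MonthKey int NOT NULL CHECK (MonthKey = 200405),
        SaleDate datetime NOT NULL,
        Amount   money    NOT NULL
    );
    GO
    -- The view unions the members; queries filtered on MonthKey are
    -- resolved against only the matching table. Each cube partition can
    -- point at its own member table (or at the view with a slice filter).
    CREATE VIEW dbo.FactSales AS
    SELECT MonthKey, SaleDate, Amount FROM dbo.FactSales_200404
    UNION ALL
    SELECT MonthKey, SaleDate, Amount FROM dbo.FactSales_200405;
    ```

    Late-arriving April rows then go into the April member table, and only the affected cube partition needs reprocessing.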


    Steve.
