October 1, 2019 at 4:25 pm
Hello, I have a SQL Server database where we have created some views based on dim and fact tables. I need to build an SSAS tabular model on top of these tables and views, but one of the views takes 1.5 hours to run as a SQL query (in SSMS). I need to use this same view in my SSAS tabular model, and 1.5 hours is not acceptable. The view is made up of a lot of table joins. So: 1) if I bring all the tables used in this view into my SSAS tabular model, I am not sure how to join them all, apply the WHERE clauses inside SSAS, and build something similar to my view. Can you help me with that? Or 2) is there a better, easier way to achieve this? Thanks.
October 2, 2019 at 2:27 pm
IMO, the tabular model should be kept simple. You want it to be easily consumed by BI users without a ton of modeling transformations. I would look hard at the execution plan to see why the view is so slow. Start there. What do you find? After you check that, let me know. There are options to make it faster.
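If it helps, here's a minimal sketch of how I'd capture that in SSMS (vw_YourView is just a placeholder for your view's name):

-- Surface per-table reads and CPU/elapsed time for the slow view.
-- vw_YourView is a placeholder; substitute your actual view name.
SET STATISTICS IO ON;
SET STATISTICS TIME ON;

-- Turn on "Include Actual Execution Plan" (Ctrl+M) before running.
SELECT COUNT(*)              -- or SELECT TOP (1000) * to sample rows
FROM dbo.vw_YourView;

SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;

The Messages tab will show logical reads per table, which usually points straight at the table doing the damage.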
October 2, 2019 at 3:01 pm
Thanks a lot @heb1014. When I look at the view, it joins around 10 tables, and one of the tables has 3 billion rows and is partitioned. It has only one clustered index. The other tables are not large. The view produces 1.9 million rows. I am not sure about the best practices around partitioned tables and how to improve performance, etc.
October 2, 2019 at 3:11 pm
That's a decent size table 😉
I would shove aside any intimidation I had about such a large table and press forward with tuning my query. I'd probably start by getting a restored copy of the database on a server where I could do anything I wanted. I don't want my work to interfere with other workloads, and I want to be able to change things without any concern.
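A rough sketch of that restore, where every name and path below is a placeholder you'd swap for your own:

-- Restore a private copy of the database for tuning work.
-- Database name, backup path, and file paths are all placeholders.
-- Run RESTORE FILELISTONLY FROM DISK = N'...' first to confirm
-- the logical file names used in the MOVE clauses.
RESTORE DATABASE SalesDW_Tuning
FROM DISK = N'\\backupshare\SalesDW.bak'
WITH MOVE N'SalesDW'     TO N'D:\Data\SalesDW_Tuning.mdf',
     MOVE N'SalesDW_log' TO N'L:\Logs\SalesDW_Tuning.ldf',
     STATS = 10;  -- progress message every 10 percent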
Having a single clustered index is probably an issue. Maybe the main application doesn't need more than that, but other queries (like those loading Tabular models) likely do.
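As an illustration only (the table and column names here are made up, not from your schema), a nonclustered index keyed on the join/filter columns, with the selected columns in INCLUDE, can let the view read a narrow index instead of scanning the whole clustered index:

-- Hypothetical: dbo.FactSales and its columns are placeholders.
-- Key the index on the columns the view joins/filters on, and
-- INCLUDE the columns it selects, so the query never touches the
-- 3-billion-row clustered index.
CREATE NONCLUSTERED INDEX IX_FactSales_DateKey_CustomerKey
ON dbo.FactSales (DateKey, CustomerKey)
INCLUDE (SalesAmount, Quantity)
WITH (DATA_COMPRESSION = PAGE, ONLINE = ON);  -- ONLINE = ON needs Enterprise edition

On a partitioned table the new index will be partition-aligned by default, which keeps partition maintenance working.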
After you're sure your work won't impact others, run the query and look at the execution plan and query stats. Just off the top of my head, I'd start small and iterate.
Main point: keep working at it until you make it faster. I've seen very small adjustments (like adding a filtered non-clustered index) yield MASSIVE improvements.
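For example (again, everything named here is hypothetical), if the view only ever reads a recent slice of the big table, a filtered index over just that slice can be a tiny fraction of the size of a full index:

-- Hypothetical filtered index: indexes only the rows the view reads.
-- Table, columns, and the date predicate are placeholders.
-- The WHERE filter must cover the view's own predicate for the
-- optimizer to use the index.
CREATE NONCLUSTERED INDEX IX_FactSales_CurrentYear
ON dbo.FactSales (CustomerKey)
INCLUDE (SalesAmount)
WHERE DateKey >= 20190101;

One caveat I'd double-check on a partitioned table: how a filtered index interacts with any partition-switching maintenance you do.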