May 4, 2005 at 6:22 am
We have created a Sales data warehouse using SQL Server 2000 Std. and Analysis Services but are having serious performance problems when we drill more than 4 levels deep into the data. The cubes are quite small, with the fact table containing only 130,000 rows and the largest dimension (Item) containing 9000 members. We have tried aggregating to both the highest level of storage and of performance with no change in the results. Watching performance on the server side shows that a drill into the 5th or deeper level is a mere blip on the processor... the queries drive the client-side CPU to 99% and eat up memory quickly. The query results typically take anywhere from 20 minutes to never displaying...
I understand a lot more detail may be needed to understand the structure of the data warehouse etc., but has anyone encountered a similar experience with Excel and PTS, and does anyone have suggestions to try?
btw, we are running with the latest service pack on SQL Server and the latest PTS in Excel 2002.
regards.
Steve
May 4, 2005 at 5:09 pm
Definitely noticed this as a problem; PTS can clag things up extremely well. Some things to check are:
i) I know you said this, but make sure that the client and server are both running the same PTS version. When you say 'latest SP on SQL Server', make sure this also means the latest SP on Analysis Services; they are two distinct installs. The client PTS install is included in the AS service pack too, so make sure your client has it installed.
ii) there are some settings you can change on the client connection string to (try to) push processing back to the server; if you check my previous post (http://www.sqlservercentral.com/forums/shwmessage.aspx?forumid=17&messageid=12598) you'll see the settings that made a difference for us (see the connection-string sketch after this list).
iii) are you using a lot of (any?) calculated members that will cause execution to occur on the client (see the calculated-member example after this list)? Again, Spofford's book 'MDX Solutions' (which, btw, they're working on a 2nd edition of) has details on what can cause this.
iv) lastly, how many 'nests' are you doing? When researching this issue I could cause the Foodmart:Sales cube to lock up and either never return or take >5 minutes to return just by doing some crazy queries (i.e. nesting 5 dims on rows and 5 dims on cols, like the nesting sketch after this list), so to an extent the size of the cube isn't an over-riding factor; the design of the queries plays a very large part. Bill Pearson (the third) has written several articles on Database Journal (or a similar web mag) that address trying to write queries for performance.
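For ii), the kind of thing I mean looks roughly like this (a sketch only: 'Execution Location' and 'Default Isolation Mode' are documented PTS connection string properties, but the server and catalog names are placeholders and the right values depend on your setup):

  Provider=MSOLAP.2;Data Source=YourASServer;Initial Catalog=SalesDW;
  Execution Location=3;Default Isolation Mode=1

Execution Location=3 asks PTS to execute the query on the server where it can (0 is the automatic default, 2 forces the client).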
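For iii), an example of what to look for (FoodMart names, illustration only). A calculated member defined in the query itself, like [Measures].[Profit] below, is the kind of thing PTS will typically evaluate on the client rather than on the server:

  WITH MEMBER [Measures].[Profit] AS
    '[Measures].[Store Sales] - [Measures].[Store Cost]'
  SELECT { [Measures].[Store Sales], [Measures].[Profit] } ON COLUMNS,
         [Product].[Product Family].Members ON ROWS
  FROM [Sales]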
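And for iv), this is the sort of 'crazy query' I mean; even on the small FoodMart Sales cube, nesting several dimensions with CROSSJOIN multiplies the axes out to a huge number of cells (illustration only):

  SELECT
    CROSSJOIN( [Gender].Members,
      CROSSJOIN( [Marital Status].Members, [Education Level].Members ) ) ON COLUMNS,
    CROSSJOIN( [Store].[Store Name].Members,
      [Product].[Product Name].Members ) ON ROWS
  FROM [Sales]
  WHERE ( [Measures].[Unit Sales] )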
It's also interesting to note that PTS goes bye-bye in AS2K5, leaving client-side caching to the client app developers.
HTH
Steve.
May 5, 2005 at 7:26 am
Another, simpler factor could just be your aggregations. If you do not have the necessary aggregations on your partition(s)/cube then the raw data will have to be queried to return results. One sign that this is happening is that I/O goes through the roof on the server while you are waiting.
May 5, 2005 at 5:42 pm
If you want to test this theory, set your logging of user queries to 1 (instead of the default of every 10), run the query/queries (check the log to ensure they ended up in there) and then use the usage-based aggregation wizard. This *should* then create the aggs based on the usage. You could confirm the aggs by using the viewing tool (a later version is available within SSABI) and change them as necessary.
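If you want to poke at the log directly, by default it's an Access database on the server (msmdqlog.mdb) with a QueryLog table in it; something like the query below shows what was recorded. (Column names here are from memory of the default schema and may differ slightly.)

  SELECT MSOLAP_Database, MSOLAP_Cube, Dataset, StartTime, Duration
  FROM QueryLog
  ORDER BY StartTime DESC;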
This cube sounds reasonably small, but this could be the problem if the dimension count and measure count are high. Keep in mind that, where it can, AS will use aggregations that get it close to the result and then aggregate those aggregates as required to get the final result.
Lastly, don't forget to check your performance counters to see where the service is getting its data from.
Steve.
June 28, 2005 at 11:50 am
Hi,
I am facing this problem in some situations where the OLAP server runs the query in less than 1 second (according to the query log) but Excel takes 50 minutes to refresh the data. During these 50 minutes the CPU usage on the client is 100%.
Some of the situations where this problem arises: when trying to use 2 or more dimensions with a large number of members; when the members have long descriptions; etc.
As posted in another forum (http://www.sqlservercentral.com/forums/shwmessage.aspx?forumid=17&messageid=194220#bm194691) I have heard of using XML/A instead of PTS. However, I do not know how to use it.
June 29, 2005 at 6:41 am
Hi Tiago,
Going to XMLA isn't really going to make a huge change for you, and depending on how you use it, it could take longer (converting cellsets to XML, then transmitting it on the line, and then you/your software needs to XSLT/transform the XML into a presentation format). With the current XMLA (i.e. for AS2K) it still uses PTS (PivotTable Services) under the covers; really, the XMLA bit is a web service wrapper for PTS.
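To give you an idea of what I mean by 'wrapper': a minimal XML/A Execute request looks roughly like this (assuming the XML for Analysis SDK is installed on the web server; the MDX statement, catalog and DataSourceInfo values are placeholders). The response comes back as a SOAP message that you then have to transform yourself:

  <SOAP-ENV:Envelope
      xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/">
    <SOAP-ENV:Body>
      <Execute xmlns="urn:schemas-microsoft-com:xml-analysis">
        <Command>
          <Statement>
            SELECT { [Measures].[Unit Sales] } ON COLUMNS
            FROM [Sales]
          </Statement>
        </Command>
        <Properties>
          <PropertyList>
            <DataSourceInfo>Provider=MSOLAP;Data Source=local</DataSourceInfo>
            <Catalog>FoodMart 2000</Catalog>
            <Format>Multidimensional</Format>
          </PropertyList>
        </Properties>
      </Execute>
    </SOAP-ENV:Body>
  </SOAP-ENV:Envelope>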
The '1 second on the server' piece is caused by queries either not being forced to run on the server, or queries that can't be run on the server - hence the client thrashing.
Have you looked at the presentation (link) I posted recently re: MDX tuning? It is meant to cover off the 'Execution Location' etc. properties, which can (in certain cases) get the query executed on the server. Just be aware that even though it may be processed on the server, that doesn't necessarily mean it will execute in vastly shorter times.
hope this helps,
Steve.
June 29, 2005 at 9:16 pm
Hi,
I don't have much experience with OLAP, but I faced the same problem.
The one thing I tried was to create a nonclustered index on every column of the dimension and fact tables.
No nonclustered indexes on the measures, though.
Then process the cube again and see the performance from PTS (a sketch of the indexing is below).
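For example, something like this in T-SQL (the table and column names here are just examples; use your own star schema names). These indexes help mostly when processing the cube, since that is when AS reads the relational tables:

  -- example star-schema names, one nonclustered index per key/attribute column
  CREATE NONCLUSTERED INDEX IX_FactSales_ItemKey ON dbo.FactSales (ItemKey);
  CREATE NONCLUSTERED INDEX IX_DimItem_ItemName ON dbo.DimItem (ItemName);
  -- measure columns (e.g. SalesAmount) are left unindexed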
hope this helps u
from
sufian