April 22, 2005 at 2:08 am
Hi!
Here's the situation:
I'm using Analysis Services 2000 Enterprise Edition. I have a MOLAP cube set up in my OLAP database, and its dimensions are all shared. Initial processing of the cube was fine.
Experimenting further, I then updated the dimension tables. For example, dimension DA uses dimension table TA; DA has a level L with its member key mapped to TA.F1 and its member name mapped to TA.F2. I updated TA.F2 without updating TA.F1, and I also added a new record to TA. After re-processing the cube (with the "refresh data" option and incremental update of dimensions enabled), I found that the member name column value was not updated. However, DA did pick up the new record I added to TA. By the way, I use the built-in cube data browser of Analysis Services Manager to check the results.
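To make the change concrete, here is roughly what I ran against the dimension table (just a sketch in Python with pyodbc; the connection string and the key values are placeholders, but TA, F1, and F2 play the roles described above):
```python
# Sketch of the dimension-table changes described above (Python + pyodbc).
# Connection details and the key values 42/43 are placeholders.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={SQL Server};SERVER=MYSERVER;DATABASE=MyWarehouse;"
    "Trusted_Connection=yes;"
)
cur = conn.cursor()

# Update a member NAME column (F2) while leaving the member KEY (F1) alone.
cur.execute("UPDATE TA SET F2 = 'Renamed Member' WHERE F1 = 42")

# Add a brand-new dimension record.
cur.execute("INSERT INTO TA (F1, F2) VALUES (43, 'New Member')")

conn.commit()
conn.close()
```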
Here's the question:
Is this normal behavior? I mean, while I can expect new dimension table records to be added to the cube, is it expected that I can never update the values of a dimension level's member name columns? I understand that MOLAP cubes are not optimized to reflect changes that quickly. However, it's been about 24 hours (and n re-processes) now but the member name is still the one set during initial processing of the cube.
Please help and thanks in advance.
April 22, 2005 at 7:57 am
Hi,
I tried the same thing, but it got updated.
I have been using Analysis Services for some time and it has never shown such behavior. The dimension or cube gets updated as soon as you process (full or incremental) the cube.
My cubes are MOLAP as well, and they reflected the changes quickly.
One thing you can try is installing SP3 for SQL Server 2000.
April 26, 2005 at 4:17 am
Hi,
Thanks for the reply. I've solved the problem now, though not in the way you suggested. Neither of the two service packs (2 or 3) for SQL Server 2000 solved it. However, when I changed the cube update process so that it first rebuilds the dimensions before fully re-processing the cube, the changes were finally reflected.
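For reference, the processing order that fixed it looks something like this when scripted against DSO through COM (just a sketch; server, database, and cube names are placeholders, and processFull = 1 is my reading of the DSO ProcessTypes enumeration):
```python
# Sketch of the working processing order: rebuild the shared dimensions
# first, then fully re-process the cube (DSO via COM, assumes pywin32).
import win32com.client

PROCESS_FULL = 1  # DSO ProcessTypes: processFull (assumed value)

dso = win32com.client.Dispatch("DSO.Server")
dso.Connect("MYOLAPSERVER")          # placeholder server name
db = dso.MDStores.Item("MyOlapDB")   # placeholder database name

# 1. Fully rebuild every shared dimension first...
for i in range(1, db.Dimensions.Count + 1):
    db.Dimensions.Item(i).Process(PROCESS_FULL)

# 2. ...then fully re-process the cube.
db.MDStores.Item("MyCube").Process(PROCESS_FULL)

dso.CloseServer()
```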
My concern now is performance. I'm guessing this new implementation is a lot more resource intensive than my earlier one (just a refresh of cube data with an incremental update of the dimensions). However, since the update will be a batch job scheduled at an unobtrusive time, I guess this performance trade-off should be acceptable, right?
Any opinions?
April 26, 2005 at 9:09 am
Can you give me some details about the number of rows in your dimension tables and fact table? And how many cubes use these shared dimensions?
I had 2 cubes and 6 shared dimensions; 5 of the dimensions had about 100 rows each, the 6th had 1 million rows, and the fact tables had about 28 million rows.
I had one DTS package that processed both cubes. The first cube's update was a full process with the "incrementally update dimensions" option on; in the second cube that option was off.
If you have other cubes that use these shared dimensions, performance will not suffer; but if you have only one cube, you should consider private dimensions.
April 26, 2005 at 8:29 pm
Currently, I have 1 cube and 6 shared dimensions. The fact table has at least 2 million rows. The smallest dimension table has 600 rows while the largest has at least 10 million rows.
What would be the benefit if I make the dimensions private instead?
Also, going back to my earlier post, do you think that the problem could be solved (without having to rebuild the dimensions on every update) if I set them to private dimensions instead?
April 27, 2005 at 1:28 am
One reason why people use private dimensions is to decrease the reprocessing of cubes.
If a shared dimension is fully reprocessed, any cube that uses it loses its data and has to be fully re-processed as well. This is not the case with "incremental" processing of a shared dimension: incremental processing adds new members, but does not re-organize existing ones.
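In DSO terms, the difference boils down to which ProcessTypes value you pass to Dimension.Process (a sketch; I'm assuming processDefault = 0 and processFull = 1, and the names are placeholders):
```python
# Sketch: incremental vs. full processing of a shared dimension (DSO via COM).
import win32com.client

dso = win32com.client.Dispatch("DSO.Server")
dso.Connect("MYOLAPSERVER")  # placeholder
dim = dso.MDStores.Item("MyOlapDB").Dimensions.Item("DA")

# Incremental (processDefault = 0): adds NEW members only; existing
# members are not re-organized and dependent cubes keep their data.
dim.Process(0)

# Full (processFull = 1): rebuilds the dimension from scratch, but every
# cube that uses it loses its data and must be fully re-processed too.
# dim.Process(1)

dso.CloseServer()
```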
Since you said you have only one cube and have to rebuild the dimensions every time, I suggested private dimensions.
I tried my cube with private dimensions; it took nearly the same time to process. But 10 million rows is quite big, and fully processing it every time is a big question mark.
Shared dimensions, on the other hand, are in effect "conformed" dimensions that can be used across the organization. In most cases you will want to create dimensions as shared rather than private; sharing dimensions across cubes keeps maintenance low and the data conformed.
Once I had a problem where I could not browse a shared dimension's data after processing it, but I could see the data once I processed the cube. That dimension had a large number of rows.
So, if possible, try an incremental update of the dimension, then process the cube, and check whether you can see the updated record.
April 27, 2005 at 8:53 pm
"So if possible try incremental update of dimension and then process the cube and then check whether you can see the updated record."
I've tried it already and it doesn't work. Only new dimension records are added. Updates I make on existing dimension records are not reflected.
I also tried making the dimensions private. That didn't work either. In either case (shared or private dimensions), only a full re-processing of the cube (and of the dimensions, where shared dimensions are used) solves the problem.
"I tried my cube with private dimension; it was taking nearly same time to process."
With this in mind, I think I'll just keep the dimensions shared as they are right now.
April 28, 2005 at 12:50 am
Apologies if you've already stated or checked it, but are your dims marked as 'changing'?
Steve.
April 28, 2005 at 5:50 am
Hmmm... no, they're not. I considered defining them as such, but:
(1) From what I understood of the documentation, this only optimizes the dimensions for frequent updates to the underlying tables, so I thought it would have no impact on whether the updates were reflected in the first place. Also,
(2) If I remember correctly, this option forces the dimensions to be stored as ROLAP. Unfortunately, I could not get ROLAP to work since I kept running into the "Invalid use of Null" error during dimension processing. I attempted (half-heartedly) to implement the requirements stated in the documentation on using Indexed Views, to no avail. I say "half-heartedly" because I didn't really want to take the ROLAP option anyway, since I've read that ROLAP performs poorly in terms of query response time.
Please correct me if my conclusions were wrong.
April 28, 2005 at 5:41 pm
I agree with your conclusion (1) re: it shouldn't make a difference whether the dimension is changing or non-changing. Reading the doco on dimension processing, and incremental processing in particular, it refers to this being a good option for the inclusion of new members and/or the changing of existing properties, and you would have thought that the name (not the key) is a property that could be changed.
You definitely don't need to set a changing dimension to ROLAP. If you check the BOL entry on changing dimensions, it indicates that you only need to change some of the uniqueness settings for the top and lowest levels and the aggregation setting.
Can I ask some potentially stupid questions: have you ensured that the name and key fields are set to two different table columns? Do you have a dev environment where you could do a test, something like changing several member names and seeing if any of these come through as changed? You really shouldn't need to change the dim type to changing; it should pick up the new names accordingly. So either there is something simple that's been missed or a problem that no-one else has encountered yet?
Steve.
April 29, 2005 at 12:48 am
"Definitely don't need to have a changing set to ROLAP. If you check the BOL entry re: changing dimensions, it indicates that you only need to change some of the uniqueness settings for the top and lowest levels and the aggregation setting."
Yep, you're right; sorry for the wrong info. Anyway, I tried it a while ago and it seemed to solve the problem too (see below). Although now my concerns are: (1) one of my dimensions has n-plicate members at the lowest level, and (2) according to the documentation, changing dimensions respond slowly to queries and should therefore only be used when it is important that users see changes to the underlying tables as quickly as possible.
"you have ensured that the name and key fields are set to the two different table columns?"
Yes, I have.
"Do you have a dev environment where you could do a test, something like changing several member names and seeing if any of these come through as changed?"
Yes, I tried it again a while ago and none of the updates got through. But when I set one dimension to "changing", merely processing the cube, with "refresh data" and "incremental update" options selected, did the job.
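Scripted, that lighter-weight step would be something like this (a sketch; I'm assuming processRefreshData = 2 in the DSO ProcessTypes enumeration, with placeholder names):
```python
# Sketch: with the dimension marked 'changing', a data refresh of the
# cube was enough to pick up the renamed members (DSO via COM).
import win32com.client

PROCESS_REFRESH_DATA = 2  # DSO ProcessTypes: processRefreshData (assumed value)

dso = win32com.client.Dispatch("DSO.Server")
dso.Connect("MYOLAPSERVER")  # placeholder
db = dso.MDStores.Item("MyOlapDB")
db.MDStores.Item("MyCube").Process(PROCESS_REFRESH_DATA)
dso.CloseServer()
```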
"either there is something simple that's been missed or a problem that no-one else has encountered yet?"
I'm really totally new to Analysis Services and have only the documentation and this forum as sources of information. It's possible that I might have missed something. I keep thinking that this might only be a set-up problem or something.
April 29, 2005 at 1:31 am
With the dims not set to 'changing', have you tried processing them (incrementally) as a separate activity to processing the cube (ie right-click on the dim and then process, rather than RC on the cube and process)?
I didn't think you needed to set them to changing, but even with changing you shouldn't be getting dupes. I thought the member-uniqueness settings within the levels would have covered that off.
Steve.
April 29, 2005 at 3:46 am
"WIth the dims not set to 'changing' , have you tried processing them (incrementally) as a seperate activity to processing the cube (ie right click on dim and then process rather than RC on cube and process)?"
Yep, I tried this already and it also didn't work.
"I didn't think that you needed to set them to changing, but even with the changing you shouldn't be getting dupes. I thought that the setting of uniqueness of the members within the levels would have covered that off."
Sorry for the confusion. What I meant was that the lowest-level members of one of the dimensions are n-plicated due to the way the system is designed. The "Member Keys Unique" and "Member Names Unique" options are set to false on all levels of all the dimensions (except the highest level of each dimension, where "Member Keys Unique" is set to true). I think these are the defaults given by the New Dimension Wizard.
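For completeness, these are the settings I mean; dumped through DSO they would look something like this (a sketch; AreMemberKeysUnique and AreMemberNamesUnique are the DSO Level properties behind those options, with placeholder names again):
```python
# Sketch: listing the uniqueness flags on each level of a dimension
# (DSO via COM, assumes pywin32; server/database/dimension names are
# placeholders).
import win32com.client

dso = win32com.client.Dispatch("DSO.Server")
dso.Connect("MYOLAPSERVER")
dim = dso.MDStores.Item("MyOlapDB").Dimensions.Item("DA")

for i in range(1, dim.Levels.Count + 1):
    lvl = dim.Levels.Item(i)
    print(lvl.Name,
          "MemberKeysUnique:", lvl.AreMemberKeysUnique,
          "MemberNamesUnique:", lvl.AreMemberNamesUnique)

dso.CloseServer()
```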
April 29, 2005 at 2:02 pm
We have some 'changing' dimensions here. The only issue is that every time you process the dimension, AS has to recalculate the aggregations of each cube containing the dimension; depending on how big your cube is and how many aggregations you designed, this can have a noticeable performance impact on the server.