March 14, 2008 at 12:01 am
So here's the issue
We have a cube and the data mart for this cube on our server and we need to free up some space.
What we planned to do is to leave the cube as is and truncate the tables within the data mart.
After this, new data will be loaded into the data mart, and we want the new data being loaded to end up in the same cube alongside what the cube already has in it.
Tried doing this, and as long as the cube is not processed I can query the old data set (as expected), but as soon as processing is done, all the dimensions get refreshed with the values coming from the new data set and I lose all my previous values for the dimensions.
Any suggestions or alternative approaches are welcome.
Thanks
Jay
March 14, 2008 at 9:03 am
I'm not sure if this will work, but when you process the cube I believe the default setting is 'Process Full'. I think there is also a setting for 'Process Update', but I'm not completely sure. It may be something like this you're after. 'Process Full' recreates the cube, so any old data will be lost.
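If it helps, a Process Update can be sent to the server as an XMLA command, something like the sketch below. The database and dimension IDs here are placeholders, not taken from your setup, and I haven't tested this against your cube:

```xml
<!-- Hedged XMLA sketch of a Process Update on one dimension.
     "SalesDW" and "Dim Customer" are placeholder IDs. -->
<Process xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Object>
    <DatabaseID>SalesDW</DatabaseID>
    <DimensionID>Dim Customer</DimensionID>
  </Object>
  <Type>ProcessUpdate</Type>
</Process>
```

Unlike Process Full, Process Update re-reads the dimension tables and applies inserts, updates and deletes to the existing dimension rather than rebuilding it from scratch.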
Gethyn Ellis
www.gethynellis.com
March 14, 2008 at 9:08 am
This link may help too...
http://technet.microsoft.com/en-us/library/ms189431.aspx
Gethyn Ellis
www.gethynellis.com
March 20, 2008 at 6:17 am
Hi Ellis
I tried Process Full as well as Process Update, and in either case the data gets refreshed. I came to the conclusion that the way it works is that SSAS queries the underlying tables using SELECT statements. When it does this after the truncation, it (as expected) returns NULL or the new data, and it is these values that get stored in the dimensions once processing completes.
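To illustrate what I mean, during dimension processing SSAS issues something roughly like the following against the data mart (table and column names below are placeholders, not captured from our server):

```sql
-- Roughly the kind of query SSAS issues when processing a dimension.
-- Placeholder names: dbo.DimCustomer, CustomerKey, CustomerName.
SELECT DISTINCT CustomerKey, CustomerName
FROM dbo.DimCustomer;

-- After the tables are truncated this returns no rows, so processing
-- ends up dropping the old members from the dimension.
```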
I am trying linked dimensions now, by storing my data on another server and querying off it.
Will post if something comes up
Jay
March 22, 2008 at 7:13 am
You might be able to accomplish what you're trying to do, but don't bother. There are way too many scenarios where you'll need to do a full process, and you'll need all of that data.
MSAS isn't a substitute for a database. It draws its data from a relational source. If you're trying to keep some historical data that you don't really need, consider backing up your current cube, fixing it at a point in time, and processing fresh from now on.
If you're trying to free up space, review the data types, indexes and other aspects of your database design. Make sure you've optimized those structures before simply dropping data.
If you're trying to speed up that database, examine your clustering and partition strategies. When a database is primarily used to load a cube, it should be clustered by your cube partitions (and possibly partitioned by time).
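As a sketch of that clustering/partitioning idea (all object and column names below are placeholders, not from your database):

```sql
-- Hedged T-SQL sketch: partition the fact table by month and cluster
-- it on the partitioning column, so each cube partition reads a
-- physically contiguous slice of rows.
-- Placeholder names: pfMonth, psMonth, dbo.FactSales, MonthKey.

CREATE PARTITION FUNCTION pfMonth (int)
    AS RANGE RIGHT FOR VALUES (200801, 200802, 200803);

CREATE PARTITION SCHEME psMonth
    AS PARTITION pfMonth ALL TO ([PRIMARY]);

CREATE CLUSTERED INDEX cixFactSales
    ON dbo.FactSales (MonthKey)
    ON psMonth (MonthKey);
```

Aligning your cube partitions with these table partitions means a Process on one cube partition only touches one slice of the fact table.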
If you're trying to save money, consider for a minute how cheap disk is. I'd rather get some cheap disk and move my oldest data to it (i.e. via partitions) than drop it out of the database.
March 23, 2008 at 10:32 pm
Hi Guys
The steps mentioned above have already been put into place, or are not applicable to the scenario we are facing.
Partitioning the cube along a month dimension, as well as increasing storage space, has been in place for quite some time. What we need to do now is to move this data into production without affecting the data we already have there.
Basically, we want to merge the data in two databases into a single SSAS cube, and once merged we want to refresh the underlying tables so that we don't have two databases from which the data is being processed.
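To illustrate the merge we're after, it could be sketched as a UNION ALL view over the two data marts that the cube's data source view reads from (all database and table names below are placeholders):

```sql
-- Hedged sketch: present both databases' fact tables to SSAS as one
-- source, so the cube processes the merged set from a single object.
-- Placeholder names: ArchiveDW, CurrentDW, dbo.FactSales.
CREATE VIEW dbo.vwFactSalesMerged
AS
SELECT * FROM ArchiveDW.dbo.FactSales
UNION ALL
SELECT * FROM CurrentDW.dbo.FactSales;
```

Once the cube has been fully processed from the merged view, the older source could in principle be retired, but that still leaves the original problem that any later full process needs all the rows available.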
I am running out of options here 🙂
Jay
March 24, 2008 at 4:03 am
How much data are we talking about here? I guess we'd need to understand more about your specific scenario.
Fundamentally, trying to manage your data where MSAS is the primary source isn't a good idea. That's not what it's for.
Unless you're into the terabyte range, storage is cheap.
Even then, you may be able to achieve your objectives by doing some summarization in the database.