January 30, 2014 at 8:28 am
This is hopefully a quick and easy one to answer.
I have an SSAS database with one cube. When we process the whole database, it fails with duplicate key errors; when we process only the cube, it succeeds.
Why would this happen?
This works:
<Batch xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Parallel>
    <Process xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:ddl2="http://schemas.microsoft.com/analysisservices/2003/engine/2" xmlns:ddl2_2="http://schemas.microsoft.com/analysisservices/2003/engine/2/2" xmlns:ddl100_100="http://schemas.microsoft.com/analysisservices/2008/engine/100/100" xmlns:ddl200="http://schemas.microsoft.com/analysisservices/2010/engine/200" xmlns:ddl200_200="http://schemas.microsoft.com/analysisservices/2010/engine/200/200" xmlns:ddl300="http://schemas.microsoft.com/analysisservices/2011/engine/300" xmlns:ddl300_300="http://schemas.microsoft.com/analysisservices/2011/engine/300/300">
      <Object>
        <DatabaseID>MyDB</DatabaseID>
        <CubeID>MyCube</CubeID>
      </Object>
      <Type>ProcessFull</Type>
      <WriteBackTableCreation>UseExisting</WriteBackTableCreation>
    </Process>
  </Parallel>
</Batch>
This does not work:
<Batch xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Parallel>
    <Process xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:ddl2="http://schemas.microsoft.com/analysisservices/2003/engine/2" xmlns:ddl2_2="http://schemas.microsoft.com/analysisservices/2003/engine/2/2" xmlns:ddl100_100="http://schemas.microsoft.com/analysisservices/2008/engine/100/100" xmlns:ddl200="http://schemas.microsoft.com/analysisservices/2010/engine/200" xmlns:ddl200_200="http://schemas.microsoft.com/analysisservices/2010/engine/200/200" xmlns:ddl300="http://schemas.microsoft.com/analysisservices/2011/engine/300" xmlns:ddl300_300="http://schemas.microsoft.com/analysisservices/2011/engine/300/300">
      <Object>
        <DatabaseID>MyDB</DatabaseID>
      </Object>
      <Type>ProcessFull</Type>
      <WriteBackTableCreation>UseExisting</WriteBackTableCreation>
    </Process>
  </Parallel>
</Batch>
January 30, 2014 at 9:29 am
Processing behaves differently depending on the current state of whatever object you are processing (a cube, the whole database, a dimension).
Have a look here: http://msdn.microsoft.com/en-us/library/ms175634.aspx
My guess is that, in your case, processing the cube alone is not invoking a full process of the dimensions that have duplicate keys at source.
Have a look at your dimension design. The error configuration from 2008 onwards defaults to "Custom", which does not allow duplicate keys in dimensions. You can change this to "Default", which allows them (and, confusingly, isn't the actual default), or you can use composite keys in the KeyColumns of your key attributes to avoid duplicate keys altogether.
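If changing the dimension design isn't an option right away, the key-error behaviour can also be overridden for a single run with an ErrorConfiguration element on the batch. A minimal sketch, reusing the DatabaseID from your post (the extra namespace declarations are omitted for brevity, and whether silently ignoring duplicates is actually safe depends on your data):

<Batch xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <ErrorConfiguration>
    <!-- IgnoreError suppresses the duplicate attribute key error for this run only -->
    <KeyDuplicate>IgnoreError</KeyDuplicate>
  </ErrorConfiguration>
  <Parallel>
    <Process>
      <Object>
        <DatabaseID>MyDB</DatabaseID>
      </Object>
      <Type>ProcessFull</Type>
    </Process>
  </Parallel>
</Batch>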
January 31, 2014 at 6:02 am
It looks like the cause of this is an incremental process that left several dimensions in an unprocessed state, so the incremental run tries, and fails, to do Process Add or Process Update. But the reason the full database process won't work is indeed the duplicate attribute key errors.
This is good, though, because it's forcing me to really understand processing. For once, Process Full is not just the easy button that fixes everything.
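For anyone hitting the same thing: a ProcessFull on each affected dimension should bring it back to a processed state before the database process runs. A sketch with a placeholder DimensionID (the real IDs are in the database's XMLA definition; namespace declarations omitted):

<Batch xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Parallel>
    <Process>
      <Object>
        <DatabaseID>MyDB</DatabaseID>
        <!-- MyDimension is a placeholder; use the DimensionID from your database definition -->
        <DimensionID>MyDimension</DimensionID>
      </Object>
      <Type>ProcessFull</Type>
    </Process>
  </Parallel>
</Batch>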
January 31, 2014 at 6:17 am
Don't forget, if you are doing a Process Incremental/Update, to do a Process Indexes on the whole cube afterwards to rebuild the aggregations.
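For example, something like this (reusing the DatabaseID/CubeID from your first post) rebuilds indexes and aggregations without re-reading the source data:

<Batch xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Parallel>
    <Process>
      <Object>
        <DatabaseID>MyDB</DatabaseID>
        <CubeID>MyCube</CubeID>
      </Object>
      <!-- ProcessIndexes rebuilds indexes and aggregations from existing data -->
      <Type>ProcessIndexes</Type>
    </Process>
  </Parallel>
</Batch>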
This is quite a good article on using XMLA to process things and what the various options will do:
http://www.codemag.com/Article/1309061