Many-To-Many Relationship Modelling in PowerPivot: Workaround

So far PowerPivot does not support many-to-many relationships. It is still early days and I am sure that things will change soon. In the meantime, there are a few ways to implement many-to-many relationships for measures in DAX. Two notable posts on the subject can be found on Marco Russo’s and Alberto Ferrari’s blogs:

http://sqlblog.com/blogs/marco_russo/archive/2009/12/07/many-to-many-relationships-in-powerpivot.aspx

http://sqlblog.com/blogs/alberto_ferrari/archive/2010/10/19/powerpivot-and-many-to-many-relationships.aspx

Unfortunately, if we have multiple measures, the approach is somewhat cumbersome, as we need to build numerous DAX measures. I have been playing around with modelling options and I came across an idea which, while not perfect, can help mitigate to some extent the lack of this crucial bit of functionality. In fact, “not perfect” is quite an understatement – the more appropriate expression would be “a hack” – but it could still be useful in some scenarios, hence this post!

To give some background, let’s assume we have this simplistic model:

An account table

A customer table

And a fact table

Here we have two accounts, A with an Amount of 100 and B with 50.

Now, let’s assume we have the following mapping table between Accounts and Customers:

Account A maps to customers X and Y; Account B to X only.

Now, instead of trying to map the relationships directly, we add the Amount to the mapping (bridge) table, so it contains the rows A–X (100), A–Y (100) and B–X (50).

We then create our relationships in PowerPivot. The problem here is that when we slice by Account A (with a key of 1), we get the 100 Amount doubled. If we had more than two customers mapped to the account, it would be tripled, or in fact multiplied by N, where N is the number of mapped customers.

The trick I am offering is to add a negative amount to this impromptu fact table against the same account (A) and against a Z customer with a key of -1, which acts as an unknown/adjustment bucket. Therefore, the Customer table gains a Z member with a key of -1, and the fact/bridge table gains a compensating row, A–Z with an Amount of -100, alongside A–X (100), A–Y (100) and B–X (50).

Now, if we create relationships between this table and the Account and Customer tables, we can slice the Amount by Account and by Customer and check the results.

Well, we do get one extra row with negative amounts, but all else seems just fine. The total is correct, and the amounts per account and customer are also correct: per account we get A = 100 + 100 - 100 = 100 and B = 50, and per customer X = 150, Y = 100 and Z = -100, which still add up to the correct grand total of 150. If we have more than one measure, we will have to negate each of the additional measures as well if we want to achieve the same behaviour.

I also created a slightly more complicated model, which included years. Here we have to compensate for two rows with negative amounts, but the experience of using this model is not much different from the previous one.

Some other common scenarios could also work quite well. I have tried with two M2M relationships and, while the number of rows gets progressively larger, the model still seems to work. Another common scenario would be to have more than two attributes in either of the tables (whoa!), and that seems to work very well, too. The key is to remember to negate/adjust for each mapping after the first one. If we had a third customer (alongside X and Y) mapped to A, we would have two additional rows in the fact table – one with a positive and one with a negative amount.

Unfortunately I have not come up with a way to simplify the data modelling through DAX, so at this stage I would do it in the source. However, it should not be too difficult to achieve this in SQL, for example.
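
To illustrate, here is a minimal T-SQL sketch of that source-side preparation. All names here (Fact, AccountCustomer, AccountKey, CustomerKey, Amount) are hypothetical – the idea is simply to repeat the Amount for every mapped customer and to generate a compensating negative row against the -1 (Z) customer for every mapping after the first:

```sql
-- Assumed source tables (hypothetical):
--   Fact(AccountKey, Amount)                 -- one row per account, as in the example above
--   AccountCustomer(AccountKey, CustomerKey) -- the many-to-many mapping
WITH Mapped AS
(
    SELECT  ac.AccountKey,
            ac.CustomerKey,
            f.Amount,
            ROW_NUMBER() OVER (PARTITION BY ac.AccountKey
                               ORDER BY ac.CustomerKey) AS MappingNo
    FROM    AccountCustomer AS ac
    JOIN    Fact AS f
            ON  f.AccountKey = ac.AccountKey
)
SELECT  AccountKey, CustomerKey, Amount           -- one positive row per mapping
FROM    Mapped
UNION ALL
SELECT  AccountKey, -1 AS CustomerKey, -Amount    -- adjustment rows against the Z (-1) customer
FROM    Mapped
WHERE   MappingNo > 1;
```

For the sample data above this would return A–X (100), A–Y (100), B–X (50) and the adjustment row A–Z (-100), which is exactly the fact/bridge table used in the model.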

I would be interested to see if anyone comes up with scenarios where this could not work and cases in DAX where the additional rows are a problem (beyond the “too much data”, of course).

The Single Table Model in PowerPivot

In my last post I examined a normalised vs a denormalised model in PowerPivot. In some cases, though, users will simply avoid this de/normalisation “stuff” and import a single table into PowerPivot. After all, PowerPivot targets Excel users, and Excel users are used to working with large workbooks – in most cases a large extract provided by their friendly DBA or database developer. This is why the scenario where PowerPivot becomes a tool to overcome the one-million-row limitation in Excel will be quite common. But how does it perform, is it wise to do this, and when? I will try to answer some of these questions in this post.

Performance

To test performance I mashed up the data in my PTest environment and put it in a single table. Of course, comparing the space on disk between the normalised, denormalised and single-table approaches, there was a massive difference: the single table was by far the largest, followed by the denormalised model, with the normalised one the smallest. This is what we would expect and is one of the reasons for normalising in the first place.

In a database the single table would be very inefficient, since it would lead to lots of IO in many scenarios. However, when imported into PowerPivot the sizes compare like this (variation from the denormalised model given in brackets):

PTest_Denormalised

5.5 GB RAM
3.5 GB File

PTest_Normalised

5.2 GB RAM (-0.3 GB)
3.3 GB File (-0.2 GB)

PTest_SingleTable

4.3 GB RAM (-1.2 GB)
2.8 GB File (-0.7 GB)

Obviously the elimination of many distinct keys leads to significant space savings. In fact, the single table approach is by far the most efficient when comparing memory utilisation and disk space (when saving the Excel file).

After I did my standard slicer testing, I got extremely good performance out of my single table. Having no relationships whatsoever seems to be a fast approach in PowerPivot. So, if there were no other considerations, we could jump to the conclusion that a single table is the best possible option for PowerPivot. Well, the next few sections of this post will show why this is actually not the case.

Usability

Let’s quickly add another hypothetical table to our model. What if we need some more data and we decide to ask our friendly DBA to give us another extract with more of the same? Now we have two massive tables and we want to analyse the data from both of them. We hit a serious problem – slicing with a slicer built from one of the tables does not work with the other one. This is an obvious scenario for SSAS developers: we need a relationship between the tables we use for slicing. In other words, if we want to slice both tables by Date, we need a common Date table on which to base the slicer, related to both of the large tables.

We do not have this relationship, and trying to add one directly between the two big tables fails because neither has a column with distinct values. This is the most obvious showstopper I can see when considering a single large table.
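
One way around this is to build the common table in the source. As a minimal sketch, assuming two hypothetical extracts Extract1 and Extract2 which both contain a Date column, we can take the distinct dates across both, import the result as a small Date table and relate each of the big tables to it:

```sql
-- A shared Date table with distinct values, which can sit on the "one" side
-- of a relationship to both of the large tables.
SELECT [Date] FROM Extract1
UNION                       -- UNION (not UNION ALL) removes duplicates
SELECT [Date] FROM Extract2;
```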

Size

When we have one table, memory utilisation seems very good. However, when we have more than one of these behemoths in memory, we can reasonably expect a problem due to duplication of attribute data – for example, we would have to store our Product and Customer names a number of times in memory. This is especially true for high-cardinality attributes, that is, attributes with many distinct values, like my sample Order Number attribute (20+ million distinct values). In such cases, separating these into their own “dimension table” would save memory and disk space.
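
As a sketch of what that separation could look like in the source (the names BigExtract, OrderNumber, Amount and Date are hypothetical), we could keep the distinct order numbers once, with a narrow surrogate key, and carry only the key in the large extract:

```sql
-- A small table with one row per distinct order number and a narrow surrogate key.
SELECT ROW_NUMBER() OVER (ORDER BY OrderNumber) AS OrderKey,
       OrderNumber
INTO   OrderDim
FROM   (SELECT DISTINCT OrderNumber FROM BigExtract) AS o;

-- The large extract then carries only the narrow key, so the order number
-- values themselves are stored once.
SELECT d.OrderKey,
       e.Amount,            -- hypothetical measure column
       e.[Date]             -- hypothetical date column
FROM   BigExtract AS e
JOIN   OrderDim   AS d
       ON  d.OrderNumber = e.OrderNumber;
```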

DAX

How these models compare when building DAX calculations is a topic on its own and I will soon show some comparisons which should answer two questions – which is the most convenient and intuitive model to work with and which one is the fastest. For now it would suffice to say that always working over a large set of values in different tables could be expected to be the slowest (however, I have not done sufficient testing to confirm this yet).

In conclusion, when doing ad-hoc analytics over extracts which definitely do not need to be mixed with others, the single-table approach works very well with PowerPivot, and it certainly extends the functionality Excel offers natively. However, if we are building extensible models which are to be shared and enhanced, and which would form the heart of our Team BI, we should avoid the single table because of the other considerations listed above.