A Guide to Currency Conversions in SSAS

In this post I will describe different ways we can do currency conversions in SSAS, which should cover most, if not all, requirements. Hopefully, it will also provide some best practice advice along the way.

As background and reference, I will use some other blog posts, the most relevant of which are Christian Wade’s:



The first one deals with currency conversions and compares the Business Intelligence Wizard approach with Measure Expressions, clearly favouring Measure Expressions (MEs). The second post explores MEs and their strengths as opposed to MDX calculations. Both posts are very interesting and useful. There is a slight inaccuracy in the summary section of the first post, which describes MEs as stored on disk; in fact, MEs are not stored on disk but are calculated at runtime. Teo Lachev explains their behaviour here:


And another reference to MEs can be found in the SQL Server 2005 Performance Guide:


The last piece of evidence, and possibly the most helpful for me, was the confirmation of their behaviour I got from Gerhard Brueckl, Philip Stephenson and Darren Gosbell in this MSDN Forum thread:


Darren Gosbell also emailed me with another small clarification, which could have a big impact on your solutions: no aggregations are used for a measure group in SSAS when at least one of its measures has a ME defined for it. This could be very important in some cases. Teo Lachev has blogged about this limitation here:


Now that we have some background knowledge of how currency conversions can be done, I will continue with a brief discussion of how currency amounts can be stored in a data mart.

In the vast majority of cases, a business works with a “base currency”, which is the default currency used to conform all currency amounts throughout all transactions. When a currency conversion needs to be made, we typically multiply or divide the “base currency amount” by some “currency rate”, which gives us the amount in a non-base currency. To implement this approach, we can simply follow Christian Wade’s ideas of using Measure Expressions, which gives us the best performance (keeping in mind, of course, the limitations of Measure Expressions).
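As a reminder of what a ME looks like: a Measure Expression is set on the MeasureExpression property of a measure in the cube designer and is restricted to a simple two-operand multiplication or division between measures, typically coming from different measure groups. A sketch with hypothetical measure names could be:

```mdx
/* Hypothetical Measure Expression for a Trade Amount measure,
   where End Of Day Rate lives in a rate measure group related
   many-to-many to the fact measure group */
[Trade Amount] * [End Of Day Rate]
```

The expression is evaluated at the leaf level at runtime, which is exactly why MEs are fast but are not stored on disk.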

Another approach is to store the base currency amount, as well as an amount for each of the most commonly used currencies throughout the organisation, as physical measures. As a result we end up with a few measure columns corresponding to the different currencies (e.g. USDAmount, EURAmount, AUDAmount). Then we just add these to our measure group and build a SCOPE statement, which gives us the correct measure when using our Currency dimension. If we want to convert to a currency other than the ones we have already converted, we need to resort to the previously mentioned approach, treating one of these currencies as the base currency. Because we work with physical measures in the majority of cases, this implementation solves some performance problems. However, it suffers from increased storage space requirements, which could (for a very large implementation) be severe. Also, if we have multiple measures to convert, we need to store [Number of Measures] x [Number of Frequently Used Currencies – 1] additional measure columns in our database, and subsequently in our cube. And when I say “solves some performance problems”, our problems are in fact solved only when we use the currencies for which we have pre-converted amounts. In all other cases we are back at the worst possible case: a complete conversion calculation over our measures.
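To illustrate the SCOPE approach, here is a minimal sketch. All names are hypothetical: it assumes physical measures [USD Amount], [EUR Amount] and [AUD Amount], a visible [Amount] measure, and a Currency dimension with a Currency Code attribute:

```mdx
/* Map each frequently used currency to its pre-converted physical measure.
   All dimension and measure names below are hypothetical. */
SCOPE ([Measures].[Amount]);
    SCOPE ([Currency].[Currency Code].[USD]);
        THIS = [Measures].[USD Amount];
    END SCOPE;
    SCOPE ([Currency].[Currency Code].[EUR]);
        THIS = [Measures].[EUR Amount];
    END SCOPE;
    SCOPE ([Currency].[Currency Code].[AUD]);
        THIS = [Measures].[AUD Amount];
    END SCOPE;
END SCOPE;
```

Any currency not covered by one of the SCOPEs falls through to whatever conversion logic (e.g. a Measure Expression) is defined for the base measure.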

There is a third tactic, which I have recently been working on. Together with the previous two, it can potentially yield the best possible results. The idea is to store the base currency amount in one column and a local currency amount in another, and to add a Local Currency dimension to the cube. The local currency measure contains the base amount converted to the currency which is “local” for the data. For example, if we have a company with offices in Sydney and London, the local amounts stored against the Australian offices (based on business unit or geography) will be in AUD, while the amounts for the English ones will be in GBP. The base currency could be AUD, in which case the amounts in our BaseAmount column will always be in AUD. Once we have set this up, we can do the following:

  1. For reports in the base currency: Use the base currency amount only
  2. For reports in non-base currency:
    1. Take the LocalAmount from the fact table, where the local currency is the selected currency
    2. Convert the BaseAmount from the fact table, where the local currency is not the selected currency
    3. Sum the amounts we get from the previous two steps

In my current solution I have the following cube (showing only the Dimension Usage tab, which I believe illustrates the structure best):


Note that I have a Currency Conversion measure group, which stores conversion rates from base currency for each date. It has a many-to-many relationship with the other two measure groups in my cube, while these two measure groups have a Local Currency dimension related to them as described above.

Then the MDX in the cube script for my Deal measure (the measure exposed to users is called [Amount – Trade]) is:

/* This calculation picks the Local Currency Amount from the fact table with no currency
     conversions applied */
CREATE MEMBER CURRENTCUBE.[Measures].[Local Trade Currency Amount]
AS
    (LinkMember([Currency].[Currency Code].CurrentMember,
                [Local Currency].[Currency Code]),
     [Currency].[Currency Code].[AUD],
     [Measures].[Local Trade Amount]);

/* Here we pick all other amounts in Base Currency (other than the one selected by the user)
     and then pull the converted amounts for them (the conversion is done in a Measure Expression) */
CREATE MEMBER CURRENTCUBE.[Measures].[Converted Local Trade Currency Amount]
AS
    SUM([Local Currency].[Currency Code].[Currency Code].Members -
            LinkMember([Currency].[Currency Code].CurrentMember,
                       [Local Currency].[Currency Code]),
        [Measures].[Trade Amount]);

/* In this combined measure we sum the previous two to retrieve the overall amount
     in the selected currency */
CREATE MEMBER CURRENTCUBE.[Measures].[Amount – Trade]
AS
    SUM({[Measures].[Local Trade Currency Amount],
         [Measures].[Converted Local Trade Currency Amount]}),
    FORMAT_STRING = "Currency",
    VISIBLE = 1;

When we generate a report in the base currency, performance is the best possible, as no conversions are made (if we utilise the Direct Slice property as described in Christian Wade’s blog posts above). The same holds true for reports in a non-base currency where 100% of the data can be retrieved from the LocalAmount measure. The worst case comes when we request a currency which is not a Local Currency in any sub-slice of the one requested by our query. In this case, we are back to the first approach: converting every amount and then summing up. If some of the data can be retrieved as a LocalAmount and some cannot, we fall between the best and worst cases, and performance depends on the number of conversions that need to be performed.

I consider this a useful solution, as in many cases reports are either specific to an office/country and generated in the local currency only, or are “global” and use only the base currency of the organisation. Therefore, using this approach, we get the best possible performance with no severe trade-off in storage space.

In the end, it depends on the specific organisation and requirements, but we can always combine the second and third solutions: store a few converted amounts as physical measures, include a LocalAmount as another one, and convert to a currency different from these with Measure Expressions only when absolutely necessary. This way, we essentially take a “best of both worlds” approach and can obtain the best possible results from our solution.

Ad-Hoc Ranges in SSAS

We can easily build ranges in MDX with the range operator “:”. We can also easily create a Range dimension in SSAS and use it to slice our data. This post is not about either of those. I would like to discuss the scenario where we need to restrict an ad-hoc query (e.g. a PivotTable in Excel) by a range of values. Usually, we would tell our users to just select the values they need. This works. However, if our users do not want to select and deselect many values, we can provide an easier way to do this.
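For reference, the range operator simply expands to the set of all members between (and including) its two endpoints, in attribute order. A sketch against a hypothetical Age attribute hierarchy:

```mdx
// All ages from 3 to 6 inclusive (hypothetical dimension and member keys)
{[Age].[Age].&[3] : [Age].[Age].&[6]}
```

This works well in hand-written MDX, but an Excel PivotTable user has no place to type such an expression, which is what motivates the approach below.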

Let’s assume we have an Age dimension for a Kindergarten cube. The Age dimension contains the ages of the children, which can be from 1 to 10. Our requirement is to be able to select low and high limits of ages for a Pivot Table in Excel, so that the data in the Pivot Table is sliced for the range of ages between those limits.

To implement this in practice, we can build two extra dimensions, Age – Low Limit and Age – High Limit, which contain the same members as the Age dimension, and then use them to slice our cube. Because the data is the same for all three dimensions, we can use a couple of database views on top of the Age dimension table, thus ensuring that all three stay in sync:

After that, we build two bridging tables, BridgeLowLimit and BridgeHighLimit: one between Age and Age – Low Limit, and one between Age and Age – High Limit:

The data in these bridging tables maps each Low and High limit member to all Age members which are lower than or equal to it (for the High limit) or higher than or equal to it (for the Low limit). For example, the High Limit member 3 maps to Ages 1 to 3, while the Low Limit member 3 maps to Ages 3 to 10:


Now, we can define many-to-many relationships from our fact table FactAmount, through the Age dimension and the bridging tables, to our limit dimensions as follows:

After this, we can hide the two measure groups for the Bridge tables from the users:

Now, we are ready to process our SSAS database. After that, we get the following in Excel:

If we place the Low and High limits in the Report Filter, Age on rows and our Amount measure on columns, we can limit the Age members displayed by changing the filter members. Note that only the lowest member in the Age – Low Limit dimension and the highest in the Age – High Limit dimension matter: everything in between (in the case of multi-selects) effectively gets ignored.

There are certain problems with this solution. If we place the limit dimensions on rows and we select multiple members from each dimension, we get the following:

This can be confusing for users who want distinct ranges like 1-3, 3-6 and 6-10. Unfortunately, it is not possible to build a cube calculation which hides the irrelevant members, as we do not know what the users have selected in Excel; we cannot determine which members are in the query scope, and therefore we cannot pick only the ones we need (e.g. the ones with the smallest distance between the two limit members).

If we place the two dimensions on Rows and Columns, we can get a nice matrix and this makes a bit more sense:


Also, for large dimensions (e.g. Date), this approach can be quite slow, as the number of rows in the bridge tables grows quickly: each bridge table holds roughly n(n+1)/2 rows for a dimension with n members. For example, if we have 10 years (around 3,650 days) in our Date dimension and map them the way I just showed, we end up with approximately 6-7 million rows in each bridge table, which can be quite prohibitive from a performance point of view. However, smaller dimensions should be fine: in my opinion, anything under 1,000 members is acceptable, as it generates at most around 500,000 rows in each bridge table. Therefore, if our users insist on this functionality, especially when they have a flat list of 100-1000 members and frequently select ranges out of it, we have a way of answering their need.