Set Operations in MDX – UNION and EXCEPT

I just read an article MDX: Except written by Vincent Rainardi.

It shows set subtraction using the EXCEPT function (as you could derive from the title, no doubt). I have always been a fan of using the “-” and “+” operators instead of EXCEPT and UNION where possible, because in my opinion they give us better visibility of the intention behind our MDX expressions. However, EXCEPT and UNION have an advantage over “-” and “+” – the third parameter, ALL.

In BOL we can see that both of these functions can be used like this: UNION/EXCEPT(set1, set2, ALL). If we omit the ALL keyword, we get exactly what we would get with + or -. Some examples:

SET1: {a, b, c}
SET2: {c}

SET3 = UNION(SET1, SET2) = SET1+SET2 = {a, b, c}+{c} = {a, b, c}

But if we use ALL, we would get duplicates in our result set:

SET4 = UNION(SET1, SET2, ALL) = UNION({a, b, c}, {c}, ALL) = {a, b, c, c}

The difference here is the duplicates, which get preserved in SET4 because of ALL. This is also where EXCEPT differs from “-“ – let us subtract the set {a} from SET4:

SET5 = EXCEPT(SET4, {a}) = SET4 – {a} = {a, b, c, c} – {a} = {b, c}

while

SET6 = EXCEPT(SET4, {a}, ALL) = EXCEPT({a, b, c, c}, {a}, ALL) = {b, c, c}

As you can see, unlike in set math, where a set cannot have duplicates, in MDX it can. Therefore, if we are in a situation where we need to preserve duplicates, we have the option of using the UNION and EXCEPT functions with the ALL parameter.

I use these concepts on an everyday basis and I have found that mastering them gives me a very powerful way of solving many MDX problems. I hope that the examples are suitable and easy to understand. Reading the article just after writing it, I sound a bit like my maths lecturer from uni (I wish he knew and taught MDX), who was a good guy, so I guess there is nothing wrong with that 🙂

Using the DSV to its Full Potential

The Data Source View in Analysis Services is a very powerful abstraction of the data source and it can help us handle some scenarios in an easy and clean way. Many times we look for MDX or programmatic solutions to problems which are best tackled in our data. While for complex tasks we would be better off extending the ETL process, some simple ones can and should be implemented in the DSV.

 

As an introduction to the topic I would like to explain briefly what the DSV actually is. It can be conceptualised as a database view on top of the data source. By default, all tables which we need for building the Analysis Services database (typically dimensions and facts) appear in the DSV as table bindings (exactly as if we did a SELECT * FROM Table). If we have no foreign keys defined in our database, SSAS will not show us the relationships in the DSV. However, we can define logical relationships in the DSV, thus connecting the tables on related columns, which are then used to automatically determine the dimension relationships to the measure groups.

 

There are two important ways to modify the DSV, which allow us to add more columns to the existing tables and to modify the way the existing columns are shown:

Named Queries

If we right-click on a table in the DSV, we can choose to replace the table with a Named Query. A Named Query is essentially a T-SQL statement, equivalent to a database view definition. By utilising Named Queries we can alter the way we see the tables and their columns in SSAS. For example, we could concatenate columns, implement CASE logic, and so on.
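
As a quick illustration, here is a minimal sketch of what such a Named Query might look like – the DimCustomer table and its columns are hypothetical:

 -- Hypothetical Named Query replacing the DimCustomer table binding.
 -- It concatenates two columns and applies some CASE logic, exactly
 -- as a database view definition would.
 SELECT
     CustomerKey,
     FirstName + ' ' + LastName AS CustomerFullName,
     CASE Gender
         WHEN 'M' THEN 'Male'
         WHEN 'F' THEN 'Female'
         ELSE 'Unknown'
     END AS GenderDescription
 FROM dbo.DimCustomer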

 

Named Calculations

 

A Named Calculation is a T-SQL expression which adds a column to a table without modifying the table binding. It gives us an easy way to define a new column without rewriting the whole query. The expression behaves the same way as a computed column in a Named Query (or in a SELECT statement). If we just want to add one more column (e.g. a Display Order, a Code + Description concatenation, etc.), we can simply define a Named Calculation. Also, as the name suggests, Named Calculations are commonly used for defining a leaf-level calculation without wrapping a large fact table’s SELECT statement in a Named Query.
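
As a sketch (the DimProduct table and its SizeRange column are hypothetical), the whole definition of a Named Calculation is just the T-SQL expression for the new column:

 -- Hypothetical Named Calculation "SizeRangeSortOrder" on DimProduct.
 -- Only this expression is entered in the Named Calculation dialog;
 -- it behaves like an extra column in the table's SELECT statement.
 CASE SizeRange
     WHEN 'S'  THEN 1
     WHEN 'M'  THEN 2
     WHEN 'L'  THEN 3
     WHEN 'XL' THEN 4
     ELSE 99
 END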

 

The column we define here appears in both the DSV table and in the Dimension Designer window.

 

These two DSV features can be used in many scenarios. Most importantly, there are a few where they yield better performance, faster development and easier maintenance:

Leaf-level calculations

If we have the common requirement to perform a leaf-level calculation and then aggregate it up the hierarchy, as opposed to aggregating and then calculating, the best place to do this is in a SQL statement on the fact table. Alternatively, we can do it in an MDX statement:

 SUM(DESCENDANTS(Dim.CurrentMember,,LEAVES), MeasureCalc)

However, this comes at a price. Since SSAS would have to perform the calculation for each leaf and then sum the results up the hierarchy, it could take a long time to execute. Also, SSAS would not be able to use pre-processed aggregations, as the calculations would be done at execution time. To avoid this we can add a new column to the fact table and do the calculation there (in SQL), using the column as a new measure in the cube, which SSAS can then aggregate like any other measure. The performance gain is usually substantial, and using a Named Query or a Named Calculation should be the preferred option.
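
As a hedged sketch (the FactSales table and its columns are hypothetical), pushing the leaf-level calculation down into the fact table binding could look like this:

 -- Hypothetical Named Query over the fact table: the leaf-level
 -- calculation becomes an ordinary column, which we expose as a regular
 -- SUM measure instead of evaluating it per leaf in MDX at query time.
 SELECT
     DateKey,
     ProductKey,
     OrderQuantity,
     UnitPrice,
     DiscountAmount,
     OrderQuantity * UnitPrice - DiscountAmount AS NetAmount
 FROM dbo.FactSales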

Description Attributes

Often we need to perform a concatenation between different dimension attributes, which we can use as a Description attribute while slicing the cube, or when providing reports from the SSAS database. A very easy way to achieve such a requirement is to use our DSV and concatenate the columns we need into a new column in the dimension table, which we can then expose as a new attribute of the dimension. A task such as concatenating an Account Code and an Account Description into an Account Long Description (i.e. [Account Code] + ‘-‘ + [Account Description]) becomes very easy to implement within the DSV, without modifying the ETL or any tables.
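
A minimal sketch of this as a Named Calculation on a hypothetical DimAccount table (the CAST is only needed if the code column is numeric):

 -- Hypothetical Named Calculation "AccountLongDescription" on DimAccount
 CAST(AccountCode AS varchar(20)) + ' - ' + AccountDescription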

Composite Keys

Sometimes we need to build unique keys for attribute columns in a dimension. A good example is a Date dimension, which does not have unique keys for non-leaf levels such as Month. Often developers have a Month Key of 1, 2, 3 … 12. This does not make a good Month key in SSAS, as it is not unique across higher levels such as Year, Quarter, etc. There are a number of ways to tackle this common scenario. While the recommended approach would be to build a Year-Quarter-Month concatenation as the Month Key in the dimension table, we can also achieve this by selecting all of the relevant columns as key columns for the attribute in the dimension attribute properties. However, this gives us a composite key in MDX, which can sometimes be undesirable. An even simpler and cleaner solution is to concatenate the relevant columns in the DSV by using a Named Query. Instead of the typical

SELECT col1, col2, …, MonthKey, colx, coly, colz FROM DimDate

we can write

SELECT col1, col2, …, YearKey + QuarterKey + MonthKey AS MonthKey, colx, coly, colz FROM DimDate

This way we can use the MonthKey column directly as a key for our Month attribute.
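
One caveat worth noting: if YearKey, QuarterKey and MonthKey are integer columns, the + operator above would add the numbers rather than concatenate them, so in practice we either cast them to strings or build a numeric key arithmetically. A sketch against a hypothetical DimDate table:

 -- Two hypothetical ways to build a unique Month key in the Named Query
 SELECT
     DateKey,
     YearKey,
     QuarterKey,
     -- character concatenation, e.g. '2010-3-7'
     CAST(YearKey AS varchar(4)) + '-' + CAST(QuarterKey AS varchar(1))
         + '-' + CAST(MonthKey AS varchar(2)) AS MonthKey,
     -- or an arithmetic key, e.g. 201007
     YearKey * 100 + MonthKey AS MonthKeyNumeric
 FROM dbo.DimDate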

While this is useful for a Date dimension, it can also be useful for any other composite key definition in our dimensions.

Other possible applications of DSV Named Queries and Named Calculations are the implementation of

  • A Sort Order attribute, in cases where we need a custom sort order for dimension attributes
  • Restricting the data which comes into the cube dynamically, based on a certain condition (think of a Date dimension which includes only the relevant periods)
  • Combining tables – by a SQL join
  • Replacing 0s with NULLs for our measures (the opposite can be done automatically in SSAS) – see the sketch after this list
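
As mentioned in the last bullet point above, replacing 0s with NULLs for a measure is a one-line change in a Named Query – a minimal sketch against a hypothetical FactSales table:

 -- Hypothetical Named Query snippet: zero sales amounts are replaced
 -- with NULLs before they reach the cube.
 SELECT
     DateKey,
     ProductKey,
     NULLIF(SalesAmount, 0) AS SalesAmount
 FROM dbo.FactSales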

Basically, in a DSV we can “correct” our data to make it suitable for our cube without changing the ETL.

Last but not least, we can also transform tables to conform to a star-schema-like design. If we want to show a proof of concept on top of a normalised OLTP database, we can avoid the ETL complexities, as well as building a data mart, and use SQL to join/split tables into dimension and fact tables suitable for cube development. While this could also work in post-POC scenarios, it is better to take a cautious approach, as there are many cases where it would either not work or would be too slow.
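
For instance, a dimension-like Named Query over a normalised OLTP schema could be sketched as follows (the table and column names are hypothetical, loosely resembling the AdventureWorks OLTP database):

 -- Hypothetical "DimProduct" Named Query flattening a normalised
 -- Product / Subcategory / Category structure for a POC cube.
 SELECT
     p.ProductID AS ProductKey,
     p.Name      AS Product,
     s.Name      AS ProductSubcategory,
     c.Name      AS ProductCategory
 FROM Production.Product p
     LEFT JOIN Production.ProductSubcategory s
         ON p.ProductSubcategoryID = s.ProductSubcategoryID
     LEFT JOIN Production.ProductCategory c
         ON s.ProductCategoryID = c.ProductCategoryID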

And a word of warning – your DSV could become slow because of over-use of complex Named Queries. This becomes painful when minimising cube processing time is crucial, or when the DSV starts timing out and queries take hours to execute. Luckily, in most cases we can simply move these large queries upstream – to the ETL, where we have more time and better tools (e.g. SSIS).

Avanade on Tech Ed Australia 2010

A few days ago I was asked by Lionel Gomes Da Rosa, the IM Director for Avanade Australia, whether I would like to co-present with him at Tech Ed Australia 2010 on a very interesting topic – “Budgeting, Forecasting and Performance Management with the Microsoft BI Stack”.

I have been involved in a few planning and forecasting implementations in the past (one of which was presented at the Microsoft BI Summit 2007 – not by myself, unfortunately) and I am looking forward to the chance to co-present at this exciting event this year. It will be a level 300 session (meaning that it will not be an intro and the audience will need some technical knowledge to grasp the concepts in their entirety), which makes it even more attractive from my point of view, as it will be targeting devs rather than the general public. Nothing wrong with the general public, of course – just that I am a dev at heart 🙂

See you there!

PivotViewer meets World Cup 2010

Just got this link in my inbox and thought the BI community may be interested in this demo of the new PivotViewer Silverlight control (especially if you live in Germany, Spain, the Netherlands or Uruguay – the semi-finalists of this year’s FIFA World Cup):

http://pivot.metia.com/worldcup/

Even if you are not a soccer fan, you will surely still find it very interesting.

The official site where you can download the control is here: http://www.silverlight.net/learn/pivotviewer/

Passing database names to SSIS stored procedures

In the rare cases when we use dynamic SQL and need a database name in our code, we are better off avoiding hard-coding it. Unfortunately, I could not find an easy way to access a connection manager’s database name, and on my current project the catalog name is not in the SSIS configurations XML file. Therefore, I had to resort to a little trick to pull the database name out and pass it to a stored procedure. In brief, we can do the following:

1. Create a user variable database_name

2. Create an Execute SQL Task using the connection manager we want to get the database name from, which does:

 SELECT db_name() AS database_name
 
3. Map the Single Row result set to our database_name variable 

4. Place the task created in the previous step before any components which would be using the variable.

5. Pass the variable as a parameter to our dynamic SQL stored procedure (a sketch of such a procedure is shown below)
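
As a hedged sketch of step 5 (the procedure name and its logic are hypothetical), the stored procedure could take the database name as a parameter and build its dynamic SQL from it:

 -- Hypothetical stored procedure receiving the database name from the
 -- SSIS variable (mapped to the @DatabaseName parameter when the
 -- procedure is called from an Execute SQL Task).
 CREATE PROCEDURE dbo.usp_LoadFromSourceDb
     @DatabaseName sysname
 AS
 BEGIN
     DECLARE @sql nvarchar(max);

     -- QUOTENAME guards against invalid or malicious database names
     SET @sql = N'SELECT COUNT(*) FROM ' + QUOTENAME(@DatabaseName)
              + N'.dbo.SomeSourceTable;';

     EXEC sp_executesql @sql;
 END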

 

 

There we go – the database name now comes from the connection manager set up in the SSIS package configurations – a bit better than just hard-coding it.

PowerPivot DAX Measures in a Data Source

OK, my doubts aside, this is my first post about PowerPivot. Obviously the demand is high and the prospects are bright – so here we go…

There are a number of posts about using PowerPivot as a data source in PerformancePoint and Reporting Services. You can find two links below (both at TechNet), so I will not repeat the content here:

 

There you can find a detailed step-by-step approach to using PowerPivot workbooks published to SharePoint in your reports. One thing which is not explained, though, is that you can also use any published DAX measures as report data sources. They appear as physical measures in the PowerPivot Analysis Services instance and can be used directly in the reports. I found this fascinating, as now we can actually integrate even user-defined logic into our reports.

For example, today I was working on a small demo of PowerPivot and I created a PowerPivot report integrating a relational data source and an OLAP data source from two completely different systems, which then got related to each other by State/Province. After that, I created a DAX ratio measure, which showed the relationship between the Sales measures from each database. After publishing the workbook to SharePoint, I opened Report Builder 3.0 and, to my surprise (well, I expected to see only physical measures for some reason), I was able to pull in the DAX measure which I had just created and beautify the map of the USA based on its value.

The integration between the various components in the Microsoft BI stack is continually improving, and with the latest versions of those components (labelled 2010 or R2) we are being empowered with richer and more powerful tools covering a wider array of users – proving that the toolset is the best out there for both enterprise-level and relatively smaller customers.

An ETL Framework from BI Monkey

Today I found out about a cool ETL framework being developed by my colleague in Sydney – James Beresford, a.k.a. BI Monkey. He also writes a blog, which I have read in the past.

Check out his take on what an ETL framework should look like. He told me that the current version (v1) is suitable for smaller projects and that a new one (v2) is coming out soon – so if you like v1, definitely stay tuned for v2! And in case you have suggestions – he is pretty cool to chat to and very keen to take the framework further, so don’t hesitate to contact him!