SSAS Myths Dispelled

This post is an attempt to dispel a few myths which seem to get repeated over and over among SSAS developers. While the truths are nothing new and have been documented in multiple sources like BOL, SQL CAT whitepapers, books and blog posts, they seem to consistently escape the attention of the wider public.

1 SSAS pre-aggregates data by default

While it is true that SSAS can pre-aggregate data, this does not happen by default. SSAS compresses data, indexes data and caches data, but it does not pre-aggregate data unless we define aggregations. By pre-aggregate I mean that SSAS does not automatically know what the Internet Sales Amount for Australia in 2007 is. It needs to get the leaf-level data and sum it up, unless we have built an aggregation on Country and Year for the partition containing the Internet Sales Amount measure. Only in that case is the data pre-aggregated and ready for retrieval.
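As a minimal illustration, consider this query against the Adventure Works sample cube (a sketch; the member keys are assumed to match the standard sample database). Without an aggregation covering Country and Year on the partition, SSAS answers it by reading and summing the leaf-level data:

SELECT [Measures].[Internet Sales Amount] ON COLUMNS
FROM [Adventure Works]
WHERE ([Customer].[Country].&[Australia],
       [Date].[Calendar Year].&[2007]);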

2 We can emulate Enterprise Edition functionality in Standard Edition with MDX at no cost

Well, we can't. We can emulate certain features, the LastNonEmpty aggregation function for example, but it comes at a cost. The cost usually stems from moving work from the multi-threaded Storage Engine to the single-threaded Formula Engine.
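For example, a common way to emulate LastNonEmpty in Standard Edition is a calculated member along these lines (a sketch only; [Measures].[Stock Level] is a hypothetical semi-additive measure):

CREATE MEMBER CURRENTCUBE.[Measures].[Last Stock Level] AS
    (Tail(NonEmpty([Date].[Date].[Date].Members,
                   [Measures].[Stock Level]), 1).Item(0),
     [Measures].[Stock Level]);

A native LastNonEmpty measure is resolved by the multi-threaded Storage Engine, while this calculation is evaluated cell by cell in the single-threaded Formula Engine, and that is where the cost comes in.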

3 SSAS is always faster than SQL Server RDBMS

While SSAS is faster than the SQL Server RDBMS in many cases, this does not always hold true. A particular area in which the relational engine beats SSAS is the retrieval and processing of low-level granular data. SSAS usually beats the RDBMS when it comes to ad-hoc access to aggregated data.

4 MOLAP is always faster than ROLAP

If you read SQL CAT's "Analysis Services ROLAP for SQL Server Data Warehouses" whitepaper, you can see that after careful tuning ROLAP can be faster than MOLAP. Not always, but sometimes, and that is enough to claim that it is not true that MOLAP is always faster than ROLAP. This ties in with the previous myth and proves that a well-tuned RDBMS can perform very well with aggregates.

From the paper:

“At last, SQLCAT’s redesign and optimization efforts paid off. The ROLAP cube was finally ready for performance testing, and thanks to the amazingly fast performance of the relational SQL Server engine on top of a super-fast storage subsystem, the results looked better than expected. To everybody’s surprise, the ROLAP cube outpaced the MOLAP cube in 45 percent of all queries right from the start (see Figure 14). Only 39 percent of the queries showed substantially slower response times in ROLAP mode (more than twice the amount of MOLAP time) and 16 percent showed moderate performance degradation (less than twice the amount of MOLAP time).”

5 Usage Based Optimisations do not work well

In SQL Server Analysis Services 2008 the Usage Based Optimisation (UBO) algorithm has been redesigned. Now it works, and it works well. It does not create redundant aggregations and in general performs much better. Building UBO aggregations has always been recommended by Microsoft and even more so now.

6 Rigid attribute relationships boost performance

Whether an attribute relationship is Rigid or Flexible does not affect query performance at all. The choice only affects the processing of partition indexes. If an attribute relationship is static, setting it to Rigid means that you do not have to process partition indexes when you update the dimension. That is the entire benefit you get from Rigid relationships. Going too far and marking changing relationships as Rigid can have a very negative impact, as a change will force a full process of the partition data and indexes, which takes much longer than updating just the indexes. Either way, there is no difference during query execution.

7 MDX and DAX are hard

I believe that this particular myth stems from the fact that we get to compare MDX and DAX to sweet and fluffy languages like SQL and C#. It all depends on the vantage point. Take the following “Hello world!” program in Malbolge for comparison purposes:

('&%:9]!~}|z2Vxwv-,POqponl$Hjig%eB@@>}=<M:9wv6WsU2T|nm-,jcL(I&%$#"
`CB]V?Tx<uVtT`Rpo3NlF.Jh++FdbCBA@?]!~|4XzyTT43Qsqq(Lnmkj"Fhg${z@>

MDX is not all that bad from a Malbolge developer’s point of view, is it?


Custom Groups in Excel 2007 – Error

I just finished digging around a particular issue with Excel 2007 (some versions) and SSAS Pivot Tables. In brief, the issue was that a user could not use the custom groups functionality which Excel provides because she got an error saying:

“The query did not run, or the database table could not be opened. Check the database server or contact your administrator. Make sure the external database is available and hasn’t been moved or reorganized, then try the operation again.”

I added her to the server administrators, but the message persisted. After profiling I noticed that the MDX generated by Excel 2007 for this operation read:

CREATE SESSION CUBE [Cube_XL_GROUPING0] FROM [Cube] ( DIMENSION [Cube].[Agency].[Agency Hierarchy] HIDDEN AS _XL_GROUPING0,DIMENSION [Cube].[Agency].[Flag],DIMENSION [Cube].[Agency].[Region],DIMENSION [Cube].[Collection].[Application],DIMENSION [Cube].[Collection].[Application Code],DIMENSION [Cube].[Collection].[Data Collection Code],DIMENSION [Cube].[Data…

Error: “Parser: The syntax for ‘DIMENSION’ is incorrect.”

The problem is that the MEASURE declarations were missing from this statement. A correct MDX statement issued by another instance of Excel 2007 running on a different machine showed:

CREATE SESSION CUBE [Cube_XL_GROUPING1] FROM [Cube] ( MEASURE [Cube].[Value – Data Integer Quarter] HIDDEN,MEASURE [Cube].[Value – Data Integer Semi] HIDDEN,MEASURE [Cube].[Value – Data Integer Annual] HIDDEN,MEASURE [Cube].[Value – Data Integer Month] HIDDEN,MEASURE [Cube].[Value – Data Real Quarter] HIDDEN,MEASURE [Cube].[Value – Data Real Month] HIDDEN,MEASURE [Cube].[Value – Data Real Annual] HIDDEN,MEASURE [Cube].[Value – Data Money Semi] HIDDEN,MEASURE [Cube].[Value – Data Money Month] HIDDEN,MEASURE [Cube].[Value – Data Real Semi] HIDDEN,MEASURE [Cube].[Value – Data Money Quarter] HIDDEN,MEASURE [Cube].[Value – Data Money Annual] HIDDEN,DIMENSION [Cube].[Agency].[Agency Hierarchy] HIDDEN AS _XL_GROUPING0,DIMENSION [Cube].[Agency].[Pub Pte Flag],DIMENSION [Cube].[Agency].[Region],DIMENSION [Cube].[Collection].[Application],DIMENSION [Cube].[Collection].[Application Code],DIMENSION [Cube].[Collection].[Collection Code],DIMENSION [Cube].[Element].[Common Name],DIMENSION [Cube].[Element].[Data Element Code],DIME…

Here we have the cube measures as part of the CREATE SESSION CUBE statement, which makes it a valid one. The cause appears to be that all the physical measures in the cube were hidden and only one calculated measure was visible to the users. Excel (2007 Enterprise) seemed unable to find them, so the fix was easy: creating a visible dummy physical measure and using a scope assignment to make it behave like the calculated one. Now Excel merrily creates valid MDX and my user is happy.
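For reference, the fix looked roughly like this in the cube's MDX script (a sketch with hypothetical measure names, not the exact code):

// [Measures].[Dummy Value] is the visible dummy physical measure
// [Measures].[Calculated Value] is the calculated measure the users work with
SCOPE([Measures].[Dummy Value]);
    This = [Measures].[Calculated Value];
END SCOPE;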

I understand this will be a very rare problem, but it takes some time to investigate, so I hope the post may help someone out there.

Colour Codes in SSAS

When we apply colour formatting to measures in SSAS we get a “strange” colour code for our expressions from the BIDS colour-picker. For example, if we want to colour a certain measure in Red, we can write:

CREATE MEMBER CURRENTCUBE.[Measures].[Red Measure] AS
1,
FORE_COLOR = 255;

If we use the colour picker we can get more complex numbers for more complex colours. If we want to apply a colour with an RGB of (128,128,192), the colour picker generates the code 12615808 for use in the FORE_COLOR property.

Constructing our own colour code is not all that hard. In fact, all we have to do is the following:

1. Multiply the Blue decimal code by 65536 (which is 2^16)
2. Multiply the Green decimal code by 256 (yes, 2^8)
3. Add the numbers from step 1 and 2 and the decimal code for Red

In other words, we can use the following formula:

c = r + 256*g + 65536*b

where c is the colour code we need and r, g, b are the decimal codes (between 0 and 255) for each colour channel.

If we apply this to our (128,128,192) colour, we get:

c = 128 + 256*128 + 65536*192 = 128 + 32768 + 12582912 = 12615808

And this is exactly what we get from the colour picker control in BIDS.
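Putting the formula into a calculated member, our (128,128,192) colour could be applied like this (a sketch; the member name is made up):

CREATE MEMBER CURRENTCUBE.[Measures].[Coloured Measure] AS
1,
FORE_COLOR = 128 + 256 * 128 + 65536 * 192; // = 12615808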

As Chris Webb points out in the comments section below, we can also write FORE_COLOR=RGB(128,128,192) instead of calculating the colour code manually, which is way more practical. 🙂

I wish such little bits and pieces were better documented on MSDN, as the current page regarding the BACK_COLOR and FORE_COLOR properties is fairly basic: http://msdn.microsoft.com/en-us/library/ms145991.aspx.

UPDATE (01/07/2011):

Considering that RGB() is a VBA function, I tested the performance we get from using it. I have a simple measure, which I am formatting with an IIF expression. I tested three scenarios: no FORE_COLOR formatting, FORE_COLOR with numeric codes, and FORE_COLOR with the RGB function. Each query was run 3 times on warm cache and returned a result set of 500,000+ cells. The results were as follows:

No FORE_COLOR: 2.3 seconds

FORE_COLOR=12615808: 5.3 seconds

FORE_COLOR=RGB(128,128,192): 17.7 seconds

In other words, applying a VBA function for colour formatting severely impacts the performance of our queries. Thus, if you really need to format the colour of your measures, you may be better off doing it the hard way by converting the RGB values to the colour codes the same way the colour-picker control does in BIDS.
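For reference, the kind of scoped assignment I tested looks roughly like this (a sketch; [Measures].[Test Measure] is a hypothetical measure):

SCOPE([Measures].[Test Measure]);
    // numeric colour code: red for negative cells, black otherwise
    FORE_COLOR(This) = IIF([Measures].[Test Measure] < 0, 255, 0);
    // the VBA alternative, which proved much slower:
    // FORE_COLOR(This) = IIF([Measures].[Test Measure] < 0, RGB(255, 0, 0), RGB(0, 0, 0));
END SCOPE;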

UPDATE2 (04/07/2011):

As Valentino Vranken commented below, you can also use hex codes for the colours. However, there is a small twist: the codes need to be in BGR, not RGB, format. So Red becomes 0000FF. The actual format of the hex expression is FORE_COLOR="&Hxxxxxx". For Red we can use FORE_COLOR="&H0000FF". There is no performance difference between hex and decimal codes, so whether you go with one or the other is up to your preference. Converting 0000FF to decimal yields the same number (255) as the formula above, so if you already know the BGR hex code you can simply convert it to decimal.

Database Compression Effects on SSAS Processing

Just recently I was optimising the processing time for a rather large cube on my laptop. The hardware I used was not tremendously powerful (a mobile i5 CPU, 8 GB of RAM and one 7200 RPM hard disk) and a 500M-row fact table proved to be somewhat of a challenge for it.

The setup was fairly typical: a large fact table with a few dimensions in a star schema. The goal was to get it to process as fast as possible.

After ensuring all best-practice advice from Microsoft (and in particular SQL CAT) was followed, I managed to get the processing down from 7381.075 seconds to 5801.536 seconds (approximately a 21% improvement). However, one thing still did not look quite right: the CPU utilisation during the partition processing phase.

Considering my hardware configuration it was easy to deduce that the HDD would be more of a bottleneck than the CPU. There was only one HDD and everything, including the source database and the SSAS files, was on it. During partition processing, reading and writing happen simultaneously, which is not good for performance. At the same time, the CPU utilisation in my case sat at around 75%, with 25% to spare, which was not great. Ideally, I would like to see my CPU at 100%, or very close to it, to ensure that I am utilising all of its power. However, I had no way to add more disks.

Luckily, in SQL Server 2008 (Enterprise Edition) we have the option to compress database tables or partitions. We do get a performance hit on the CPU when we (de)compress the data, but the data takes less space on the disk and thus requires fewer IOPS to read. It sounded like a perfect fit for my problem. After estimating the size benefits of row and page compression, I decided to go with the heavier option, since even its marginally better (5%) reduction in size over row-level compression was desirable. After waiting patiently for the compression operation to finish, the overall database size dropped to around 30% of the original, as predicted.
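In T-SQL terms the steps look roughly like this (a sketch; dbo.FactSales is a hypothetical fact table name):

-- Estimate the space savings before committing to a compression type
EXEC sp_estimate_data_compression_savings
    @schema_name = 'dbo',
    @object_name = 'FactSales',
    @index_id = NULL,
    @partition_number = NULL,
    @data_compression = 'PAGE';

-- Rebuild the table with page compression (use ROW for row-level compression)
ALTER TABLE dbo.FactSales REBUILD WITH (DATA_COMPRESSION = PAGE);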

I ran my SSAS processing script again and it completed in 4805.408 seconds, approximately 17% better than the previous run, which is a very significant improvement. The CPU utilisation went up to around 97%, which also meant that my gamble with page versus row compression produced, generally, good results.

One thing we must always consider is whether the bottleneck is in the IO. A few months later I replaced my hard disk with a new SSD (OCZ Vertex 2), which increased the IO performance of my laptop dramatically. When I tested SSAS processing tasks after that, I could almost always see my CPU steady at 100% without any compression. If I were to compress my fact tables, I would expect worse performance, because the CPU would struggle to both decompress and process the data at the same time.

In conclusion, you must know what your bottleneck is before you opt for a solution like compression. It comes at a price, but the price may well be worth paying if your system is not well balanced.

Debugging SSAS Cubes – a Few Tips

There are a few techniques which can help here, and I will list some tips which seem to elude many people experiencing problems with their implementations.

1. Do not use the Cube Browser and Excel for troubleshooting

Yes, it is convenient to just drag and drop some hierarchies and measures in the Cube Browser, or in Excel, but when you have trouble with your results these tools will obscure it. This happens because every client tool has to generate MDX, and in doing so they often generalise the approach and add "nice" language constructs like VisualTotals, or always use a subselect clause. There is nothing wrong with checking the end result of your work with tools showing what your users see, but when your calculations don't behave the way you expect (especially when filtering things), I would definitely stay away from GUI clients. Instead, you can write some MDX in SSMS and pull exactly what you need in a clean and straightforward way. We can play with swapping WHERE clauses for subselects, add calculations directly in the queries, and so on; in brief, we get the full calculation power of MDX in our hands for troubleshooting. Additionally, next time you ask someone else a question you can start with "I have this MDX query…" instead of "I have this pivot grid in Product X…", and it is much more likely that you will get a meaningful response.
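For example, the same slice can be expressed with a WHERE clause or with a subselect, and for some calculations the two behave differently (a sketch against Adventure Works):

// WHERE clause: sets the current coordinate for the query
SELECT [Measures].[Internet Sales Amount] ON COLUMNS
FROM [Adventure Works]
WHERE ([Customer].[Country].&[Australia]);

// Subselect: restricts the cube space without setting the current member
SELECT [Measures].[Internet Sales Amount] ON COLUMNS
FROM (SELECT {[Customer].[Country].&[Australia]} ON COLUMNS
      FROM [Adventure Works]);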

2. Use SQL Server Profiler

If your MDX works perfectly fine against your cube, but when you use Excel 20xx you get funny results, there is no better way to find the problem than to run a trace with SQL Server Profiler, capture the MDX the tool has generated and then work with that to debug, again in SSMS. Different versions of Excel generate different MDX and the difference can be very important. With other tools the differences become even larger. The only way to see what your tool does (other than talking to the people who built it) is to capture its SSAS queries.

3. Ask for help

Once you have arrived at the conclusion that you need help (e.g. after exhaustively searching for the problem online), you can quickly get an answer at a few forums. My preferred one is MSDN, which has a dedicated SSAS section. You can also ask in the www.ssas-info.com forum section, or even on StackOverflow. Either way, you will need to know the versions of each product you are using (especially the SQL Server version, service packs and updates applied) and ideally post your MDX (if you have 10 pages of queries it would help to somehow abbreviate the repeating sections instead of copy-pasting the lot). Also, do not forget that the people reading your question do not know your dimensional model or which features of SSAS you are using (e.g. LastNonEmpty, unary operators, etc.), so make sure that you describe any specifics; otherwise everyone will assume a typical star schema with a straightforward implementation in SSAS. These approaches, together with being courteous and good with spelling and grammar, will help attract fellow developers who may know the little SSAS quirk or MDX feature which resolves your issue.

4. Use Adventure Works

Replicating your problem in Adventure Works is another great way to ensure you get an accurate answer. Because typically everyone on the forums has access to Adventure Works, showing some MDX which yields an unexpected outcome there is the best way to pinpoint the problem, eliminating all the question marks which your specific model and data could introduce. Additionally, you will probably get an answer which includes some sample code, and you will be able to run it immediately and see the results.

SSAS to BISM – Recent Developments

There was a fair bit of FUD around the future of SSAS, and it has just been officially dispelled by TK Anand's and Chris Webb's posts on the roadmap ahead for SSAS. If you haven't read these, I would definitely recommend a thorough read through both (yes, TK's is a bit long, but we did need all of it).

After the confusion has been more or less sorted out, a brief summary could go along the lines of: "We just got an enhanced SSAS". I have been asked a number of times about the future of SSAS and UDM, and so far I seem to have gotten it right: a tremendously successful product is not getting replaced; instead, we get a new and exciting addition to it. Rumour has it that the SSAS team will soon get down and dirty with more work on all of the components of SSAS, both multidimensional and tabular. What can come out of this? A unique mix of on-disk and in-memory analytics. Edges may have to be smoothed, but at the end of the day we get more, not less.

As for what caused the confusion: in my opinion the SSAS team may be the greatest in analytical tools, but in this particular case the communication from Microsoft to us was not up to par. In all of their excitement about the new toys they are building for us, they did not accurately draw a roadmap, which led to the rumours. I hope this won't happen again, and the recent posts by the SSAS team show a lot of improvement in this regard.

All in all we get a BI Semantic Model – encompassing both multidimensional (previously known as UDM) and the new tabular (in-memory) modelling. These two are integrated in one BISM, which allows us to pick and choose the tools we need to deliver the best possible results. All other tools in the stack will eventually work equally well with both models and the two models will integrate well together. Of course, this is a big job for the team and I hope that they succeed in their vision since the end result will be the best platform out there by leaps and bounds.

As of today – the future looks bright and I am very pleased with the news.

Load Test Your SSAS Instance

If you are wondering how performant your SSAS box will be with multiple users executing various queries, you now have the option to go to CodePlex and download the new AS Performance Workbench by Rob Kerr from BlueGranite, which presents the results in an extremely cool visual way. It is awesome! And to prove it, have a look at the following YouTube demo (6m 24s): http://www.youtube.com/watch?v=Kn4-3JTyDXM.

For any comments, requests and loud cheering you can also visit the project discussion page: http://asperfwb.codeplex.com/discussions.

Enjoy!