Passing unCONSTRAINED Set and Member parameters between reports in Reporting Services

By default, SSRS MDX queries get StrToMember or StrToSet functions with a CONSTRAINED flag. However, many developers do not quite know why it is there or what it actually does. Books Online contains these statements:

StrToMember

  • When the CONSTRAINED flag is used, the member name must be directly resolvable to a qualified or unqualified member name. This flag is used to reduce the risk of injection attacks via the specified string. If a string is provided that is not directly resolvable to a qualified or unqualified member name, the following error appears: “The restrictions imposed by the CONSTRAINED flag in the STRTOMEMBER function were violated.”
  • When the CONSTRAINED flag is not used, the specified member can resolve either directly to a member name or can resolve to an MDX expression that resolves to a name.
StrToSet

  • When the CONSTRAINED flag is used, the set specification must contain qualified or unqualified member names or a set of tuples containing qualified or unqualified member names enclosed by braces {}. This flag is used to reduce the risk of injection attacks via the specified string. If a string is provided that is not directly resolvable to qualified or unqualified member names, the following error appears: “The restrictions imposed by the CONSTRAINED flag in the STRTOSET function were violated.”
  • When the CONSTRAINED flag is not used, the specified set specification can resolve to a valid Multidimensional Expressions (MDX) expression that returns a set.
Therefore, with the CONSTRAINED flag you have to specify exact members or sets (e.g. [Date].[Year].[2009], or {[Date].[Year].[2009],[Date].[Year].[2010]}). If you omit the flag, you can pass to StrToMember an expression which evaluates to a member (e.g. [Date].[Year].[Year].Members.Item(0)), and to StrToSet an expression which evaluates to a set (e.g. NONEMPTY([Date].[Year].[Year].Members, [Measures].[Amount])).
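
A minimal sketch of the difference (the cube name [Sales], the measure [Amount] and the report parameter @DateYear are assumptions for illustration):

-- CONSTRAINED: @DateYear must be a literal member name,
-- e.g. "[Date].[Year].[2009]"
SELECT [Measures].[Amount] ON COLUMNS
FROM [Sales]
WHERE STRTOMEMBER(@DateYear, CONSTRAINED)

-- UnCONSTRAINED: @DateYear may be any expression which resolves
-- to a member, e.g. "[Date].[Year].[Year].Members.Item(0)"
SELECT [Measures].[Amount] ON COLUMNS
FROM [Sales]
WHERE STRTOMEMBER(@DateYear)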

    The flexibility which removing CONSTRAINED offers can be quite powerful when passing parameters between reports. For example, we may want to pass a parameter to a drill-through report from two different summary reports, where each of those uses a different subset of dimension members, which in turn can be derived through different set expressions.

    The major drawbacks of this approach are the severe performance hit it can lead to, as well as a possible “MDX injection” vulnerability. Since in most cases we would be using the passed parameters in a subcube expression or on the slicer axis (the WHERE clause), performance should not suffer as badly as it would if we used the parameter inside a calculation. However, when we need to use a parameter directly in a calculated measure, we are better off avoiding an unCONSTRAINED function.

    Therefore, we may instead use SetToStr on the summary reports and pass a String parameter to a CONSTRAINED StrToSet function in the drill-through report. This way we resolve the set expression once and pass it on to the underlying report as a string. We could do that in a calculated measure returning a String, which is then passed on as a Field to the drill-through parameter. However, in the rare case where many rows travel from the SSAS server to the SSRS server, this could be slow.
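
    As a sketch (the cube, hierarchy, measure and parameter names are assumptions), the summary report's query could expose the serialised set as a calculated measure:

    WITH MEMBER [Measures].[ProductSetStr] AS
        SETTOSTR(NONEMPTY([Product].[Product].[Product].Members, [Measures].[Amount]))
    SELECT { [Measures].[Amount], [Measures].[ProductSetStr] } ON COLUMNS,
        [Date].[Year].[Year].Members ON ROWS
    FROM [Sales]

    and the drill-through report can then consume the resulting Field safely through a CONSTRAINED function:

    SELECT [Measures].[Amount] ON COLUMNS
    FROM [Sales]
    WHERE STRTOSET(@ProductSet, CONSTRAINED)

    Because SetToStr emits fully qualified member names wrapped in braces, the string it produces satisfies the CONSTRAINED restrictions.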

    So, whether we use StrToSet without the CONSTRAINED flag, or a String parameter constructed by a SetToStr function, depends on the actual scenario, but it is good to have both options in our arsenal of tools and techniques when we need to implement some not-quite-standard piece of functionality.

    Reporting Services 2008 Layout Properties Glitch

    Today, while working with a basic sample dashboard in SSRS, I noticed that for some reason one of my subreports disappeared.

     
    While it rendered fine on its own in Report Manager, when I placed it within a subreport (with no other containers around it) it just refused to show up. One of the things I tried was the Layout menu item on the subreport.
     
     

    To my surprise, selecting the “Bring To Front” option solved my problem.

     

    This one was also hard to understand since the report rendered correctly in BIDS, leading me to think that there was something wrong with my browser or report server setup. It seems that report items can be sent behind the report background, which at first glance seems like a confusing and redundant bit of functionality. I also unsuccessfully tried to reproduce this in SQL Server 2005, so it must be a new “feature”. Also, it happened without any action on my part – I did not edit any layout options prior to the disappearance.

    EDIT (15/01/2010)
    The issue appears only with a subreport containing a Map. Interestingly, the Map actions (I have a Go To URL action on the states) still work, but the actual image somehow gets lost.

    EDIT (21/01/2010)
    Microsoft replied with the following: 

    Posted by Microsoft on 20/01/2010 at 11:25 AM
    Thank you for reporting this issue. It only happens if the Map is the first report item on the Subreport and there are no Data Sets specified. You can work around this problem by placing the Map into a Rectangle report item.

    On the search for the perfect OLAP browser

    Browsing Analysis Services cubes is typically done by power users in SQL Server Management Studio or Excel. However, because of a requirement stating that both intranet and non-intranet users need to be able to access and browse an OLAP cube, I had to go out there and try to find the best OLAP browser on the market. The client had been using Dundas OLAP Services with mixed success, and had purchased PerformancePoint with ProClarity but never used them because of infrastructure issues; that is why they used Dundas as an interim solution. Dundas’s OLAP Services is a user-friendly tool, but its performance is not ideal when it comes to large grids, so my client wanted something combining Dundas’s usability with Excel’s performance, which could be used both internally and externally.

    I contacted Microsoft with a question: “What tools can I use?” They came back to me with the following suggestions:

    • IntelliMax Solutions OLAP Browser (an Australian partner)
    • BI Companion
    • Panorama
    • Dundas
    • ProClarity

    I added PerformancePoint to the list, as well as Radar-Soft’s OLAP Browser, so my final list was:

    1. IntelliMax
    2. BI Companion
    3. Panorama
    4. Dundas
    5. ProClarity
    6. PerformancePoint
    7. Radar-Soft

    IntelliMax Solutions

    A sales guy told me that they would organise an online demo for me and then, if I was interested, an evaluation installation of their product on our servers. The demo happened half an hour ago, and it immediately became apparent that the product lacks one critical piece of functionality – it does not support attribute hierarchies. So, I cannot use anything but user-defined hierarchies, because, according to the support person, including attribute hierarchies makes the grids and reports “too big”. I, however, definitely need these big grids/reports. Otherwise, the tool is nice and simple, with very obvious support for Custom Data – you can set this property from the UI, which makes it easier to work in non-Kerberos-enabled environments. It also supposedly integrates with PerformancePoint and Reporting Services, but I did not test those parts because of the aforementioned problem with attribute hierarchies.

    BI Companion

    They were very friendly and responsive. I got an evaluation install, tried it out and it worked fine, apart from a small bug with non-aggregatable dimension hierarchies, which they fixed immediately. I was quite impressed with the product. It seemed to perform slightly better than Dundas, but it was also slightly less user-friendly. The interface imitates the Cube Browser and is very feature-rich. Unfortunately, the HTML version did not work with Firefox, so they suggested using a SilverLight version, which was not supported by my client. As one of my requirements was cross-browser operability (IE and Firefox at least), BI Companion became a no-go for me.

    Panorama

    Panorama’s NovaView product competes with PerformancePoint as a dashboard creation tool, and I have no idea why Microsoft recommended a competitor. I contacted their reseller in Australia and the sales person organised an online demo. I could play with their grid analytics components and they seemed OK. NovaView comes in two versions – a legacy Java version and a new Flash version still in development. The Flash version is quite nice, but still harder to use than BI Companion or Dundas. As a big miss in the current version, Panorama NovaView does not let the user add/remove dimension hierarchies on its grid columns. It can be done on rows, but not on columns, which is frustrating and ultimately a critical gap in functionality in my case. The Panorama support person told me that “they did not want to release all features yet” and that they have it going in their lab. He also advised me to use the Java version until the Flash version gets updated. I did, and found that the Java version is indeed fast, but not as fast as Excel or ProClarity. Furthermore, Panorama’s product is just too big for our needs, and all of its administrative functionality – user management, security and what-not – is just not necessary for us. Otherwise it looks nice.

    Dundas

    The Dundas OLAP Services grid is very usable – all the users need to do is drag and drop dimension hierarchies to browse the cubes they have access to. Unfortunately, it is a bit buggy and very slow once the number of cells to display exceeds the 10,000 range. After performance testing, we found that about 50-80% of the processing time is spent on the client machines, where the browser renders a huge HTML table. It turns out that it is not only Dundas which has this issue – every HTML OLAP tool I tested suffers from exactly the same problem, including Dundas, BI Companion and Radar-Soft, and there is nothing we can do about it. Dundas supports paging, so the results can be split into a number of pages. It performs better with that feature turned on, but exporting then requires a workaround, because in order to export the whole grid the user needs to export all the pages individually and then manually concatenate them in Excel. Since the HTML table problem cannot be rectified by Dundas or by us, Dundas’s OLAP Services remains what it was intended to be – an interim solution.

    ProClarity

    Dated, but FAST. ProClarity still beats all other solutions with its performance. It was as fast as Excel and much, much faster than anything else. Although it lacks the apparent user-friendliness of some of the other solutions, it does allow users to conveniently browse OLAP cubes. Furthermore, it integrates well with PerformancePoint. Unfortunately, 6.2 is its last ever version. Whether its functionality gets implemented as part of SharePoint 2010 is still to be announced by Microsoft. The way things currently look, ProClarity is still the best solution for ad-hoc OLAP browsing. It is stable and, even though its installation is not quite smooth (e.g. it requires IIS to be running in 32-bit mode), it does offer what most users would feel comfortable with. Its drawbacks: a tabbed interface, no drag-and-drop, and outdated graphics. Not perfect, but good enough. Oh, and it works with Firefox.

    PerformancePoint

    The Microsoft dashboarding tool offers an Analytics Grid part, which can be used for ad-hoc analysis. Unfortunately, it does not allow the users to add more dimension hierarchies on rows or columns. What it does well is drilling up and down the hierarchies and slicing by other hierarchies. Unfortunately, this is hardly enough when it comes to full-featured ad-hoc reporting.

    Radar-Soft

    The small OLAP browser Radar-Soft offers seems fast at first glance. It would have been a nice small tool if it did not have one major problem – paging. It pages everything; it even creates a number of pages within cells. This improves performance, but makes it impossible to use if we want to export a whole grid to Excel. I guess some of the paging can be disabled, but I am quite sure that, since it is HTML, it will have the aforementioned performance issues when displaying a relatively large HTML table.

    Conclusion

    The three HTML solutions – Radar-Soft, Dundas and BI Companion – all had the same issue with performance. My guess is that there is no HTML solution which is faster, because in the end it comes down to browser performance rather than the MDX or .NET implementation.

    Panorama, with its Java and Flash versions, is maybe a good solution for full-featured dashboarding and reporting, but it is a massive overkill for our circumstances. Also, it is not very user-friendly and not very fast.

    PerformancePoint and ProClarity are my favourites, and I believe that Microsoft’s offering provides the best capabilities for our users, at least until something better comes out of the Microsoft workshop. These integrate nicely with SharePoint and perform very well. While PerformancePoint is very user-friendly, I would not say that about ProClarity. Hopefully in the future we will see nicer OLAP browsing from Microsoft based on these two tools.

    A word about SilverLight and Excel

    While SilverLight OLAP browsers are available from various software houses (most notably, BI Companion has one in beta), it is a relatively new technology and was not an option for me, as it was not in my client’s SOE. Also, from my testing it does not seem much faster than HTML, but it definitely looks nicer.

    Excel is, in my opinion, the best tool for browsing OLAP cubes inside the organisation. It is fast and provides a rich set of analytics capabilities. There are a lot of discussions about Excel as a report authoring tool on various blogs, but I believe that for ad-hoc reporting there is nothing out there which beats it. Despite this, I could not recommend it to my client, as the solution is exposed to external users, and letting them connect to our cubes directly through Excel is just impossible considering our security infrastructure. Also, Excel Services does not provide enough ad-hoc functionality, as it does not allow users to modify the Pivot Tables.

    Wishes

    I wish and hope that Microsoft releases a full-featured, user-friendly and performant OLAP browser as part of Visual Studio and .NET, or as part of SharePoint 2010. Such a tool is a must and a serious gap in the Microsoft BI stack. The combination of Excel + PerformancePoint + ProClarity does the trick for now, but as ProClarity is being discontinued, there must be something in the future toolset which takes its place.

    A few other bloggers wrote posts closely related to this topic:

    Richard Lees – Which cube browser for Microsoft OLAP
    Richard Lees – What’s preventing Excel from being the ubiquitous cube browser
    Chris Webb – Proclarity Migration Roadmap (or lack thereof)

    Developing reports for colour-blind people

    According to Wikipedia, 8% of all males and 0.4% of all females in Australia are colour-blind to some degree. The percentages are slightly different in the USA – 7% for males and 0.4% for females. It is estimated that the figures are similar in other countries, which means that a very high percentage of people may have difficulties distinguishing colours. Therefore, a significant part of our potential report users may not be able to interpret some of our graphs and charts correctly (assuming an even gender split, there would be around 400 colour-blind males and 20 colour-blind females in an organisation which employs 10,000 people).

    For example, the following chart is quite nice and simple, but can be useless to colour-blind people:

    [image: chart-colour]

    versus:

    [image: chart-grey]

    Similarly, this table is somewhat confusing if we remove the colours:

    [image: table-colour]

    versus:

    [image: table-grey]

    We have to be very careful with design of KPIs and dashboards in general:

    [image: kpi-colour]

    versus:

    [image: kpi-grey]

    As we can clearly see from the above examples, not being able to clearly distinguish colours makes our poorly designed charts and tables confusing.

    We should always keep in mind the following considerations when designing and implementing our most-common report items to ensure that they can be used by everyone in our client organisation:

    • KPI indicators must be different in shape rather than just colour
    • Line charts should utilise markers with different shapes
    • Bar graphs should include a marker on the top of each bar
    • Avoid colour-coded pie-charts – they can be extremely difficult to read for a person with even the slightest colour-blindness condition
    • Avoid colour-coding tables – either backgrounds or text colours are usually unacceptable

    Other more general suggestions:

    • Shapes are much more important than colours
    • Greyscale and shades of the same colour are acceptable, as a colour-blind person can distinguish between those

    Of course, even after all our efforts to create reports readable by everyone, we may miss some detail. The best way to ensure that we have done a good job is to test. There are two good and easy ways to do that:

    1. Printing – print the report in black and white and see if all information is well presented
    2. Changing display to greyscale – Windows lets us choose how many colours we want to display on our screen. Choosing greyscale and then playing with our report is possibly the best way to ensure that colour-blind people can use our reports.

    It is fairly simple and easy to always apply these design principles when creating reports. I have found that most organisations are quite happy to include minor tweaks to their dashboards and reports when they understand how important they could be for some of their employees. Furthermore, it helps to promote accessibility to technology regardless of minor disabilities and gender.

    Reporting Services Styles in a Database Table

    Article published in SQL Server Central on 2009/03/26

     

    As Adam Aspin previously discussed in his series Add Styles to Your Reporting Services Reports on SQLServerCentral, it is very useful to be able to change various report attributes, such as colours and border properties, in a centralised way, just as we would with CSS files or skins. Currently, Reporting Services does not offer a similar mechanism, so Adam showed us how to apply a “style” to a report using custom code – either by embedding the code for all attributes directly in the report, or by referencing an external source. However, there is another way to handle such attributes: storing them in a database table. There are certain advantages in doing so, but there could be some disadvantages as well.

     

    I will proceed to show how to build a report utilising this approach and then I will discuss its strengths and weaknesses.

     

    Database Table

     

    I would suggest keeping the database tables which store values such as report attributes, Reporting Services environment settings (for dynamically generated javascript) and other dynamic report properties separate, by naming them in a unified way. I have used the CTL (for “control”) prefix in the past. For illustrative purposes, here is a simple definition of a database table called CTLReportColours:

     

    CREATE TABLE CTLReportColours (
        HeaderColour nvarchar(10),
        FooterColour nvarchar(10),
        BodyTextColour nvarchar(10)
    )

     

    As actual values we can store either Reporting Services colour names (WhiteSmoke, SteelBlue, etc.) or colour codes (#00AA00, #CCBB66, etc.). I am mentioning this because I have had issues in the past with printers which rendered Red as Pink, and changing to hex colour codes fixed the problem. That should be the topic of another article, so I will drop the subject here.
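
    For example, populating the table could be as simple as (the values here are purely illustrative):

    INSERT INTO CTLReportColours (HeaderColour, FooterColour, BodyTextColour)
    VALUES (N'SteelBlue', N'WhiteSmoke', N'#333333')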

     

    Stored Procedure

     

    We can either retrieve the values from the database table in a stored procedure we are already using on our report, or we can create a new one built specifically for getting report attributes. We can use the first approach if our report uses only one data set. Then we can assign all report items to that data set and simply use the Fields collection items in our expressions. However, if we have multiple data sets assigned to different report items, such as tablixes, we may find that we need to code the retrieval of our report attributes in many stored procedures. To overcome this limitation, we can create a separate stored procedure which returns the set of values stored in the database table. A simple example:

     

    CREATE PROCEDURE usp_CTL_Get_Report_Colours
    AS
    BEGIN
        SELECT HeaderColour
             , FooterColour
             , BodyTextColour
        FROM CTLReportColours
    END

     

    Data Set

     

    In Reporting Services we can use usp_CTL_Get_Report_Colours to construct a data set which will have three fields named after the column names.

     

    Usage

     

    Using a separate data set means that we need to be able to use its fields in tables which already have a data set assigned to them. Since we can have only one data set per report element, we need to create a separate parameter for each of the fields in the attributes data set we have created. In our case, we need three parameters: HeaderColour, FooterColour and BodyTextColour. All of these should be hidden and should be populated by our data set in the outermost report only – if we have subreports, they should get their values from the parent report, so we avoid multiple executions of the usp_CTL_Get_Report_Colours stored procedure.
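
    As a sketch, assuming the data set built on usp_CTL_Get_Report_Colours is named ReportColours, the default value expression of the hidden HeaderColour parameter would be:

    =First(Fields!HeaderColour.Value, "ReportColours")

    and any report item can then pick it up, for example in its BackgroundColor property:

    =Parameters!HeaderColour.Value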

     

    Conclusion

     

    The described method for handling report attributes can be preferable because:

    • We do not need to know anything about coding to create dynamic reports
    • Reports are dynamic with no need to re-deploy them
    • Report attributes are in a database table, which makes their management easy
    • Various management screens can be built on top of the database table in .NET with relative ease
    • Retrieving the values through a stored procedure allows us to pass parameters back, which can then drive what colours get displayed – making the retrieval flexible if we need to display different colours for different departments or people
    • Centralised and secured storage to all report attributes – environment settings, styles, etc.

     

    As I see it, there is only one drawback: having to create multiple report parameters in all our reports. In my opinion this is a much smaller issue than having to manually change report properties with every change in requirements, and overall it is well worth the effort.

    Improving performance and maintainability by keeping logic out of our reports

    As Reporting Services allows us to implement some quite complex logic, report authors are often tempted to build a lot of it into their reports. Having some simple calculations there does not generally cause harm, as simple + and – operations specific to a report should probably be built into it. However, I have seen aggregations, complex queries and code hacks built in Reporting Services which would normally be performed fairly easily in Analysis Services or SQL Server.

    There are a few distinct benefits we can get from keeping logic out of our reports.

    • Firstly, having our MDX scripts and SQL queries (stored procedures) in the back-end allows us to use them in different reports with no need to painfully synchronise multiple copies with every change. It also allows MDX-ignorant business users to use complex measures, increasing the value of the BI solution for them.
    • Then, our database server is usually maintained by people who have skills in SQL or MDX and can analyse and optimise the scripts there, instead of relying on report authors to do so. Even if we have BI professionals maintaining the whole solution, it is far more convenient to have most of the logic stored on the same server. Whether the problem is a performance bottleneck or a bug, it is far easier to fix it by tracing cube calculations or stored procedures than logic stored in a report.
    • Sending a large amount of data between the back-end server and SSRS increases the stress on the network, which can be critical when multiple requests with millions of rows in their results are passed to it.
    • Caching is important when different reports use the same stored procedures or MDX calculations.
    • And a last point: a database/OLAP server is simply faster at calculations than SSRS.

    In general, using stored procedures as SQL Server data sources instead of building queries in SSRS, or alternatively keeping MDX calculations in the OLAP cubes, is the first step towards avoiding excessive code accumulation in SSRS. Then, avoiding anything more complex than simple sums and groupings in the reports, and moving that logic to the stored procedures or MDX calculations, is another desirable way to structure our solutions. Finally, even simple things like an Actual – Budget variance measure should be built in MDX or SQL rather than in the reports. Following these simple rules and keeping ourselves disciplined when building reporting solutions improves their performance, maintainability and even usability.
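
    For instance, here is a minimal sketch of the variance measure just mentioned as a cube calculation (assuming measures named [Measures].[Actual] and [Measures].[Budget] exist in the cube):

    CREATE MEMBER CURRENTCUBE.[Measures].[Actual vs Budget] AS
        [Measures].[Actual] - [Measures].[Budget],
        FORMAT_STRING = "Currency",
        VISIBLE = 1;

    Every report can then reference [Measures].[Actual vs Budget] directly instead of re-implementing the subtraction in report expressions.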

    Speed up MDX query preparation in Reporting Services

    We often need to build advanced functionality into our reports, and since the Reporting Services query designer does not provide a very advanced way to create MDX statements, our only option is to get our hands dirty and write some code ourselves. One of the things we must consider is the time the query designer needs to validate our syntax, execute the query and determine which fields to create based on the data set it receives from Analysis Services.

    When we specify a number of query parameters, we have the option to give them default values:

    [image: query parameter default values]

    If we do not, the query will be executed with the default dimension member instead of a specific value as a parameter, which can be slow and unnecessary in the usual case where the default member is the (All) member. Instead, we should always specify a value to be used for limiting the sample data set; ideally, it should be small but not empty. If the parameter is too limiting and the MDX query does not return any values, the data fields will not be populated.
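
    For example (the names below are purely illustrative), rather than leaving a year parameter to fall back to the (All) member, we might set its default to a single small member:

    Parameter: DateCalendarYear
    Default value: [Date].[Calendar Year].&[2009]

    so that the design-time query touches one small slice of the cube instead of everything.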

    Specifying default values speeds up development, especially when working with complex cubes, where Reporting Services queries can be slow because the dimension cross joins can be very big.

    Chris Webb also blogged about this issue. I strongly recommend you read his post.

    Report Designer in BIDS 2008 Column Visibility Bug

    A small issue with BIDS I just found out about: if you want to modify the visibility expression of a column in Business Intelligence Development Studio 2008, you may find that the Fields list is empty and that Report Designer cannot see a dataset binding for the column. Regardless of that, if you type in a Fields!<field_name> expression you get a red curvy line indicating something is wrong with it, but you will still be able to deploy the report, and the visibility of the column will get toggled by the expression. So it actually works, but it can be annoying.
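
    As a sketch (the field name is hypothetical), typing an expression like the following into the column's Hidden property still deploys and works despite the red underline:

    =IIF(Fields!OrderCount.Value = 0, True, False)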

    Note (2009-02-07): This appears if you have a tablix within a tablix and the outer tablix does not have a dataset associated with it.

    Refreshing report data through View Report button

    In Reporting Services we already have a refresh button:

    [image: refresh-button]

    which refreshes the displayed report data. Users, though, tend to click the View Report button instead and it is sometimes necessary to enforce a data refresh every time a user clicks on View Report.

    Normally, the View Report button should be used when a report user changes the report parameters – Reporting Services will try to retrieve data from the data store after clicking it only if some parameter value has changed.

    Using this knowledge, we can set up a dummy internal parameter which changes its value constantly. If we want to make sure the parameter value always changes, we can build a datetime parameter with an available and default value of Now(), which returns the current date and time and will therefore always be different between clicks on View Report.
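
    A minimal sketch of the set-up (the parameter name is hypothetical):

    Parameter: ForceRefresh – hidden/internal, data type DateTime
    Default value: =Now()

    Because =Now() is re-evaluated on every click of View Report, Reporting Services sees a changed parameter value and re-queries the data source instead of serving cached data.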

    A possible issue with this way of enforcing data refresh is that the report cache will never be used, which could cause a performance problem. However, if performance is less of a concern than avoiding the need to educate our users, this report “hack” may be quite useful.

    Comments Collection – a simple implementation

    In order to have comments on a report we need a comments storage place (a database table) and a user interface (an ASP.NET page). I will skip the explanation of how to build and integrate the page in our reports, because I have already discussed javascript in Advanced javascript in Reporting Services, while building an ASP.NET comments collection page is outside the scope of this blog and is a fairly simple task for any .NET developer.

    Having provided the prerequisites, the way our report will behave is very simple – it will pass to the page the location of the node against which the comment is entered, and the page will in turn call a stored procedure which will update the comments table.

    The following graph illustrates this set-up:

    [image: comments_graph]

    Our comments table has to be designed after considering the grain of the nodes against which we store comments. Our users may require the ability to comment against fact values at any level of our hierarchies – including aggregations. In order to allow for maximum flexibility, we may need a table which mimics our fact table structure but stores comments instead of fact values.

    Also, we need to build two stored procedures – one to update our comments table and another to retrieve the comments we want to display on the reports. These are similar in every way but the action they perform. Each of the stored procedures has to accept our dimension coordinates as parameters – most likely the dimension keys – in order to correctly locate the table row it needs to update or select, as sketched below.
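
    A minimal sketch (the table and key names are assumptions – a real solution would mirror the actual fact table's dimension keys):

    CREATE TABLE FactComments (
        DateKey int NOT NULL,
        ProductKey int NOT NULL,
        Comment nvarchar(max) NULL,
        CONSTRAINT PK_FactComments PRIMARY KEY (DateKey, ProductKey)
    )
    GO

    -- Called by the comments page to store or overwrite a comment
    CREATE PROCEDURE usp_Update_Comment
        @DateKey int, @ProductKey int, @Comment nvarchar(max)
    AS
    BEGIN
        UPDATE FactComments
        SET Comment = @Comment
        WHERE DateKey = @DateKey AND ProductKey = @ProductKey
        IF @@ROWCOUNT = 0
            INSERT INTO FactComments (DateKey, ProductKey, Comment)
            VALUES (@DateKey, @ProductKey, @Comment)
    END
    GO

    -- Called by the report data set to display the comment
    CREATE PROCEDURE usp_Get_Comment
        @DateKey int, @ProductKey int
    AS
    BEGIN
        SELECT Comment
        FROM FactComments
        WHERE DateKey = @DateKey AND ProductKey = @ProductKey
    END
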
    There are some other considerations we need to keep in mind:

    • Unfortunately, Reporting Services does not allow us to have report items which are dynamically updated, so it is not possible to have the comments displayed without refreshing the report data.
    • If two report users simultaneously try to comment on the same issue the .NET code must be able to resolve the contention issue caused by the “commentators”.
    • Data storage may be a problem as users are usually reluctant to set a limit on the length of the comments.

    Adding comments to our reports is simple and easy and should not cause too much of a headache if it is designed well. It is very desirable functionality on most reports, as these are generally shared by many users, who can be responsible for different report items and need to be able to explain the report contents to their peers (especially when forecasting and planning are involved).

    Amendment 1 (2008-10-13): In a conversation with a colleague who has recently implemented a comments collection solution, I came across the idea of storing the comments in a separate comments dimension table, mapped to the already described fact table mock-up. The reason for separating the comments through a mapping table is reduced storage when the same lengthy comments are used against multiple dimension coordinates (in her project – comments rolled over periodically).