Fun with T-SQL

As a tribute to some fairly restrictive .NET design I once fell victim to, I wrote this piece of code:

Use [Database]
Select [Select]
From [From]
Where [Where] = 'Where'
And [nvarchar(50)] = 'nvarchar(50)'
And [int] = 1
The CREATE and INSERT statements for the above query:
CREATE TABLE [dbo].[From](
 [Id] [int] NOT NULL,
 [Where] [nvarchar](50) NOT NULL,
 [On] [nvarchar](50) NOT NULL,
 [Select] [nvarchar](50) NOT NULL,
 [From] [nvarchar](50) NOT NULL,
 [int] [int] NOT NULL,
 [nvarchar(50)] [nvarchar](50) NOT NULL
)

INSERT INTO [From] (  [Id]
, [Where]
, [On]
, [Select]
, [From]
, [int]
, [nvarchar(50)])
SELECT 1, 'Where', 'On', ':-)', 'From', 1, 'nvarchar(50)'

Enjoy!

  
 


EXECUTE AS someone else – impersonation in stored procedures

Recently, I had to implement a stored procedure which runs under a service account context. It was invoked by an Informatica workflow, and its purpose was to clean a cache table and to update all statistics on a database by invoking sp_updatestats. As the Informatica user did not have enough permissions to perform these tasks, I had to use EXECUTE AS in the stored procedure. The EXECUTE AS statement can also effectively be used to limit the execution scope, by granting limited permissions to the account under which the stored procedure executes:
CREATE PROCEDURE [dbo].[usp_Update_Statistics]
WITH EXECUTE AS 'user' -- the impersonated database user
AS
BEGIN
 -- clear the cache table here
 EXEC sp_updatestats
END
Immediately I encountered various problems with the execution of the stored procedure. After some research it turned out that a few preconditions need to be satisfied before we can execute it:
1. TRUSTWORTHY setting on the database needs to be turned on:
     ALTER DATABASE database SET TRUSTWORTHY ON;
2. The owner of both the context database and the master database must be the same. To set this property we need to:
   2.1 Find who is the owner of master:
           exec sp_helpdb
   2.2 Set the context database owner to the same owner:
           ALTER AUTHORIZATION ON DATABASE::database TO user
3. The account under which we EXECUTE AS needs to be a database/server principal. The database and server principals can be found in the sys.database_principals and sys.server_principals catalog views.
4. The account needs to be granted impersonate permissions:
     GRANT IMPERSONATE ON USER::[impersonated user] TO [user executing the sproc]
5. If a server login is specified (instead of a database user), it needs to be mapped to a database user.
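Putting the preconditions together, a minimal setup script could look like the sketch below. All names here are purely illustrative – [ReportDB] is the context database, [InformaticaUser] executes the stored procedure and [ServiceUser] is the impersonated account – and must be replaced with your own:

```sql
-- 1. Mark the context database as trustworthy:
ALTER DATABASE [ReportDB] SET TRUSTWORTHY ON;

-- 2. Align the owner of the context database with the owner of master
--    (found via exec sp_helpdb), commonly sa:
ALTER AUTHORIZATION ON DATABASE::[ReportDB] TO [sa];

-- 4. Allow the executing user to impersonate the EXECUTE AS account:
GRANT IMPERSONATE ON USER::[ServiceUser] TO [InformaticaUser];
```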
Also, if SQL Server is running under a local account/service it is not possible to use the EXECUTE AS statement.
The same statement can be used to execute SQL Server Agent jobs under different context – something particularly useful when trying to run them through Integration Services.

Refreshing report data through View Report button

In Reporting Services we already have a refresh button:

[Image: the Reporting Services refresh button]

which refreshes the displayed report data. Users, though, tend to click the View Report button instead and it is sometimes necessary to enforce a data refresh every time a user clicks on View Report.

Normally, the View Report button should be used when a report user changes the report parameters; Reporting Services will only retrieve data from the data store after clicking it if some parameter value has changed.

Using this knowledge, we can set up a dummy internal parameter which changes its value constantly. To make sure the parameter value always changes, we can build a datetime parameter with its available and default value set to Now(), which returns the current date and time and will therefore always differ between clicks on View Report.

A possible issue with this way of enforcing a data refresh is that the report cache will never be used, possibly causing a performance problem. However, if performance matters less than avoiding the need to educate our users, this report “hack” may be quite useful.

Comments Collection – a simple implementation

In order to have comments on a report we need to ensure we have a comments storage place (a database table) and a user interface (an asp page). I will skip the explanation of how to build and integrate the asp page in our reports because I have already discussed javascript in Advanced javascript in Reporting Services, while building an asp comments collection page is outside the scope of this blog and is a fairly simple task for any .NET developer.

Having provided the prerequisites, the way our report will behave is very simple – it will pass to the asp page the location of the node against which the comment is entered, which in turn will call a stored procedure which will update the comments table.

The following graph illustrates this set-up:

[Diagram: the comments collection set-up]

Our comments table has to be built after considering the grain of the nodes against which we store comments. Our users may require the ability to comment against fact values on any level of our hierarchies – including aggregations. In order to allow for maximum flexibility we may need to have a table which mimics our fact table structure but instead of fact values stores comments.

Also, we need to build two stored procedures – one to update our comments table and another one to retrieve the comments we want to display on the reports. These are similar in every way but the action they perform. Each of the stored procedures has to accept our dimension coordinates as parameters – most likely the dimension keys – in order to be able to correctly store or locate the table value which it needs to update or select.
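As a rough sketch of this design (all table, column and procedure names below are illustrative, and the dimension keys must mirror your actual fact table grain), the comments table and the two procedures could look like:

```sql
-- Mirrors the fact table grain, but stores a comment instead of measures.
CREATE TABLE dbo.Fact_Comments (
    Date_Key int NOT NULL,
    Business_Key int NOT NULL,
    Product_Key int NOT NULL,
    Comment_Text nvarchar(max) NOT NULL
)

-- Store (or replace) the comment at a given set of dimension coordinates.
CREATE PROCEDURE dbo.usp_Save_Comment
    @Date_Key int, @Business_Key int, @Product_Key int,
    @Comment_Text nvarchar(max)
AS
BEGIN
    UPDATE dbo.Fact_Comments
    SET Comment_Text = @Comment_Text
    WHERE Date_Key = @Date_Key
      AND Business_Key = @Business_Key
      AND Product_Key = @Product_Key

    IF @@ROWCOUNT = 0
        INSERT INTO dbo.Fact_Comments (Date_Key, Business_Key, Product_Key, Comment_Text)
        VALUES (@Date_Key, @Business_Key, @Product_Key, @Comment_Text)
END

-- Retrieve the comment at the same coordinates, for display on the report.
CREATE PROCEDURE dbo.usp_Get_Comment
    @Date_Key int, @Business_Key int, @Product_Key int
AS
BEGIN
    SELECT Comment_Text
    FROM dbo.Fact_Comments
    WHERE Date_Key = @Date_Key
      AND Business_Key = @Business_Key
      AND Product_Key = @Product_Key
END
```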
There are some other considerations we need to keep in mind:

  • Unfortunately, Reporting Services does not allow us to have report items which are dynamically updated, so it is not possible to display new comments without refreshing the report data.
  • If two report users simultaneously try to comment on the same issue the .NET code must be able to resolve the contention issue caused by the “commentators”.
  • Data storage may be a problem as users are usually reluctant to set a limit on the length of the comments.

Adding comments to our reports is simple and easy and should not cause too much headache if it is designed well. It is a very desired functionality on most reports as these are generally shared by many users, who can be responsible for different report items and need to be able to explain the report contents to their peers (especially when forecasting and planning is involved).

Amendment 1 (2008-10-13): In a conversation with a colleague who has recently implemented a comments collection solution, I came across the idea of having the comments stored in a dimension comments table, then mapped to the already described fact table mock-up. The reason for separating the comments through a mapping table is the reduced storage in case the same lengthy comments are used against multiple dimension coordinates (in her project – rolling over comments periodically).

Colourful reports with no pictures – use of alternative fonts

When constructing a Reporting Services report we have the option of using pictures to illustrate a concept – an Edit button can look like a pencil, traffic lights are usually presented with small icons, a print button can show a small printer. While in PerformancePoint we must use small pictures for these, in Reporting Services we can also use alternative fonts from the Microsoft fonts library. Standard Microsoft options are the Wingdings and Webdings font families. Using them we can construct horizontal bar graphs, traffic lights and various other icons. The benefits of doing this are: seamless exporting to PDF and Excel, easy change of an icon throughout all reports – both its appearance and its colour – and easy synchronisation of colours between various report icons.

Examples of report items using Wingdings are:

Horizontal Bar Graph:

[Image: horizontal bar graph built with Wingdings characters]

Traffic Lights:

[Image: traffic lights built with Wingdings]

[Image: traffic lights built with Wingdings 2]

Combination of icons, traffic lights and trend arrows:

[Image: combined icons, traffic lights and trend arrows]

All of the examples above use Wingdings and Wingdings 2. The definition of the characters, fonts and their colours are stored in control tables, thus they are easily modified without altering the reports.

The first example shows a dynamically generated bar graph where the length of the character string corresponds to the numbers displayed to its left. The largest number fills up the entire table cell, and as the numbers get smaller, the number of Wingdings characters proportionally reduces.
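One way to produce such a bar is to build the character string in the dataset query itself. The query below is only a sketch with made-up table and column names; 'n' renders as a filled square once the textbox font is set to Wingdings:

```sql
-- Hypothetical dataset query: scale each amount to a bar of up to 20
-- filled squares ('n' renders as a filled square in the Wingdings font).
SELECT Business_Unit,
       Amount,
       REPLICATE(N'n',
           CAST(20.0 * Amount / MAX(Amount) OVER () AS int)) AS Bar_String
FROM dbo.Fact_Summary
```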

The second and the third reports utilise Wingdings and Wingdings 2 to display colour-blind friendly traffic lights.

The last report fragment illustrates a combination of traffic lights, risk trend arrows and edit characters (pencil icons), all of which use conformed colours, easily synchronised and easily interchangeable. The colour of the upwards trend arrow, for example, is exactly the same as the one used for the red traffic light.

There are a few considerations when using these fonts.

Firstly, only Wingdings and Webdings come with a standard Windows installation, while Wingdings 2 and Wingdings 3 are packaged with Microsoft Office. This is important as the report users need to have all report fonts installed on their computers so that their browser can render them. Also, if we want to allow for printing and exporting the reports to PDF or Excel the fonts must be installed on the Reporting Services server instance as well. Provided we can guarantee this set-up all of the described functionality is seamless for the report users.

Using blank ASCII character in PerformancePoint to avoid trimming

Recently I had a problem with a PerformancePoint filter, which got automatically trimmed. The filter was displaying a small custom cut of a hierarchy and the Business Analyst on the project wanted to see it always expanded in a simple single select drop-down instead of in a tree. The tree view was undesirable because the hierarchy was very small and clicking twice to get to the third level was too much for the users of the reports.

Unfortunately, indenting the drop-downs with blank spaces is not possible when using a List filter, as PerformancePoint trims the items in it, providing a flat list of names. Indenting with dashes or some other visible character is usually not visually pleasing either, and since there is no way to use rich text and colour some of the characters in white, the only option is to use an invisible character which does not get trimmed.
Fortunately, there is such a character with an ASCII code of 255. It is a space-like character, which does not get trimmed by PerformancePoint in a List filter. Therefore, if we indent our list with ASCII-255 characters we achieve the goal of having nicely arranged list in a simple filter. Keeping Alt pressed and then typing 255 from the numeric keypad results in typing one such whitespace character.
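If the filter list is fed from a relational control table, the indentation can also be generated in the query rather than typed by hand. The sketch below uses illustrative table and column names, and assumes CHAR(255) produces the same non-trimmed character in your collation:

```sql
-- Indent each member with two non-trimmed ASCII-255 characters per level.
SELECT REPLICATE(CHAR(255), 2 * (Hierarchy_Level - 1)) + Member_Name AS Display_Name
FROM dbo.Ctl_Filter_Members
ORDER BY Sort_Order
```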
As it is generally not good practice to use this character in code, since distinguishing it from a space is not easy, it is advisable to provide plenty of comments around code fragments utilising it, including the way to type it.

Interactive reports – passing content back to the database

A Reporting Services report generally exposes content to end-users, but some interactivity can be provided by embedding asp pages which write to the reporting back-end. While for complex user interaction where user input is required (e.g. comments) asp pages are the only way to go, there is another area of simple functionality, for example:

  • saving default values for selected parameters
  • locking of functionality by a power user
  • deleting data from a fact table

or in general – if users do not need to enter any input, but rather trigger some action in the back-end, we can use stored procedures without writing any .NET code.

As an example, we will save a selected Business Unit from a parameter drop-down box to a database table which is used to derive the default value for that parameter. As different users may be interested in different Business Units, we will store not only the default parameter value but the user name as well.

Firstly, let’s assume that we have already created a report called “Balance Sheet”, which has two parameters – Business_Unit and Period. To keep the case simple, I will assume that the parameter is driven by a stored procedure in a relational back-end.

We first need a place to store the default parameter values – we can simply create a Ctl_User_Preferences table:

CREATE TABLE Ctl_User_Preferences (
User_Id nvarchar(50),
Default_Business_Unit nvarchar(50)
)

The default parameter value is retrieved from that table by a stored procedure which takes a User Id as a parameter and returns the Business Unit. It is then used as the Default Value for the report parameter in Reporting Services.
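A minimal sketch of that retrieval procedure (the procedure name is illustrative) could be:

```sql
CREATE PROCEDURE dbo.usp_Get_Default_Business_Unit
    @User_Id nvarchar(50)
AS
BEGIN
    -- Returns the saved Business Unit for this user, or no rows
    -- if the user has not saved a preference yet.
    SELECT Default_Business_Unit
    FROM dbo.Ctl_User_Preferences
    WHERE User_Id = @User_Id
END
```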

Now we can investigate the process of populating the table. We place a textbox showing “Save Preferences” text on our Balance Sheet report which links to another report, confirming the user action and writing the selected value of the Business_Unit parameter to the Ctl_User_Preferences table.

We have to create the “confirmation” report first (e.g. “User Preferences Confirmation”). It has to have two parameters – Business_Unit and another, hidden, dummy parameter which we can call Save_Parameter. The Save_Parameter parameter must be able to accept Null values. Then, we create a stored procedure which accepts @User_Name and @Business_Unit parameters and updates or inserts the two values in the Ctl_User_Preferences table. Next we link @Business_Unit to the Business_Unit report parameter and the @User_Name parameter to User!UserID.Value. We link the stored procedure to the hidden parameter by using it for getting its default value (Null as it does not return anything). After completing these steps we end up with a report which “writes” a Business_Unit in the Ctl_User_Preferences table when opened.
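The update-or-insert procedure described above could be sketched as follows (again, the procedure name is illustrative):

```sql
CREATE PROCEDURE dbo.usp_Save_Default_Business_Unit
    @User_Name nvarchar(50),
    @Business_Unit nvarchar(50)
AS
BEGIN
    -- Overwrite an existing preference, or create one if none exists.
    UPDATE dbo.Ctl_User_Preferences
    SET Default_Business_Unit = @Business_Unit
    WHERE User_Id = @User_Name

    IF @@ROWCOUNT = 0
        INSERT INTO dbo.Ctl_User_Preferences (User_Id, Default_Business_Unit)
        VALUES (@User_Name, @Business_Unit)
END
```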

Next, we have to link this report to our main Balance Sheet report. I have described how we can link through the use of javascript to a pop-up report in the Advanced javascript in Reporting Services article. Using this technique we can create a textbox in the Balance Sheet report which spawns a pop-up which contains the User Preferences Confirmation report and also passes to it the Business_Unit parameter.

To summarise the flow of events: when a user clicks on the textbox, a new pop-up window containing the confirmation report opens, and while rendering it, Reporting Services calls the stored procedure which saves the Business Unit parameter to the Ctl_User_Preferences table. Next time the user opens the Balance Sheet report, the Business Unit parameter will default to the saved value, as its default value is retrieved from the Ctl table.

We can also use this technique to lock/unlock certain functionality, such as changing the reporting period, or even deleting data in our fact tables by placing links in a table or a matrix next to measures and passing their coordinates to the “confirmation” report.

This out-of-the-box functionality in Reporting Services can be quite powerful for all sorts of quick tweaks to our reports with comparatively small development effort, making it quite worthwhile to implement.

Filtering Unneeded Dimension Members in PerformancePoint Filters

Published on SQLServerCentral on the 2008-11-14:
http://www.sqlservercentral.com/articles/PerformancePoint/64565/
 

Sometimes we need to utilise dynamic dimension security in an Analysis Services solution and we also need to display dynamically the allowed members in a PerformancePoint filter. 

If our hierarchy is multi-level and we can expect to have security on multiple levels, PerformancePoint will display the full path up to the root member of the dimension. So, where in a Business Unit hierarchy we have access to some third-level members, in PerformancePoint we will see all their parents. For example, if our hierarchy looks like this:

All Business Units
- Europe
-- UK
-- France
-- Bulgaria
- North America
-- USA
-- Canada

and we give someone access to France, they will in fact see: 

All Business Units
- Europe
-- France

Indeed, when they select All Business Units or Europe they will still see only France data but this may be confusing. To eliminate the top levels we need to change the way we create our PerformancePoint filter. 

To achieve this, first we need to create a MDX Query filter. For more information about PerformancePoint filters and dynamic dimension security you can read the following brilliant post on Nick Barclay’s blog: PPS Data Connection Security with CustomData. Nick explains how to set up PerformancePoint to work with Analysis Services dynamic dimension security and related topics. I will now concentrate on actually filtering the members of the already set-up MDX Query filter. 

Instead of requesting all dimension members with a simple statement like: 

DESCENDANTS([Business].[Business Hierarchy].Members,, SELF_AND_AFTER) 

we can write some fairly simple MDX which means: 

Get me the descendants of all dimension members whose ascendants (excluding themselves) have no more than one child. 

And the MDX code is: 

DESCENDANTS(
  FILTER([Business].[Business Hierarchy].Members AS a,
    ((FILTER(ASCENDANTS(a.CurrentMember) AS a_asc,
        a_asc.CurrentMember.CHILDREN.Count > 1).Count = 1)
      And
      a.CurrentMember.Children.Count > 1)
    Or
    ((FILTER(ASCENDANTS(a.CurrentMember) AS a_asc,
        a_asc.CurrentMember.CHILDREN.Count > 1).Count = 0)
      And
      a.CurrentMember.Children.Count = 0)),
  , SELF_AND_AFTER)

The result will be France being displayed as the only available member in the PerformancePoint drop-down. Also, if we had two different allowed members, the code above would show us the top common parent of the two. Therefore, if we had France and Canada as the allowed set of dimension members, the drop-down would consist of the following hierarchy: 

All Business Units
- Europe
-- France
- North America
-- Canada

thus satisfying our requirements.

Excel and MDX – Report Authoring Tips

Excel is a powerful BI tool which is often overlooked as an inferior alternative to Reporting Services for distributing reports, because of its potential for causing chaotic mess in organisations. As a number of speakers at Australian Tech.Ed 2008 pointed out, replacing Excel is often the goal of organisations implementing BI solutions, and using Excel as a main reporting tool can often be misunderstood by prospective clients. SharePoint Excel Services go a long way towards helping organisations control the distribution of Excel files, and these, in combination with a centralised Data Warehouse and Analysis Services cubes on top of it, will no doubt be a competitive solution framework, especially considering the familiarity of business users with the Microsoft Office suite.

During a recent implementation of a BI solution I was approached with the request to provide certain reports to a small set of business users. As the project budget did not allow for a full Reporting Services solution to be built for them, Excel was appointed as the end-user interface to the OLAP cube. The users were quite impressed by the opportunities that direct OLAP access present to them but were quite unimpressed by the performance they were getting when trying to create complex reports.

After investigating the problem I noticed a pattern – the more dimensions they put on a single axis, the slower the reports were generated. Profiling the SSAS server showed that Excel generates quite inefficient MDX queries. In the case where we put one measure and multiple dimensions in a pivot table, the Excel-generated MDX query looks similar to this:

SELECT {[Measures].[Amount]} ON COLUMNS,
{NonEmpty(
    [Time].[Month].ALLMEMBERS *
    [Business].[Business Name].ALLMEMBERS *
    [Product].[Product Name].ALLMEMBERS)} ON ROWS
FROM OLAP_Cube

What this means is that Excel performs a full cross-join of all dimension members and then applies the NonEmpty function on top of the resulting set. If our dimensions are large, this can cause significant performance problems for the reports.

Unfortunately, it is not possible to replace the query in Excel as it is not exposed to us, and even if we could replace it, it would be pointless, as changing the user selections of the dimensions to be displayed would cause it to fail. There are some Excel add-ons available for changing the query string, but issues such as distributing these add-ons and the inability of business users to edit MDX queries diminish their benefits.

While waiting for an optimised query generator in Excel, we can advise business users of ways to optimise their reports themselves. In my case these were:

  1. Consider using report parameters instead of placing all dimensions on rows or columns in a pivot table. This will place them in the WHERE clause of the query instead of the SELECT clause and will not burden the cross-join part of it.
  2. Spread the dimensions as evenly as possible between rows and columns. Having 6 dimensions on one row is worse than having 3 on rows and 3 on columns as the cross-joins will generate smaller sets for NON_EMPTY to filter and ultimately will improve performance.
  3. Consider using fewer dimensions – if possible, split the reports across multiple sheets. This is not always possible, but it is better for the users to keep their report-making simple.

In Reporting Services we can write our own query which can be optimised by “cleaning” the cross-join set of empty members in the following manner:

NONEMPTY(NONEMPTY(DimA * DimB) * DimC)

As we cannot do this in Excel, the best way to improve performance is to advise the users against overcomplicating their reports.

Advanced javascript in Reporting Services

Our requirements often demand that we use javascript in Reporting Services to navigate between reports. There are various reasons for using javascript, among others:

  • Pop-ups – when we need to spawn another window for simultaneous viewing of lower-level reports, or for displaying a full list of items (such as lists), which could be limited on our main report
  • Navigating from a Reporting Services report on a PerformancePoint dashboard to a pure Reporting Services report, escaping the PerformancePoint layer
  • Opening a custom html page (with or without passing parameters to it), which can then provide some additional functionality to our report (e.g. comments, editing fields…)
  • Providing a printable pop-up version of our report

To achieve this functionality we need to use the Navigate to URL option in our report Navigation properties and we need to provide a javascript link to our target report.

Through javascript we can pass parameters to the targeted URL and we can also choose how to display the targeted page. For example, to open a simple html page, the javascript code we need to use in our Navigate to URL expression is:
 
="javascript:void(window.open('http://myserver/mypath/myasppage.html', '_blank'))"
 
In this simple example we provide the javascript expression as a simple string without passing any report parameters or fields as page parameters. The result of clicking on our report link will be opening a new pop-up window, which will then display the html page at the specified address.
 
First, let's explore the target javascript parameter. _blank means that every time we click on the link, we will spawn a new pop-up window. If we supply _parent as a value for the target parameter, javascript will pass the URL to the current window and render the html page in the already open browser window, allowing us to replace the content of the current tab/window with new content. Another option is to provide a custom window name:
 
="javascript:void(window.open('http://myserver/mypath/myasppage.html', 'subreport'))"
 
which will result in opening a pop-up which we can reuse afterwards from any javascript link. If we have more than one javascript link on our report and we want to open some or all of those in the same pop-up window, we can name our target windows and then use the name in multiple javascript links, which then render their target pages in the same pop-up.
Furthermore, we can specify other window features in the third argument. An example of using some of these is:
 
="javascript:void(window.open('http://myserver/mypath/myasppage.html', '_blank', 'width=300,height=300,top=300,left=500,toolbar=1,menubar=1,location=1,status=1,scrollbars=1,resizable=1'))"
 
For us as Reporting Services developers the more important ones are the width, height, toolbar, menubar and resizable (not resizeable). The toolbar and menubar have slightly different meanings in different browsers. In IE 6 and IE 7 enabling and disabling (setting them to 0 or 1) these result in different browser elements to get displayed, so testing them should be done in all the browsers our users could use. The resizable property allows/disallows resizing (including maximising) the pop-up window.
 
To form the URL we can either point to Report Manager or to the Report Server directly. Pointing to Report Manager will display the report with the report manager wrapper, which is in most cases not required. Supplying the Report Server URL results in rendering only the actual report in the javascript window. To form the ReportServer URL we need to supply the full path to it and the report. An example of a complete URL is:
http://myserver/ReportServer?/myfolder/myreport
or as a part of a javascript pop-up expression in Reporting Services:
 
="javascript:void(window.open('http://myserver/ReportServer?/myfolder/myreport','_blank'))"
 
It is always a good idea to parametrise the URL and to supply the server and the path as a Reporting Services field or parameter to the expression:
 
="javascript:void(window.open('" & Parameters!ServerAddress.Value & "?" & Parameters!ReportPath.Value & "myreport','_blank'))"
 
which allows us to maintain the server address and report path in our relational back-end, easing the deployment of our solution to different environments without changing all our reports.
 
We can also specify Report Server commands in the URL:
 
="javascript:void(window.open('" & Parameters!ServerAddress.Value & "?" & Parameters!ReportPath.Value & "myreport&rs:Command=render&rc:Parameters=true','_blank'))"
 
 
These two commands make the Report Server render the report without evaluating the item type first (improving performance), and also show the report parameters to the user. There is a thorough article on MSDN showing the various commands that can be passed to the Report Server, so I will not go any further in explaining them in this post.
 
We can also pass report parameters to our target reports:
 
="javascript:void(window.open('" & Parameters!ServerAddress.Value & "?" & Parameters!ReportPath.Value & "myreport&rs:Command=render&rc:Parameters=true&Period=" & Parameters!Period.Value & "','_blank'))"
 
This expression will then pass our Period parameter to the subreport through the URL. We can specify multiple parameters like this:
 
="javascript:void(window.open('" & Parameters!ServerAddress.Value & "?" & Parameters!ReportPath.Value & "myreport&rs:Command=render&rc:Parameters=true&Period=" & Parameters!Period.Value & "&Business_Unit=" & Parameters!Business_Unit.Value & "','_blank'))"
 
concatenating them with the & character. Please note that there is a 255 character restriction on the length of the URL, so multi-value parameters can be problematic to pass if the set of values is too large.
Another point to note here is that we should not pass the UserId of our users as a parameter. Instead of doing this we should always capture the UserId of the person viewing the report with User!UserID.Value in each report, which minimises the security risk of someone gaining access to sensitive report data through passing an edited URL to the browser with someone else’s user id in the parameters section.
As a last point, I would like to address the problem of passing an MDX expression as a parameter value. If we have an MDX value such as:
 
[Business Unit].[Business Unit Name].&[Australia]
 
the & character will be interpreted as a parameter separator and the URL will be parsed with this presumption. To avoid this, we can use the escape() function in javascript to substitute the ampersand character with its URI-encoded value of %26:
 
="javascript:void(window.open('" & Parameters!ServerAddress.Value & "?" & Parameters!ReportPath.Value & "myreport&rs:Command=render&rc:Parameters=true&Period=" & Parameters!Period.Value & "&Business_Unit=' + escape('" & Parameters!Business_Unit.Value & "'),'_blank'))"
 
This way we can pass any MDX or any other special or reserved character through the URL.
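To see what escape() actually does to such a member name, consider this small standalone snippet (outside of Reporting Services; the member name is just a sample):

```javascript
// A sample MDX member name containing the problematic ampersand.
var member = "[Business Unit].[Business Unit Name].&[Australia]";

// escape() percent-encodes the ampersand (and the brackets), so the
// Report Server no longer mistakes it for a parameter separator.
var encoded = escape(member);

console.log(encoded.indexOf("%26") !== -1); // true: & became %26
```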
In conclusion, javascript in Reporting Services widens our set of tools for tackling difficult requirements in a relatively simple way. The combination of the powerful, widely supported javascript language and the report-rendering control that Reporting Services provides through the URL is very potent, and report developers can benefit from exploring both areas.