Enhanced Security and Integration of Microsoft BI Solutions with Kerberos

The following is an overview of an article prepared by Mark Dasco and me. The full article is around 4800 words and is attached to the end of this post. In essence, it is a thorough description of how to implement Kerberos and of the benefits Kerberos provides when implementing Microsoft BI solutions.

Table of Contents

1 Overview
2 The Double Hop
3 The NTLM Protocol
4 The Kerberos Protocol
5 Business Intelligence Case
6 Implementation
  6.1 Considerations
  6.2 Implementation
    6.2.1 Getting Started
    6.2.2 Configure Clients for Kerberos Authentication
    6.2.3 Defining SPNs
    6.2.4 Using Negotiation
    6.2.5 Enable Impersonation and Delegation
  6.3 Checking that it all works
7 Conclusion


Overview

When developing Microsoft Business Intelligence solutions we frequently need to rely on tight security integration between various tools. The NTLM protocol provides enough features for simple implementations, but when we need to deliver enterprise-class solutions we invariably feel constrained by it. With Windows 2000 and later versions Microsoft provides an alternative security protocol – Kerberos – which addresses certain limitations of NTLM and offers improved security and better performance. Implementing Kerberos can be fairly simple or very complex depending on the requirements. Securing a few default server instances with no constraints on their services can be almost trivial, while more specific cases can become a major cause of frustration.

As a base for this study we will examine a specific BI case – a digital dashboard, which involves all layers of the Microsoft BI stack:


  • SQL Server 2005
  • SQL Server Analysis Services
  • SQL Server Reporting Services
  • PerformancePoint 2007 Monitoring and Analytics for building a dashboard
  • SharePoint Server as an organisational portal hosting the dashboard

Furthermore, each of the servers exists in two environments – Development and UAT. We will also show how to implement Kerberos only between the services utilised by these servers, leaving the rest of the domain unaffected and effectively isolating the implementation.

Typically, solutions not configured for Kerberos authentication and delegation fall back to the default NTLM authentication protocol. Whilst NTLM is completely transparent and very easy to use on a Windows domain, it falls short when we need to pass user credentials across several server layers. This is commonly known as the double hop issue. If we depend solely on NTLM for user authentication, passing user credentials to servers on lower levels of our server topology involves including them in our connection strings or passing them programmatically, which is hardly the right choice for an enterprise-grade security framework.
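To make the contrast concrete, here is a minimal sketch in Python with pyodbc, purely illustrative – the server, database and credential names are made up – showing the two approaches: embedding a user name and password in the connection string versus relying on Integrated Windows Authentication, which under Kerberos can flow across the server hops.

```python
# Illustrative only: server, database and user names are hypothetical.
import pyodbc

# NTLM-era workaround: credentials embedded in the connection string.
# Every downstream layer ends up holding a password, which is exactly
# what an enterprise-grade security framework should avoid.
explicit_conn = pyodbc.connect(
    "DRIVER={SQL Server};SERVER=BI-SQL01;DATABASE=SalesDW;"
    "UID=report_user;PWD=SomePassword123"
)

# With Kerberos delegation configured, every layer can simply use
# Integrated Windows Authentication; the caller's identity is delegated
# from server to server instead of being re-supplied at each hop.
trusted_conn = pyodbc.connect(
    "DRIVER={SQL Server};SERVER=BI-SQL01;DATABASE=SalesDW;"
    "Trusted_Connection=yes"
)
```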

On the other hand, solutions which correctly implement Kerberos benefit from cross-server delegation and authentication, allowing the use of Integrated Windows Authentication throughout the whole solution. The ability to capture user credentials on any server is essential if we want to secure and control access to each server independently and minimise the damage resulting from a potential security breach.

Download Full Article by Mark Dasco and Boyan Penev

Post-mortem thoughts on PerformancePoint Planning and the future of Microsoft BI planning and forecasting

As everyone must be well aware by now, Microsoft has discontinued PerformancePoint Planning and has decided to merge Monitoring & Analytics into SharePoint as PerformancePoint Services. Chris Webb and Nick Barclay have already blogged about this and given us some valuable thoughts and explanations on the subject.

In addition to what has already been said, I would like to add that perhaps dumping Planning will not be such a great loss to anyone. Its current market penetration is marginal, and the successful implementations are not that many anyway. I have seen companies consider PP Planning and then abandon it because of the complexities involved, which translate directly into a high implementation cost comparable to that of a custom .NET implementation.

From a simplistic technological point of view, planning and forecasting means allowing users to input data, manipulating that data according to business rules, and then adding it to the BI system so that the past can be analysed and compared with the future. Currently, we can do this either by building a custom application which handles all of this, or by using a third-party application that handles it for us. I have had the chance to be involved in each scenario (once with a team of .NET developers and a few times with Calumo, which allows cube write-back or stored procedure write-back from Excel). The difficulties always come from two things: the knowledge needed to accurately gather requirements, which is obscured by layers of business logic, and the completely different nature of planning and forecasting compared with analytics.

Analytics, or core BI, is based on the presumption that we already have reliable data which our business clients want to make sense of, thus gaining insight into their business. Planning and forecasting, in contrast, also involves allowing those same users to record their thoughts about the business and their projections for the future, and then to analyse them just like their historical data. Therefore, planning and forecasting is inherently more complex than pure analytics.

There is no tool in the Microsoft BI stack which can completely cover the requirements of a typical business planning scenario. PerformancePoint Planning tried to encapsulate the planning logic in a set of models and rules, but it was too complex for both users and developers to understand, implement and then maintain. I have seen a number of successful planning and forecasting implementations with a third-party tool – Calumo. It is a fairly simple application (at least in comparison to PP Planning) which, apart from some quite handy improvements over Excel for report authoring, has the very powerful ability to let users write data straight back to their OLAP source (cube write-back) or to their data warehouse (stored procedure write-back). That is all that is needed for any planning and forecasting project, and Microsoft should really have a look at what their partners are offering as a solution to planning instead of developing monstrosities like PerformancePoint Planning.

Building on top of SQL Server stored procedures and Analysis Services write-back, Microsoft can easily enhance their BI offering. All we need is a way to access this functionality through a front-end tool like Excel, SharePoint or Reporting Services.
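As a rough illustration of how little plumbing stored-procedure write-back actually needs, here is a minimal Python/pyodbc sketch (the server, database and procedure names are hypothetical) that takes a single planning figure entered by a user and hands it to the data warehouse via a write-back stored procedure. A front-end such as Excel, SharePoint or Reporting Services would in principle just issue the same call on the user's behalf.

```python
# Illustrative sketch only: server, database and procedure names are made up.
import pyodbc

def write_back_forecast(cost_centre: str, period: str, amount: float) -> None:
    """Send a single planning figure to the warehouse via a write-back proc."""
    conn = pyodbc.connect(
        "DRIVER={SQL Server};SERVER=BI-SQL01;DATABASE=SalesDW;"
        "Trusted_Connection=yes"
    )
    try:
        cursor = conn.cursor()
        # The stored procedure is assumed to validate the input and insert
        # it into a dedicated write-back fact table.
        cursor.execute(
            "{CALL dbo.usp_WriteBackForecast (?, ?, ?)}",
            cost_centre, period, amount,
        )
        conn.commit()
    finally:
        conn.close()

# Example: a planner enters next quarter's forecast for cost centre 1001.
write_back_forecast("1001", "2010-Q1", 250000.0)
```

Cube write-back works along the same lines, except the statement issued against Analysis Services is an MDX UPDATE CUBE rather than a stored procedure call.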

Note: A couple of techniques which may be useful for planning and forecasting implementations are discussed in these posts:
Spreading Non-Transactional Data Over Time
Moving writeback data in the Fact tables and avoiding problems with changing column names