Is 0.5 = 0.5 in SSIS?

Today I ran into an issue which was causing some peculiar results in my ETL. After approximately an hour of trying to pinpoint the exact problem, I managed to narrow it down to a Sort task. A Data Viewer placed immediately before that task showed me:

ColumnA              ColumnB
BBHI                     0.5
LLHR                     0.5

The Sort did:

  1. Sort by ColumnB in Descending order
  2. Sort by ColumnA in Ascending order

I was surprised to see that the output was:

ColumnA              ColumnB
LLHR                     0.5
BBHI                     0.5

After reversing the sort order of the two columns (first by ColumnA and then by ColumnB), I noticed that it was ColumnB which changed the order of the two rows. The type of that column was Double Precision Float in SSIS and float in SQL Server. Since SQL Server also showed 0.5 for both rows, I did not expect floating-point precision to be the reason for my wrong results. I was wrong: clearly, the 0.5 was not exactly 0.5 in one of the two rows. Luckily, my client advised me that if I multiplied the number by 10,000,000,000 and then rounded to a whole number, I would not lose any information. My fix was simple: a derived column, ColumnB_Int, which did exactly that:

    (DT_I8)ROUND(ColumnB * 10000000000.0, 0)

Then I used that column in the sort instead of the original one. This did the trick.
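
Incidentally, a quick way to confirm that two float values which display identically are in fact different is to compare their exact bit patterns in SQL Server. A small sketch (the table name is illustrative), together with the T-SQL equivalent of the derived column fix:

    -- Two values can both display as 0.5 yet differ in their last bits;
    -- converting to binary(8) exposes the exact stored representation:
    SELECT ColumnA,
           ColumnB,
           CONVERT(binary(8), ColumnB) AS ColumnB_Bits
    FROM dbo.SourceTable
    ORDER BY ColumnB DESC, ColumnA ASC;

    -- The same scaling-and-rounding fix, expressed in T-SQL:
    SELECT ColumnA,
           CAST(ROUND(ColumnB * 10000000000.0, 0) AS bigint) AS ColumnB_Int
    FROM dbo.SourceTable;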

Quick Wins and Quick Losses

I have been wondering of late what a “Quick Win” actually implies and means. In my experience with BI projects, fairly uneducated target consumers and fierce competition very often push companies to deliver quick and dirty solutions, hoping to attract attention and then sell more services. This practice is often referred to as a “Quick Win”. Of course, the intention itself is not bad, but when poorly executed it first wastes the client’s money and time, and then discourages them from pursuing a BI solution any further. In the case of a failure, another term is more appropriate but never used: a “Quick Loss”.

So, what determines the outcome?

1. Scope

Managing the scope is absolutely essential in a Quick Win scenario. We must convince the client that all the advanced functionality can be safely pushed back to the next full-blown release, when we will have the time and money to build it properly. If we extend our Quick Win to build Dynamic Dimension Security, partition our cube, clean up the data, build dimension-managing capabilities (MDS comes to mind), etc., we will most likely fail, or at least jeopardise our chances of success. In this first crucial phase we need to concentrate on the core: building a simple and robust system. Instead of allowing the usual scope creep, we should actually push for the opposite: scope cuts. Of course, this has to be carefully balanced against the actual needs, as cutting too much will leave us with an unusable result.

2. Quality

In my opinion, if we deliver a poor quality solution it will fail, and no attempts to resuscitate it later will have any decent chance of success. So, when we are scoping out our project, we must make sure we have time to build it well. Shortcuts would quite likely force us to scrap it altogether at a later point in time and rebuild it properly. Also, if we build an OLAP solution which is slow and buggy, we will hardly be able to convince our client that the next phase of the project will be any better.

3. Analysis and Design

Yes, it is a Quick Win, and yes, it is a BI solution, but even these (contrary to some opinions) need analysis and design. Spending a bit of time with the business users, the source system and the server engineers can greatly improve the development experience. Without a design phase, it is hard to maintain a strict scope and attain high quality. A brief design document also helps us remember why we have done something the way we have, and decouples us (as developers) from the solution.

4. Task Management

I am not a project manager. However, when working alone on a small project, I find it very useful to track my progress and objectives by building a basic spreadsheet showing Task, Description, Time Allocated, etc. This way I can easily see and explain how my development is going, and ask for more time before I hit a deadline if required. A task sheet also helps me switch between tasks, or allocate them to other developers.

5. Managing Client Expectations

I have heard this phrase many times before, and it has usually been misused. Managing client expectations does not actually mean lying to the clients, nor does it mean promising too much. In my opinion, managing client expectations means exactly what it sounds like: do not get your client excited about what you cannot deliver, and make them expect exactly what you can. It is good to keep clients happy and optimistic about the future, but making them enthusiastic and then crushing their enthusiasm with a dud solution is unprofessional.

This issue has been haunting me for a while. I have definitely not exhausted the topic, and I am sure many developers can add their own thoughts to this list, but I hope I can spare some trouble, or offer some hints, to the less experienced readers of this blog.

Passing unCONSTRAINED Set and Member parameters between reports in Reporting Services

By default, SSRS MDX queries wrap their parameters in the StrToMember or StrToSet functions with a CONSTRAINED flag. However, many developers do not quite know why it is there or what it actually does. Books Online contains these statements:

StrToMember

  • When the CONSTRAINED flag is used, the member name must be directly resolvable to a qualified or unqualified member name. This flag is used to reduce the risk of injection attacks via the specified string. If a string is provided that is not directly resolvable to a qualified or unqualified member name, the following error appears: “The restrictions imposed by the CONSTRAINED flag in the STRTOMEMBER function were violated.”
  • When the CONSTRAINED flag is not used, the specified member can resolve either directly to a member name or can resolve to an MDX expression that resolves to a name.

StrToSet

  • When the CONSTRAINED flag is used, the set specification must contain qualified or unqualified member names or a set of tuples containing qualified or unqualified member names enclosed by braces {}. This flag is used to reduce the risk of injection attacks via the specified string. If a string is provided that is not directly resolvable to qualified or unqualified member names, the following error appears: “The restrictions imposed by the CONSTRAINED flag in the STRTOSET function were violated.”
  • When the CONSTRAINED flag is not used, the specified set specification can resolve to a valid Multidimensional Expressions (MDX) expression that returns a set.

Therefore, if you have a CONSTRAINED flag, you have to specify exact members or sets (e.g. [Date].[Year].[2009], or {[Date].[Year].[2009],[Date].[Year].[2010]}). If you omit the flag, you can pass to StrToMember an expression which evaluates to a member (e.g. [Date].[Year].[Year].Members.Item(0)), and to StrToSet an expression which evaluates to a set (e.g. NONEMPTY([Date].[Year].[Year].Members, [Measures].[Amount])).

The flexibility which removing CONSTRAINED offers can be quite powerful when passing parameters between reports. For example, we may want to pass a parameter to a drill-through report from two different summary reports, where each of them uses a different subset of dimension members, which in turn can be derived by different set expressions.
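
To illustrate, the drill-through dataset can simply omit the flag from the wizard-generated StrToSet call. The sketch below assumes an Adventure Works-style cube; the [Product] hierarchy and cube name are purely illustrative, while @DateSet is the report parameter holding the set expression:

    SELECT
        NON EMPTY { [Measures].[Amount] } ON COLUMNS,
        NON EMPTY { [Product].[Product].[Product].Members } ON ROWS
    FROM [Adventure Works]
    // No CONSTRAINED flag, so the parameter may hold any set expression:
    WHERE ( STRTOSET(@DateSet) )

One summary report can now pass, say, NONEMPTY([Date].[Year].[Year].Members, [Measures].[Amount]) as the parameter value, while the other passes {[Date].[Year].[2009],[Date].[Year].[2010]}.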

The major drawbacks of this approach are the performance hit it can lead to, as well as a possible “MDX injection” vulnerability. Since in most cases we would be using the passed parameters in a subcube expression or on the slicer axis (the WHERE clause), the performance should not suffer as badly as it would if we used them inside a calculation. However, when we need to use a parameter directly in a calculated measure, we are better off avoiding an unCONSTRAINED function.

Therefore, we may instead use SetToStr on the summary reports and pass a String parameter to a CONSTRAINED StrToSet function in the drill-through report. This way we resolve the set expression once and pass it on to the underlying report as a string. We could do that in a calculated measure returning a String, which is then passed on as a Field to the drill-through parameter. However, in the rare case where many rows travel from the SSAS server to the SSRS server, this could be slow.
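
A sketch of that approach (again with illustrative cube and hierarchy names): the summary dataset resolves the set into a string once, and the resulting field feeds the drill-through parameter:

    WITH MEMBER [Measures].[Date Set String] AS
        // Resolve the set expression once and serialise it as a string:
        SETTOSTR( NONEMPTY( [Date].[Year].[Year].Members, [Measures].[Amount] ) )
    SELECT
        { [Measures].[Amount], [Measures].[Date Set String] } ON COLUMNS,
        NON EMPTY { [Product].[Product].[Product].Members } ON ROWS
    FROM [Adventure Works]

The string it produces is a plain list of members in braces, so the drill-through dataset can keep the safer STRTOSET(@DateSet, CONSTRAINED) call.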

So, whether we use StrToSet without a CONSTRAINED flag, or a String parameter constructed by a SetToStr function, depends on the actual scenario; but it is good to have both options in our arsenal of tools and techniques when we need to implement some not-quite-standard piece of functionality.

Custom Dates for an SSIS SCD Task

Just last weekend I implemented a number of Slowly Changing Dimensions in a SQL Server 2005 based project. For the large ones I wrote some SQL code, but for the smaller dimensions I decided to just use the SSIS SCD task. Since the wizard does most of the work, there is not much beyond it that I had done with that component in the past. This time, though, we decided to have custom default EffectiveTo dates for the dimensions: 9999-12-31 instead of the SCD task’s default of NULL. The wizard, however, is not customisable, and some manual tweaks need to be done before it can handle custom dates. So, I decided to share these, since there is not much around on this topic (or at least I could not find any particular references). There is a customisable component on CodePlex, the Kimball Method SCD Component; however, I could not use it, as for reasons unknown to me no custom tools could be used on this project.

I created a quick mock-up of a dimension table for demonstration purposes:
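
Something along these lines; the table and attribute names below are hypothetical, only the two effective date columns match the rest of this post:

    -- Hypothetical mock-up: a business key, one historical and one changing
    -- attribute, and the two effective date columns. The default constraint
    -- is what gives newly inserted rows their 9999-12-31 EffectiveTo date:
    CREATE TABLE dbo.DimCustomer (
        CustomerKey         int IDENTITY(1, 1) NOT NULL PRIMARY KEY,
        CustomerBusinessKey int           NOT NULL,
        City                nvarchar(50)  NULL,     -- historical attribute
        PhoneNumber         nvarchar(20)  NULL,     -- changing attribute
        EffectiveFromDateId datetime      NOT NULL,
        EffectiveToDateId   datetime      NOT NULL
            CONSTRAINT DF_DimCustomer_EffectiveTo DEFAULT ('9999-12-31')
    );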


Then I created an SCD task in SSIS with one historical and one changing attribute. For the Start and End dates I used my EffectiveFromDateId and EffectiveToDateId columns and got them populated with [System::StartTime]. Unfortunately, the SCD task does not allow specifying a custom value for the default To date and always uses NULL. To change this, we have to modify three data flow components: the Slowly Changing Dimension transformation itself and the two OLE DB Commands which the wizard generates.

We can modify these through the Advanced Editor (right-click on each component). For the actual Slowly Changing Dimension task, we have to make the following change:
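
The SCD transform exposes a CurrentRowWhere custom property (on the Component Properties tab of the Advanced Editor), which holds the predicate used to identify the current row for a business key. Assuming the wizard generated the usual NULL-based predicate, it becomes:

    -- CurrentRowWhere, as generated by the wizard:
    [EffectiveToDateId] IS NULL

    -- CurrentRowWhere, after the change:
    [EffectiveToDateId] = '9999-12-31'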

Then we also have to modify the SQL statements of the two OLE DB Commands (again through the Advanced Editor). For the Changing Attributes Updates Output:
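
The wizard-generated UPDATE locates the current row with an IS NULL predicate, and only that predicate needs to change. A sketch against the hypothetical table above (assuming outdated records are not being updated, so the current-row predicate is present):

    -- SqlCommand of the OLE DB Command, as generated:
    UPDATE [dbo].[DimCustomer]
    SET [PhoneNumber] = ?
    WHERE [CustomerBusinessKey] = ? AND [EffectiveToDateId] IS NULL

    -- and after the change:
    UPDATE [dbo].[DimCustomer]
    SET [PhoneNumber] = ?
    WHERE [CustomerBusinessKey] = ? AND [EffectiveToDateId] = '9999-12-31'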

And a similar change for the Historical Attributes Inserts Output:
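
Here the wizard expires the previous version of the row by setting its end date, so the same predicate swap applies (again a sketch with hypothetical names):

    -- SqlCommand of the OLE DB Command that expires the old row, as generated:
    UPDATE [dbo].[DimCustomer]
    SET [EffectiveToDateId] = ?
    WHERE [CustomerBusinessKey] = ? AND [EffectiveToDateId] IS NULL

    -- and after the change:
    UPDATE [dbo].[DimCustomer]
    SET [EffectiveToDateId] = ?
    WHERE [CustomerBusinessKey] = ? AND [EffectiveToDateId] = '9999-12-31'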

After applying these three changes, we are ready to run the task.

As we have three new rows, they get inserted into the target dimension table. Since they are all active, their EffectiveTo dates get the default value of 9999-12-31.

Of course, if we decide to change anything through the SSIS SCD wizard, all of these modifications will be lost, and we will have to redo them once again…