Source Control for Database Development with SSDT

I had never had the chance to use source control for database development before I started working on my current project. I have used TFS and Subversion for SSAS, SSRS and SSIS projects in the past, but since it is very difficult to manually edit the rdl, dtsx and SSAS-related files, TFS/Subversion took the role of a glorified storage location with some version control capabilities.

With SSDT we can now enjoy the full set of capabilities of a source control system. This is by far the most compelling reason for me to use SSDT. Sure, there are other very nice features which make SSDT a much better IDE than the plain old SQL Server Management Studio so many SQL Server developers use: my top 3 would be dependency/reference checking, better IntelliSense and schema compare (which I used to do with RedGate's SQL Schema Compare in the past). But source control is not just about convenience – it makes you and your team more productive and reduces stress levels (yeah, it does!). T-SQL is much like application code – it is easy for a developer to modify, it is readable and it is merge-able. So, it is a prime candidate for being managed with a source control system.

SSDT is free. There are absolutely no drawbacks to using it in a team database development environment, and I would strongly encourage anyone who has not tried it to do so. The initial investment in learning how to use it pays off handsomely in the longer term. Things like _old suffixes on tables and stored procedures become unnecessary; we no longer have to worry about deleting a piece of code and losing it forever, or take daily backups just so we don't lose code – in fact, we stop worrying about losing code altogether. It will be there forever and we can track its progress (we can also see who breaks it). I'm sure most database developers are well aware of the benefits of source control, but looking back, my projects always went ahead without SSDT because the perceived complexity of using it outweighed the potential benefits.

It is not hard. All you do is create a database project and start adding code in the form of .sql CREATE scripts. And that's about it. Your project is configured to deploy to a SQL Server instance. When you build the solution, the output is a dacpac; when you deploy it to an instance, the dacpac gets compared to what is already there and an incremental update is applied to the target environment. If you add a non-nullable column to a table which already has some data in it, you will get a deployment failure – but try doing that on a live database and you'll hit the same issue, right? And since you are in SSDT, if you change the name of a table you'll get errors on all objects referencing it (like stored procedures) and your solution won't build. Therefore, using SSDT prevents you from breaking your solution's consistency. You can also refactor code – a right-click on a function/stored procedure reference allows you to change its name across the whole solution. All this plus source control. Still not convinced? Well, you may need to go through a few more database development projects (especially in a team environment), get pissed off with your teammates, lose some code and spend a few hours chasing code references to realise how important and empowering SSDT can be.
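
To give an idea of what goes into such a project, here is a minimal sketch of two CREATE scripts (the table and procedure names are made up for illustration; in SSDT each object typically lives in its own .sql file):

CREATE TABLE dbo.Customer
(
    Customer_Id   int           NOT NULL PRIMARY KEY,
    Customer_Name nvarchar(100) NOT NULL
);

CREATE PROCEDURE dbo.usp_Get_Customer
    @Customer_Id int
AS
BEGIN
    SET NOCOUNT ON;
    -- SSDT checks this reference at build time, so renaming dbo.Customer
    -- without updating this procedure breaks the build instead of the live database
    SELECT Customer_Id, Customer_Name
    FROM dbo.Customer
    WHERE Customer_Id = @Customer_Id;
END;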

If your whole team is not on the same page, you can use SSDT solo. Import the db in SSDT and off you go – now you can tell everyone how they break your project all the time with changes which are not properly propagated across the solution. You can also import all new changes applied to the live db through the handy schema compare (SQL toolbar item). Just compare the live db to your project and it will show you all new objects, as well as all changes to existing objects in your solution. Applying these will effectively synchronise your solution with the new stuff other developers apply to the live db.

But don’t just take my word for it – download SSDT for free from: http://msdn.microsoft.com/en-gb/data/hh297027 and give it a good go before you decide that you are too old for it!

Oh, and if you do think it's too much to handle, or if you would like to get your SSDT techniques honed a little and you live in London, I'd recommend either coming to work on my current project, or letting your boss know how great SSDT is and how he/she can empower your team by splashing a little on:

http://www.technitrain.com/coursedetail.php?c=28&trackingcode=CWB

Number of Weekdays Between Two Dates

There was an old post here describing some T-SQL code I wrote for finding the number of weekdays between two dates. It was working fine, so if you have implemented it you have not done anything wrong. However, Jeff Moden from SQL Server Central wrote a post a while ago about the same problem and his implementation is a bit cleaner, so I would consider it better than mine. Here is the link:

http://www.sqlservercentral.com/articles/Advanced+Querying/calculatingworkdays/1660/

Using the DSV to its Full Potential

The Data Source View (DSV) in Analysis Services is a very powerful abstraction of the data source and it can help us handle some scenarios in an easy and clean way. Many times we look for MDX or programmatic solutions to problems which can best be tackled in our data. While for complex tasks we are better off extending the ETL process, some simple ones can and should be implemented in the DSV.

 

As an introduction to the topic I would like to explain briefly what the DSV actually is. It can be conceptualised as a database view on top of the data source. By default, all tables we need for building the Analysis Services database (typically dimensions and facts) appear in the DSV as table bindings (exactly as if we did a SELECT * FROM Table). If we have no foreign keys defined in our database, SSAS will not show us the relationships in the DSV. However, we can define logical relationships in the DSV, thus connecting the tables on related columns, which are then used for automatically determining how the dimensions relate to the measure groups.

 

There are two important ways to modify the DSV, which allow us to add more columns to the existing tables and to modify the way the existing columns are shown:

Named Queries

 

 

 

If we right-click on a table in the DSV, we can choose to replace the table with a Named Query. A Named Query is essentially a T-SQL statement, equivalent to a database view definition. By utilising Named Queries we can alter the way we see the tables and their columns in SSAS – for example, we could concatenate columns, implement CASE logic, and so on.

 

Named Calculations

 

A Named Calculation is a SQL expression which adds a column to a table without modifying the table binding. It gives us an easy way to define a new column without changing the whole query. The expression defining the column is written in T-SQL and it behaves the same way as a new column in a Named Query (or a SELECT statement). If we just want to add one more column (e.g. a Display Order, a Code+Description concatenation, etc.), we can simply define a Named Calculation. Also, as the name suggests, Named Calculations are commonly used for defining a leaf-level calculation without replacing a large fact table's binding with a Named Query.

 

The column we define here appears both in the DSV table and in the Dimension Designer window.

 

These two DSV features can be used in many scenarios. Most importantly, there are a few where they yield better performance, faster development and easier maintenance:

Leaf-level calculations

If we have the common requirement to perform a calculation at the leaf level and then aggregate the result up the hierarchy, as opposed to aggregating first and then calculating, the best place to do this is in a SQL statement on the fact table. Alternatively, we can do it in an MDX statement:

 SUM(DESCENDANTS(Dim.CurrentMember,,LEAVES), MeasureCalc)

However, this comes at a price. Since SSAS has to do the calculation for each leaf and then sum the results up the hierarchy, the query can take a long time. Also, SSAS cannot use pre-processed aggregations, so the calculations are done at execution time. To avoid this we can add a new column to the fact table and do the calculation there (in SQL), using the column as a new measure in the cube, which SSAS can then aggregate like any other measure. The performance gain is usually substantial, and using a Named Query or a Named Calculation should always be the preferred option.
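
As an illustration, a Named Calculation on a hypothetical FactSales table (the column names are made up) can compute the leaf-level value once, in SQL, leaving SSAS to simply aggregate it; the equivalent Named Query is shown as well:

-- Named Calculation expression added to FactSales in the DSV:
Quantity * UnitPrice * (1 - DiscountPct)

-- The equivalent Named Query, replacing the table binding:
SELECT
      DateKey
    , ProductKey
    , Quantity
    , UnitPrice
    , DiscountPct
    , Quantity * UnitPrice * (1 - DiscountPct) AS NetAmount
FROM dbo.FactSales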

Description Attributes

Often we need to concatenate different dimension attributes into a Description attribute, which we can use while slicing the cube or when providing reports from the SSAS database. A very easy way to achieve this is to concatenate the columns we need into a new column in the dimension table in the DSV, which we can then expose as a new attribute in the dimension. A task such as concatenating an Account Code and an Account Description into an Account Long Description (i.e. [Account Code] + '-' + [Account Description]) becomes very easy to implement within the DSV without modifying the ETL or any tables.
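
As a sketch (with assumed column names, and assuming both columns are character types), the Named Calculation expression on the DimAccount table would simply be:

-- Named Calculation expression on DimAccount, exposed as an
-- Account Long Description attribute in the dimension
-- (ISNULL guards against NULLs breaking the concatenation):
ISNULL([Account Code], '') + '-' + ISNULL([Account Description], '')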

Composite Keys

Sometimes we need to build unique keys for an attribute column in a dimension. A good example is a Date dimension, which does not have unique keys for non-leaf levels such as Month. Often developers use a Month key of 1, 2, 3 … 12. This does not make a good Month key in SSAS, as it is not unique across higher levels such as Year, Quarter, etc. There are a number of ways to tackle this common scenario. While the recommended approach would be to build a Year-Quarter-Month concatenation as the Month key in the dimension table, we can also achieve this by selecting all of the relevant columns as key columns for the attribute in the dimension attribute properties. However, this gives us a composite (multi-part) key in MDX, which can sometimes be undesirable. A yet simpler and cleaner solution is to concatenate the relevant columns in the DSV by using a Named Query. Instead of the typical

SELECT col1, col2, ..., MonthKey, colx, coly, colz FROM DimDate

we can write

SELECT col1, col2, ..., YearKey + QuarterKey + MonthKey AS MonthKey, colx, coly, colz FROM DimDate

This way we can use the MonthKey column directly as a key for our Month attribute.
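
One caveat: if YearKey, QuarterKey and MonthKey are integer columns, the + above adds them arithmetically rather than concatenating them, so a string-based variant is safer. A sketch with assumed column names:

SELECT
      col1
    , col2
    -- builds keys such as '2009-1-02', unique across years and quarters
    , CONVERT(varchar(4), YearKey) + '-'
      + CONVERT(varchar(2), QuarterKey) + '-'
      + RIGHT('0' + CONVERT(varchar(2), MonthKey), 2) AS MonthKey
    , colx
    , coly
    , colz
FROM DimDate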

While this is useful for a Date dimension, it can also be useful for any other composite key definition in our dimensions.

Other possible applications of DSV Named Queries and Named Calculations are the implementation of

  • A Sort Order attribute, for cases when we need a custom sort order for dimension attributes
  • Restricting the data which comes into the cube dynamically, based on a certain condition (think of a Date dimension which includes only relevant periods)
  • Combining tables through a SQL join
  • Replacing 0s with NULLs for our measures (the opposite can be done automatically in SSAS)
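
Taking the last bullet as an example, the Named Calculation for a measure column could be as small as this (the Amount column name is assumed):

-- Named Calculation expression on the fact table:
-- returns NULL when Amount is 0, otherwise the original value,
-- so empty cells stay empty instead of showing zeros
NULLIF(Amount, 0)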

Basically, in a DSV we can “correct” our data to make it suitable for our cube without changing the ETL.

Last but not least, we can also transform tables to conform to a star-schema-like design. If we want to show a proof of concept on top of a normalized OLTP database, we can avoid the ETL complexities, as well as building a datamart, and use SQL to join/split tables into dimension and fact tables suitable for cube development. While this can also work in post-POC scenarios, it is better to take a cautious approach, as there are many cases where it would either not work or would be too slow.

And a word of warning – your DSV could become slow because of over-use of complex Named Queries. This can be painful when minimising cube processing time is crucial, or when the DSV starts timing out and queries take hours to execute. Luckily, in most cases we can simply move these large queries further back – to the ETL, where we have more time and better tools (e.g. SSIS).

Passing database names to SSIS stored procedures

In the rare cases when we use dynamic SQL and need a database name in our code, we are better off avoiding hard-coding it. Unfortunately, I could not find an easy way to access a connection manager's database name, and on my current project the catalog name is not in the SSIS configurations XML file. Therefore, I had to resort to a little trick to pull the database name out and pass it to a stored procedure. In brief, we can do the following:

1. Create a user variable database_name

2. Create an Execute SQL Task using the connection manager we want to get the database name from, which does:

 SELECT db_name() AS database_name
 
3. Map the Single Row result set to our database_name variable 

4. Place the task created in the previous step before any components which would be using the variable.

5. Pass the variable to our dynamic SQL stored procedure
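
For completeness, here is a sketch of what the receiving stored procedure might look like (the procedure and table names are made up); the database name arrives as a parameter and is quoted before being embedded in the dynamic SQL:

CREATE PROCEDURE dbo.usp_Clean_Staging
    @database_name sysname
AS
BEGIN
    SET NOCOUNT ON;

    DECLARE @sql nvarchar(4000);

    -- QUOTENAME protects against unusual database names breaking the statement
    SET @sql = N'TRUNCATE TABLE ' + QUOTENAME(@database_name) + N'.dbo.Staging_Cache;';

    EXEC sp_executesql @sql;
END;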

 

 

There we go – the stored procedure now gets the database name from the package at run time – a bit better than just hard-coding it.

SQL Server DBMS Top 1 Wish List

As an addition to Teo Lachev’s Top 10 Wishlists (SSAS and SSRS), I would like to contribute only 1 item to a possible SQL Server DBMS wishlist:

1. Source Control.

Not SourceSafe-style source control, but an automated version available out of the box, not relying on developers to check code in and out. It should track changes to the code as they are made, and a full version history should be available directly in the DBMS. It should not be too hard – after all, there is a nice database available, which can store code with its version numbers just like anything else.

This would make a lot of developers’ lives a bit less frustrating.

OK, a SQL code “beautifier” would also be nice, but it is not all that important…

Vote on Connect

Moving writeback data in the Fact tables and avoiding problems with changing column names

While writeback functionality changed significantly in SQL Server Analysis Services 2008, where writeback values can be stored in the OLAP cube, in SSAS 2005 the writeback values are stored in a relational table on the same server as the fact tables. When writeback is enabled for a partition, a new table is automatically created, bearing a WriteTable prefix. Its structure is fairly simple: it contains a column for each measure and dimension, plus two audit fields.

The ROLAP nature of the writeback table makes it inefficient for storing a large number of writeback records, and it is sometimes necessary to consolidate the data it contains with the fact table.

Normally we can write a stored procedure to do this for us. Because the values in the WriteTable are deltas, there is a new row for each user change. For example, if we change 0 to 5, there will be one row in the writeback table showing 5 as the measure value. If we then change the new value of 5 to 2, there will be a new row with a measure value of -3. Therefore, it can be more efficient to perform a quick aggregation of the values in the WriteTable while moving them into the fact table. However, this could be contrary to our requirements if we want to be able to trace all data changes.
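
If we do not need to keep each individual change, the consolidation can collapse the deltas to one row per dimension key combination before the insert – a sketch using the suffixed column names from the example WriteTable shown further down (they would need adjusting to the actual table):

SELECT
      Indicator_Id_3
    , Region_Id_4
    , Scenario_Id_5
    , Date_Id_6
    , SUM(High_0)   AS High
    , SUM(Low_1)    AS Low
    , SUM(Amount_2) AS Amount
FROM [WriteTable_Indicator Amount]
GROUP BY Indicator_Id_3, Region_Id_4, Scenario_Id_5, Date_Id_6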

In either case, we end up with a number of new rows which we can insert into our fact table, after which we can truncate the WriteTable and process the cube. There is a potential pitfall here. If we do not set the processing settings up properly, we could destroy the WriteTable and have it re-created, which in turn introduces another pitfall – SSAS may change our column suffixes. For example, if we have a fact table with the following definition:

CREATE TABLE [Fact_IndicatorAmount](
[Fact_IndicatorAmount_Id] [int],
[ETL_Date] [timestamp],
[Indicator_Id] [int],
[Region_Id] [int],
[Scenario_Id] [int],
[Date_Id] [datetime],
[High] [float],
[Low] [float],
[Amount] [float]
)

The WriteTable may be created like this:

CREATE TABLE [WriteTable_Indicator Amount](
[High_0] [float],
[Low_1] [float],
[Amount_2] [float],
[Indicator_Id_3] [int],
[Region_Id_4] [int],
[Scenario_Id_5] [int],
[Date_Id_6] [datetime],
[MS_AUDIT_TIME_8] [datetime],
[MS_AUDIT_USER_9] [nvarchar](255)
)

Note how the column names are the same as the fact table column names, but are suffixed with _0, _1, etc. Unfortunately, these suffixes may change when the WriteTable is re-created – SSAS tends to assign them unpredictably. If that happens, our consolidation stored procedures will break.

The obvious step to avoid this is to set up our cube processing correctly, making sure that the WriteTable does not get re-created. To do this, we can select Use Existing writeback table in the Change Settings… dialog, which allows us to change the cube processing settings.


We can also script this action and use it in our automated cube processing SQL Server job.

Even though this is a relatively intuitive and simple solution, I have always had problems with it because of manual cube processing performed by power users, who destroy the writeback data together with the WriteTable structure and, following from that, the code in my stored procedures.

By using some dynamic SQL and the SQL Server system tables, we can write a stored procedure which does not depend on the suffixes of the column names in the writeback table:

CREATE PROCEDURE [usp_Consolidate_WriteBack_to_Facts]
AS
BEGIN
SET NOCOUNT ON;

DECLARE @Column_High nvarchar(50),
@Column_Low nvarchar(50),
@Column_Amount nvarchar(50),
@Column_Indicator nvarchar(50),
@Column_Region nvarchar(50),
@Column_Scenario nvarchar(50),
@Column_Time nvarchar(50)

SET @Column_High = (
SELECT syscolumns.name
FROM sysobjects
INNER JOIN syscolumns
ON sysobjects.id = syscolumns.id
WHERE sysobjects.xtype='U'
AND sysobjects.name like 'Write%'
AND syscolumns.name like 'High%'
)

SET @Column_Low = (
SELECT syscolumns.name
FROM sysobjects
INNER JOIN syscolumns
ON sysobjects.id = syscolumns.id
WHERE sysobjects.xtype='U'
AND sysobjects.name like 'Write%'
AND syscolumns.name like 'Low%'
)

SET @Column_Amount = (
SELECT syscolumns.name
FROM sysobjects
INNER JOIN syscolumns
ON sysobjects.id = syscolumns.id
WHERE sysobjects.xtype='U'
AND sysobjects.name like 'Write%'
AND syscolumns.name like 'Amount%'
)

SET @Column_Indicator = (
SELECT syscolumns.name
FROM sysobjects
INNER JOIN syscolumns
ON sysobjects.id = syscolumns.id
WHERE sysobjects.xtype='U'
AND sysobjects.name like 'Write%'
AND syscolumns.name like 'Indicator%'
)

SET @Column_Region = (
SELECT syscolumns.name
FROM sysobjects
INNER JOIN syscolumns
ON sysobjects.id = syscolumns.id
WHERE sysobjects.xtype='U'
AND sysobjects.name like 'Write%'
AND syscolumns.name like 'Region%'
)

SET @Column_Scenario = (
SELECT syscolumns.name
FROM sysobjects
INNER JOIN syscolumns
ON sysobjects.id = syscolumns.id
WHERE sysobjects.xtype='U'
AND sysobjects.name like 'Write%'
AND syscolumns.name like 'Scenario%'
)

SET @Column_Time = (
SELECT syscolumns.name
FROM sysobjects
INNER JOIN syscolumns
ON sysobjects.id = syscolumns.id
WHERE sysobjects.xtype='U'
AND sysobjects.name like 'Write%'
AND syscolumns.name like 'Date%'
)

DECLARE @SQL_Command nvarchar(4000)
SET @SQL_Command = ('
INSERT INTO [Fact_IndicatorAmount]
([High]
,[Low]
,[Amount]
,[Indicator_Id]
,[Region_Id]
,[Scenario_Id]
,[Date_Id])
SELECT '+ @Column_High +'
,'+ @Column_Low +'
,'+ @Column_Amount +'
,'+ @Column_Indicator +'
,'+ @Column_Region +'
,'+ @Column_Scenario +'
,'+ @Column_Time +'
FROM [WriteTable_Indicator Amount]')

EXEC (@SQL_Command)

TRUNCATE TABLE [WriteTable_Indicator Amount]
END

What we are effectively doing here is getting the column names from the WriteTable and then constructing an INSERT statement based on them. It is dangerous to automate this further with a while loop, as the actual column names in the WriteTable can differ from the ones in the fact table. This can happen when the dimension table key names are different from the fact table key names.

Moving writeback rows through this stored procedure ensures that even if the WriteTable for a partition is re-created for some reason our code can handle it.

Spreading Non-Transactional Data Along Time

In some cases we need to be able to analyse non-transactional data for discrete periods along a time dimension. An example of such a case is a collection of invoices, which have start and end dates for a period, but are not otherwise connected to a time axis. We may have such invoices with these properties:

Invoice Id
Start Date
End Date
Amount

One of the invoices may be:

Invoice Id: 34821432
Start Date: 2008-10-15
End Date: 2009-03-14
Amount: 15,000.00

and another one:

Invoice Id: 34934221
Start Date: 2008-12-01
End Date: 2009-05-30
Amount: 6,500.00

If the company we are building this for deducts a daily fee for its services (e.g. funds management, software maintenance, etc.), we may need to spread the amount over smaller periods, such as days or months, and then aggregate the smaller amounts along a time dimension.

To do this we have to first store the data in a relational table and then write some SQL to do the trick for us.

First, we should create a table-valued function which returns all the dates at a specified granularity, such as days, from the Start to the End date, together with the count of all the periods in between (in our case this is a count of days):

CREATE FUNCTION udf_Create_Daily_Date_Spread
(   
      @Start_Date datetime
    , @End_Date datetime
)
RETURNS @Daily_Spread TABLE (
      Date_Id datetime
    , Count_Of_Days int
)
AS
BEGIN
    DECLARE @Count int
    SET @Count = 0

    IF @Start_Date >= @End_Date
        RETURN

    WHILE @Start_Date <= @End_Date
    BEGIN
        INSERT INTO @Daily_Spread(Date_Id)
        SELECT @Start_Date

        SET @Start_Date = DATEADD(d, 1,@Start_Date)
        SET @Count = @Count + 1
    END

    UPDATE @Daily_Spread
    SET   Count_Of_Days = @Count

    RETURN
END

Having created this function, we can use the CROSS APPLY operator to create the even spread:

SELECT  inv.Invoice_Id
      , inv.Start_Date
      , inv.End_Date
      , cdds.Date_Id
      , inv.Amount / cdds.Count_Of_Days AS Daily_Amount
FROM Invoice_Source inv
CROSS APPLY udf_Create_Daily_Date_Spread(inv.Start_Date, inv.End_Date) cdds

After running the sample data through this code, we get an even spread for both invoices and we can attach a time dimension to them. The first invoice, for example, spans 151 days, so each day carries 15,000.00 / 151 ≈ 99.34.

Even though the data size may explode after such a manipulation, Analysis Services handles even the largest sets of data well. If storage is a problem, we can always choose to break our data down into fewer, coarser periods – weeks or months instead of days.

Ordering Dimensions: Recursion vs Loops in T-SQL

Recently I had to build a function, which returned dimension members from a parent-child relational table. I needed to be able to order the members hierarchically and I decided to use a recursive CTE call.
Usually, we have a parent-child dimension table with structure similar to this:
Dim_Business
Business_Skey int
Parent_Business_Skey int
 
with a root node with Skey of -3 and Parent_Business_Skey of -3.
Let’s assume we have a hierarchy like this:
All (Id = -3, P_ID = -3)
-GLOBAL (Id = 1, P_ID = -3)
--Europe (Id = 2, P_ID = 1)
---UK (Id = 3, P_ID = 2)
---France (Id = 4, P_ID = 2)
---Spain (Id = 5, P_ID = 2)
--North America (Id = 6, P_ID = 1)
---USA (Id = 7, P_ID = 6)
---Canada (Id = 8, P_ID = 6)
---Mexico (Id = 9, P_ID = 6)
 
If we do something like:
SELECT Business_Skey
FROM Dim_Business
ORDER BY Parent_Business_Skey ASC, Business_Skey ASC
 
We will get:
-3 (All)
1 (GLOBAL)
2 (Europe)
6 (North America)
3 (UK)
4 (France)
5 (Spain)
7 (USA)
8 (Canada)
9 (Mexico)
Obviously, this ordering is incorrect, because we want to see the leaf nodes under their respective parents.
We can instead build an ordering column recursively, concatenating the parent ids:
WITH b_ord(bid, bord)
AS
(
SELECT  Business_Skey AS bid
 , CONVERT(nvarchar(1000), Business_Skey) AS bord
FROM Dim_Business
WHERE Business_Skey = -3

UNION ALL

SELECT  Business_Skey AS bid
 , CONVERT(nvarchar(1000), bord + '|' + CONVERT(nvarchar, Business_Skey))
FROM Dim_Business db
 INNER JOIN b_ord bo
 ON db.Parent_Business_Skey = bo.bid
WHERE db.Business_Skey <> bo.bid
)
SELECT *
FROM b_ord
ORDER BY bord ASC
 
The result of the CTE query is:
-3 -3 (All)
1 -3|1 (GLOBAL)
2 -3|1|2 (Europe)
3 -3|1|2|3 (UK)
4 -3|1|2|4 (France)
5 -3|1|2|5 (Spain)
6 -3|1|6 (North America)
7 -3|1|6|7 (USA)
8 -3|1|6|8 (Canada)
9 -3|1|6|9 (Mexico)
 
and the order is correct.
 
Because the code needed to go into a function invoked by a number of stored procedures, a .NET application and various reports, I needed it to be quick and light. As some dimensions had a large number of members (50,000+), which could grow over time, the code needed to be implemented carefully. So, I decided to compare the recursive CTE function to a WHILE loop and temporary table implementation:
DECLARE @c int
DECLARE @num_of_nodes int

SET @num_of_nodes = (SELECT  COUNT(*) FROM Dim_Business)

CREATE TABLE #order(
  skey int
 , ord nvarchar(1000)
 , lvl int
)

INSERT INTO #order
SELECT  Business_Skey
 , CONVERT(nvarchar(1000), Business_Skey)
 , 1
FROM Dim_Business
WHERE Business_Skey = -3

SET @c = 2

WHILE @c > 0
BEGIN
 INSERT INTO #order
 SELECT  Business_Skey
 , CONVERT(nvarchar(1000), ord + '|' + CONVERT(nvarchar, Business_Skey))
 , @c
 FROM Dim_Business db
 INNER JOIN #order o
 ON db.Parent_Business_Skey = o.skey
 AND o.lvl = @c - 1
 WHERE db.Business_Skey <> o.skey

 SET @c = @c + 1
 
 IF (SELECT COUNT(*) FROM #order) = @num_of_nodes
 SET @c = 0
END

SELECT  skey AS bid
 , ord AS bord
FROM #order
ORDER BY ord ASC

DROP TABLE #order
 
After comparing the results in Client Statistics and the IO reads, the first, recursive query performs more than 20% worse than the WHILE loop query. It also trails the non-recursive query in the counts of logical reads, read-ahead reads and scans.
It seems that in SQL Server 2005 a recursive CTE does not perform as well as coding the equivalent set-based operations in a WHILE loop.

Fun with T-SQL

As a tribute to some fairly restrictive .NET design I once fell victim to, I wrote this piece of code:

Use [Database]
Select [Select]
From [From]
Where [Where] = 'Where'
And [nvarchar(50)] = 'nvarchar(50)'
And [int] = 1
The CREATE and INSERT statements for the above command:
 
CREATE TABLE [dbo].[From](
 [Id] [int] NOT NULL,
 [Where] [nvarchar](50) NOT NULL,
 [On] [nvarchar](50) NOT NULL,
 [Select] [nvarchar](50) NOT NULL,
 [From] [nvarchar](50) NOT NULL,
 [int] [int] NOT NULL,
 [nvarchar(50)] [nvarchar](50) NOT NULL
)

INSERT INTO [From] (  [Id]
, [Where]
, [On]
, [Select]
, [From]
, [int]
, [nvarchar(50)])
SELECT 1, 'Where', 'On', ':-)', 'From', 1, 'nvarchar(50)'

Enjoy!

  
 

EXECUTE AS someone else – impersonation in stored procedures

Recently, I had to implement a stored procedure which is executed under a service account context. It had to be invoked by an Informatica workflow, and its purpose was to clean a cache table and to update all statistics on a database by invoking sp_updatestats. As the Informatica user did not have enough permissions to perform these tasks, I had to use EXECUTE AS in the stored procedure. The EXECUTE AS clause can also be used the other way around – to limit the execution scope by running the stored procedure under an account with only limited permissions:
CREATE PROCEDURE [dbo].[usp_Update_Statistics]
WITH EXECUTE AS 'some_user' -- a database user with sufficient permissions
AS
BEGIN
 -- clear the cache table here
 EXEC sp_updatestats
END
Immediately I encountered various problems with the execution of the stored procedure. After some research it turned out that a few preconditions need to be satisfied before it can run:
1. The TRUSTWORTHY setting on the database needs to be turned on:
     ALTER DATABASE database SET TRUSTWORTHY ON;
2. The owner of the context database must be the same as the owner of the master database. To set this property we need to:
   2.1 Find who owns master:
           exec sp_helpdb
   2.2 Set the context database owner to that same owner:
           ALTER AUTHORIZATION ON DATABASE::database TO user
3. The account under which we EXECUTE AS needs to be a database/server principal. The database and server principals can be found in the sys.database_principals and sys.server_principals catalog views.
4. The executing account needs to be granted impersonate permissions on the EXECUTE AS user:
     GRANT IMPERSONATE ON USER::[user in EXECUTE AS] TO [user executing the sproc]
5. If a server login is specified (instead of a database user), it needs to be mapped to a database user.
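
Putting the preconditions together, a minimal setup sketch (MyDatabase, dbo_owner, proc_user and informatica_user are placeholder names) might look like this:

-- 1. Allow impersonation to cross the database boundary
ALTER DATABASE MyDatabase SET TRUSTWORTHY ON;

-- 2. Make the database owner match the owner of master
ALTER AUTHORIZATION ON DATABASE::MyDatabase TO dbo_owner;

-- 4. Allow the calling user to impersonate the EXECUTE AS user
GRANT IMPERSONATE ON USER::proc_user TO informatica_user;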
Also, if SQL Server is running under a local account/service it is not possible to use the EXECUTE AS statement.
The same statement can be used to execute SQL Server Agent jobs under a different context – something particularly useful when trying to run them through Integration Services.
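
For example, a T-SQL job step (or an Execute SQL Task in SSIS) could switch context explicitly – a sketch with a placeholder login name:

EXECUTE AS LOGIN = 'DOMAIN\ServiceAccount'; -- placeholder login with the required permissions
EXEC dbo.usp_Update_Statistics;
REVERT; -- switch back to the original context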