Write a Variable To a File Using SSIS

Problem

I have an SSIS package. When it fails, I want to write a custom message, including the package name and the error description, to a text file.

Solution

One way to accomplish this is to declare a variable, populate it with the message you want, and write its value to a file using a Script Task. Then add this Script Task to an OnError event handler so it executes in response to an error in the SSIS package.

1) Declare a variable: I declared a package-scoped variable called vFullErrorMsg.

2) Configure an OnError event handler at the package level, so the tasks in the event handler execute when the package fails.

3) Add a script task: I renamed the task to Write To Log On Failure.

4) Edit the Script Task: add ErrorDescription and PackageName as read-only variables and vFullErrorMsg as a read/write variable.

5) Use a script similar to the one below:

Public Sub Main()

    'Note: StreamWriter requires Imports System.IO at the top of the script.

    'Read the system variable ErrorDescription into a local variable.
    Dim vErrorDescription As String = Dts.Variables("ErrorDescription").Value.ToString()

    'Read the system variable PackageName into a local variable.
    Dim vPackageName As String = Dts.Variables("PackageName").Value.ToString()

    'Build the full error message and store it in the package variable.
    Dts.Variables("vFullErrorMsg").Value = _
        "ERROR: Package [" & vPackageName & "] Failed. Full Description: " & vErrorDescription

    'Append the message to the log file using a StreamWriter.
    Using sw As StreamWriter = New StreamWriter("D:\Documents\ErrorLog.Txt", True)
        sw.WriteLine(Dts.Variables("vFullErrorMsg").Value.ToString())
    End Using

    Dts.TaskResult = ScriptResults.Success
End Sub

With this setup, the error message is written to D:\Documents\ErrorLog.Txt whenever the package fails. You can use variables or expressions to make the file path dynamic.

:<)

Related Post: Read a flat file to a variable.

Here is my Twitter feed (@SamuelVanga) if you would like to follow me.

Creating Your First SSAS 2012 Tabular Project

No matter how simple they are, new things come with some confusion, at least until you get used to them. Analysis Services 2012 Tabular is no different. This post will look at the steps to create your first Tabular project.

Let's start by firing up SQL Server Data Tools, located under Microsoft SQL Server 2012 RC0, and creating a new project.

SQL Server 2012 SSAS Tabular Project

In the above image, I’ve selected Analysis Services Tabular Project.

You might see an error message similar to the one below. This happens when the server specified in the Workspace Server property doesn't have an Analysis Services instance running in Tabular mode. For instance, in this example the name of my instance is localhost\SQL110, but I purposely changed it to an incorrect name to show this error.

SSAS Tabular Workspace database server error

To fix this, I changed Workspace Server to the correct instance in the properties of model.bim.


Importing Data

The Model menu located at the top allows you to work with the model. You'll begin by importing data from a source.


The next screen lets you select a type of data source. I selected Microsoft SQL Server. On the screen after that, you choose the server and database to use as the source.

You'll provide impersonation information on the next screen. This is important because Analysis Services uses this account to connect to the data source, so make sure the account has the required permissions on both instances: the data source and Analysis Services.

The next screen asks how you want to import the data. You can either choose from a list of tables and views, or specify a query to retrieve the data. I've opted to select from a list; a query sketch for the other option follows below.
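If you choose the query option instead, you supply a T-SQL statement in the wizard. Here's a minimal sketch, assuming the AdventureWorksDW sample database used in this walkthrough (adjust table and column names to your source):

-- Example query for the "write a query" import option.
SELECT
  fis.ProductKey
, fis.CustomerKey
, fis.OrderDate
, fis.SalesAmount
, fis.TaxAmt
FROM dbo.FactInternetSales fis ;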

From the list of tables and views, I selected FactInternetSales, DimProduct, DimProductSubCategory, DimProductCategory, DimCustomer, and DimGeography.


Clicking Finish completes the data import, and the next screen displays the status of the tables and views that were imported.


Toggle to the diagram view to see the tables and the relationships among the objects in the model. You'll notice the wizard identified the relationships that exist in the data source.


Toggle to the grid view to see the data that was imported from the data source into the model. This interface looks similar to an Excel workbook, with one tab for each table in the data source.


Create Measures

The first thing you should do after importing data is create measures. I selected the Sales Amount column and chose Sum from the measure drop-down.


I repeated the same steps for Tax Amount to create another measure.

Deploy To Target Server

Before deploying to the server, take a few seconds to verify the target server in the project's deployment properties. The target Analysis Services instance must be running in Tabular mode.

I then selected to deploy the model.


Browse Your First Tabular Model

I selected the Analyze in Excel option located at the top. This opens an Excel PivotTable with a connection to the model you just deployed, letting you analyze the data in Excel.


Conclusion

Well, well, well. I know this has been a lengthy post, so I'll let you go without further ado, but please use the comments below should you have any questions or comments.

~Sam

Be careful with two digit years

I recently had a scary conversation with a user, which can be broken down to:

User: Why would the data have customers with a birth date of MM/DD/2020?

Me: What are you talking about? Lemme take a look.

Me again, this time with a changed voice: Umm, umm, I plead guilty with an explanation, your honor!

User: Let’s hear your explanation son.

Me: We receive a raw file with customer data. Date of birth is a field in that file, and it is supposed to contain data in ‘DD-Month-YYYY’ format. But apparently, one fine day, the file contained two digits for the year instead of four. My import process decided that the two-digit year 20 meant 2020, not 1920.

User: Okay, I’ll let you go this time, but fix it. Won’t you?

<end of story>.

It turns out SQL Server interprets a two-digit year less than 50 as belonging to the current century, and a year of 50 or greater as belonging to the previous century. This boundary is governed by the two digit year cutoff server option, whose default is 2049.

SELECT CAST('01-January-49' AS DATETIME) [<50]

Result:

<50
-----------------------
2049-01-01 00:00:00.000

(1 row(s) affected)

SELECT CAST('01-January-50' AS DATETIME) [>50]

Result:

>50
-----------------------
1950-01-01 00:00:00.000

(1 row(s) affected)

Here is the Books Online explanation of the two digit year cutoff option. Benjamin Nevarez (t|b) pointed me to this link when I asked for help on Twitter using the #sqlhelp hashtag. Thanks, Benjamin.
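If the default cutoff doesn't suit your data, you can change it at the instance level with sp_configure. A minimal sketch, assuming you have permission to change server configuration (two digit year cutoff is an advanced option, so show advanced options must be enabled first):

-- Make advanced options visible.
EXEC sp_configure 'show advanced options', 1 ;
RECONFIGURE ;

-- Treat two-digit years up to 30 as 20xx; 31 through 99 become 19xx.
EXEC sp_configure 'two digit year cutoff', 2030 ;
RECONFIGURE ;

That said, the safer fix is upstream: insist on four-digit years in the source file instead of letting the server guess the century.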

Cheers!

@SamuelVanga.

You will also like: If I learn something, sorry, but you have to read it!

LAST_VALUE() Analytic Function–SQL Server 2012

Earlier, I discussed the FIRST_VALUE() function, which returns the first value from an ordered set of records. Similarly, LAST_VALUE() returns the last value from an ordered result set, but you are in for a bit of a surprise if you aren't familiar with the enhancements to the SQL window functions in SQL Server 2012.

I'll use the same example data I used earlier.

CREATE TABLE Customers
(CustId INT, OrderDate DATE, OrderAmount MONEY) ;

INSERT Customers VALUES
  (1, '2011-12-03', 12500)
, (1, '2011-10-07', 15000)
, (1, '2011-09-12', 16000)
, (2, '2011-07-11', 900)
, (2, '2011-09-13', 18000)
, (2, '2011-11-03', 7000)
, (3, '2011-01-29', 19000)
, (3, '2011-04-19', 1000)
, (3, '2011-07-09', 12000) ;

We wrote a query using FIRST_VALUE() to calculate the date of first purchase for each customer.

SELECT
c.CustId, c.OrderDate, c.OrderAmount
, FIRST_VALUE(OrderDate) OVER(PARTITION BY CustId ORDER BY OrderDate) CustomerSince
FROM
Customers c

Now, let’s say we want to know the recent purchase date for each customer. Similar to FIRST_VALUE(), there is a LAST_VALUE() function. LAST_VALUE() allows you to return the last value from an ordered result set.

I’ll follow my killer instincts and replace FIRST_VALUE() with LAST_VALUE() in the above query. It should work. Right?

SELECT
c.CustId, c.OrderDate, c.OrderAmount
, LAST_VALUE(OrderDate) OVER(PARTITION BY CustId ORDER BY OrderDate) CustomerSince
FROM Customers C

… but, oops, this is not the result I was expecting: instead of the most recent purchase date for each customer, every row simply returns its own OrderDate.

There are two points to note here. Window and Frame.

A window is the set of records defined by the OVER() clause. Because of PARTITION BY CustId, you can think of the records for each CustId as forming a window in this example.

A frame is a range of rows within a window; in other words, a subset of a window, defined by the ROWS (or RANGE) clause. A frame has boundaries: a begin position and an end position.

Look back at the query using LAST_VALUE(): there is no ROWS clause, so we never defined a frame. In that case SQL Server uses the default frame, RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW, which runs from the first row of the window partition to the current row. The last value in that frame is always the current row itself, which explains the result above; the sketch below spells it out.
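A minimal sketch with the default frame made explicit; this is equivalent to the query above and still returns each row's own OrderDate (CurrentRowDate is just my alias):

-- LAST_VALUE() over the default frame, written out explicitly.
SELECT
  c.CustId, c.OrderDate, c.OrderAmount
, LAST_VALUE(OrderDate) OVER(PARTITION BY CustId ORDER BY OrderDate
      RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS CurrentRowDate
FROM Customers c ;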

Correct usage of the function should instead be:

SELECT
  c.CustId, c.OrderDate, c.OrderAmount
, LAST_VALUE(OrderDate) OVER(PARTITION BY CustId ORDER BY OrderDate
      ROWS BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING) AS MostRecentPurchase
FROM Customers c

Given all this, FIRST_VALUE() provides the expected results even with the default frame, because the first row of the default frame is also the first row of the window partition. See the aggregate alternative sketched below.
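As an aside, for this particular question you don't strictly need LAST_VALUE() at all; a plain windowed aggregate sees the whole partition and sidesteps frames entirely. A minimal alternative sketch (MostRecentPurchase is just my alias):

-- A windowed MAX with no ORDER BY always operates over the entire partition.
SELECT
  c.CustId, c.OrderDate, c.OrderAmount
, MAX(OrderDate) OVER(PARTITION BY CustId) AS MostRecentPurchase
FROM Customers c ;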

Here is an interesting article from Jeremiah Peschka (blog) for further reading: Leaving the Windows Open.

~Sam

Follow Sam on Twitter – @SamuelVanga