Tuesday, June 25, 2013

Windows 8, Client Hyper-V & Networking

A while ago I upgraded my laptop to Windows 8, mostly because I wanted to trial Client Hyper-V as a replacement for Oracle VirtualBox, which I had previously been using with some success, albeit with the occasional VM corruption - never a happy occurrence.

One of my biggest frustrations with Client Hyper-V has been its networking: VirtualBox networking more or less just works, but I've found Client Hyper-V networking much more hit-and-miss in comparison. In the simplest scenario, I wanted a VM hosted in Client Hyper-V on my laptop, with both the laptop and the VM able to see the Internet at the same time. Yeah, I know, doesn't sound like much to ask.

With the help of http://geekswithblogs.net/bjackett/archive/2010/06/06/how-to-configure-remote-desktop-to-hyper-v-guest-virtual-machines.aspx I could run through the Virtual Switch Manager and create Internal and External virtual switches, have both my laptop and the VM see the Internet, and RDP to the VM - but for some reason these settings were lost each time I rebooted, and my laptop in particular could no longer see the Internet.

Thanks to the following post (and comments) I've been able to create a PowerShell script to rectify this each time my laptop is rebooted: http://blog.siliconvalve.com/2012/09/26/fix-windows-8-and-hyper-v-virtual-switch-problems/.

The PowerShell script simply disables and re-enables the "vEthernet (*)" network adapters that Hyper-V Manager created for my Wi-Fi and Ethernet connections:

# Bounce the Hyper-V virtual adapters to restore connectivity after a reboot.
Disable-NetAdapter -Name "vEthernet (WiFi)" -Confirm:$false
Enable-NetAdapter -Name "vEthernet (WiFi)" -Confirm:$false

Disable-NetAdapter -Name "vEthernet (Ethernet)" -Confirm:$false
Enable-NetAdapter -Name "vEthernet (Ethernet)" -Confirm:$false


I then use Windows Task Scheduler to schedule this script to run each time my laptop starts.

On the "General" tab illustrated below, the key parts are to ensure you select an account with appropriate permisssions (eg a member of the local Administators group), select the option to run whether a user is logged on or not, and to run with highest privileges.

On the "Triggers" tab illustrated below, add a trigger for the task to run at system startup.

On the "Actions" tab illustrated below, add a new action to execute the PowerShell script.

The action needs to call the PowerShell executable, passing in the path to the file containing the PowerShell script to be executed.


The key bits here are to set the program/script field to the path to the PowerShell executable, something like "%windir%\System32\WindowsPowerShell\v1.0\powershell.exe", and to set the arguments field to the path to the file containing the PowerShell script. Most importantly (at least in my experience) you need to whack an ampersand (&) character at the start of the arguments value, and enclose the path to the file in single quote (') characters, for example "&'C:\Reset Network Bridge Adapter.ps1'" (excluding the double quote characters).
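If you'd rather script the task creation as well, something along these lines should do the same job on Windows 8 and later (a sketch only - the task name and user below are illustrative, not what I actually used):

# A sketch of creating the equivalent task from PowerShell.
# Supply -Password as well if you want the task to run whether
# the user is logged on or not.
$action = New-ScheduledTaskAction `
    -Execute "$env:windir\System32\WindowsPowerShell\v1.0\powershell.exe" `
    -Argument "&'C:\Reset Network Bridge Adapter.ps1'"
$trigger = New-ScheduledTaskTrigger -AtStartup
Register-ScheduledTask -TaskName "Reset Hyper-V Network Adapters" `
    -Action $action -Trigger $trigger `
    -User "$env:COMPUTERNAME\Administrator" -RunLevel Highest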

Hope this helps someone.

Saturday, June 1, 2013

SSIS: Time zone conversion

I needed to convert date/time data within a SSIS data flow from US Central Daylight Time (UTC-5:00) to my local timezone, which could be either AU Central Standard Time or AU Central Daylight Time (UTC+9:30 or UTC+10:30). The source data was guaranteed to always be UTC-5:00, so I only needed to allow for the potential variance due to AU daylight saving.

I used an SSIS script transform to achieve the desired result, with the key code contained in the ProcessInputRow method of the script transform:

public override void Input_ProcessInputRow(InputBuffer Row)
{
    // Treat the incoming value as UTC-5:00, convert it to the AU Central
    // timezone (which takes care of daylight saving for us), then take
    // the resulting local date/time value.
    Row.PurchaseDateLocalTime = TimeZoneInfo.ConvertTimeBySystemTimeZoneId(
        new DateTimeOffset(Row.PurchaseDate, new TimeSpan(-5, 0, 0)),
        "Cen. Australia Standard Time").DateTime;
}

Thursday, March 14, 2013

Gotcha: BizTalk WCF-SQL Adapter & Table Variables

By now you probably know that the BizTalk WCF-SQL adapter doesn't like temporary tables in stored procedures when generating metadata (e.g. refer to Thiago Almeida's post: http://connectedthoughts.wordpress.com/2011/01/18/wcf-sql-versus-the-old-sql-adapter/).

There are a couple of widely documented workarounds, the two most common being the use of the FMTONLY setting in SQL Server (as described in Thiago's post) or replacing the temporary tables with table variables instead.
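As a rough sketch of the FMTONLY flavour (the procedure and table names here are illustrative only - and see Thiago's post for the caveats, chief among them that the procedure body genuinely executes while the adapter generates metadata):

CREATE PROCEDURE dbo.GetCandidateRecords
AS
BEGIN
    -- Forces full execution even when the caller (such as the WCF-SQL
    -- adapter's metadata generation) has SET FMTONLY ON.
    SET FMTONLY OFF;

    CREATE TABLE #work (Id INT);

    INSERT INTO #work (Id)
    SELECT Id FROM dbo.SourceTable;

    SELECT Id FROM #work;

    DROP TABLE #work;
END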

If you do decide to head down the path of replacing temporary tables in your stored procedures with table variables, something to be aware of is how variables are scoped within T-SQL.

Unlike other languages you might be familiar with, such as C#, variables in T-SQL are scoped to the batch (or, in the case of a stored procedure, to the entire stored procedure). So if, say, you have a loop in your stored procedure and you declare a variable inside the loop, you may not get the results you expect. Because the scope of the variable is the entire stored procedure, it's not re-initialised each time around the loop (unless you do that explicitly), it's only re-declared - and in T-SQL, if a variable has already been declared the first time around, re-declaring it does nothing.

Where this ties in to the WCF-SQL adapter and the approach of replacing temporary tables with table variables is a common scenario: a T-SQL loop where, inside the loop's block, you declare a temporary table, do some work, and at the end of the block drop the temporary table (prior to the next iteration of the loop).

Unfortunately, when you replace this with a table variable, you end up with a similar statement to create the table variable at the start of the loop's block (replacing the CREATE TABLE... with a DECLARE @table TABLE... instead) - but there's no explicit way (at least that I know of) to do the equivalent of a DROP TABLE at the end of the block. Combine this with T-SQL's scoping rules, and you can end up with some unexpected results.

For example:
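A minimal sketch of the pattern (the table and its contents are illustrative only):

DECLARE @i INT = 1;
WHILE @i <= 3
BEGIN
    -- Re-declared, but NOT re-initialised, each time around the loop.
    DECLARE @table TABLE (Iteration INT);

    INSERT INTO @table (Iteration) VALUES (@i);

    -- Returns 1 row on the first iteration, 2 rows on the second and
    -- 3 rows on the third: rows accumulate across iterations.
    SELECT * FROM @table;

    SET @i = @i + 1;
END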

Hmmm... Coming from a C# background, I might have expected the results to be somewhat different: a single record containing the iteration number, each time around the loop. That's certainly what we would have got with the previous approach using create/drop of temporary tables.

The results are due to the @table table variable being re-declared but not re-initialised each time around the loop. So, how can we reproduce the behaviour of the temporary table approach without the ability to explicitly drop ("undeclare") the table variable? Well, the best I've come up with is to declare the table variable, and then immediately follow the declaration with a DELETE statement to remove any records from the table variable. The first time around the loop this of course does nothing, but on subsequent iterations it deletes any records still present in the table variable from the previous iteration.
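Applied to the sketch above:

DECLARE @i INT = 1;
WHILE @i <= 3
BEGIN
    DECLARE @table TABLE (Iteration INT);

    -- Does nothing on the first iteration; on subsequent iterations it
    -- removes the rows left over from the previous time around the loop.
    DELETE FROM @table;

    INSERT INTO @table (Iteration) VALUES (@i);

    SELECT * FROM @table;

    SET @i = @i + 1;
END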

Using this approach we get the expected results: a single row containing the iteration number, each time around the loop.


HTH!

Tuesday, March 12, 2013

Using a custom SSIS script component source to process NetCDF files

In case you were interested...

Recently I was involved in a data warehousing project with a number of SSIS-based ETL processes loading data into a dimensional model in the data warehouse from a variety of data sources. One of these data sources was weather forecast data (daily minimum and maximum temperatures, average rainfall) sourced daily from a government bureau in NetCDF format.

Prior to this project I'd never even heard of NetCDF, but nevertheless we needed to be able to work with data in this format as part of our ETL processes. We discovered the Microsoft Scientific DataSet (SDS) libraries are the preferred mechanism for interacting with NetCDF data from the .NET platform. Although these libraries are part of a Microsoft Research initiative and are not licensed for commercial use, we obtained permission from Microsoft to use them for our purposes.

We considered the following alternatives:
  1. Custom .NET web service leveraging the Microsoft SDS libraries (T-SQL INSERT)
  2. Custom .NET web service leveraging the Microsoft SDS libraries (SqlBulkCopy)
  3. Custom SSIS script component source leveraging the Microsoft SDS libraries
Originally it was proposed to utilise the Custom .NET web service leveraging the Microsoft SDS libraries (T-SQL INSERT) alternative, with a SSIS package downloading and extracting each forecast NetCDF file prior to invoking the web service for subsequent processing.

A prototype ASP.NET (ASMX) web service was constructed with a single LoadNetCDF service operation. The service used the Microsoft SDS libraries to read the source NetCDF file and dynamically constructed a T-SQL INSERT statement to insert the data into a specified table within a specified database on a specified SQL Server instance.

While this alternative provided a potentially reusable web service that could fit within a service-oriented architecture, it introduced an additional dependency on the infrastructure required to host the web service. Additionally, during testing the web service was observed to perform relatively poorly - the SSIS package could not be configured with a large enough timeout to await the web service response (> 300 seconds).

As a result, a number of additional alternative approaches were considered.

The Custom .NET web service leveraging the Microsoft SDS libraries (SqlBulkCopy) alternative retained the usage of the custom web service from SSIS, simply replacing the use of dynamic T-SQL INSERT statements with usage of the .NET SqlBulkCopy class instead. Rather than connecting to the database and executing an INSERT statement for each candidate record extracted from the source NetCDF file, this approach progressively constructed an in-memory .NET DataTable containing all of the data extracted from the source NetCDF file, prior to using the .NET SqlBulkCopy class to perform a bulk insert of the data into the database. This alternative exhibited significantly improved performance (from > 300 seconds to < 30 seconds).
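For illustration, the core of the SqlBulkCopy approach looks something like this (a sketch only - the table and column names here are made up, not the project's actual schema):

using System.Data;
using System.Data.SqlClient;

// Build the full result set in memory first...
var table = new DataTable();
table.Columns.Add("Latitude", typeof(double));
table.Columns.Add("Longitude", typeof(double));
table.Columns.Add("MaxTemperature", typeof(double));

// ... rows added here from the NetCDF data via the SDS libraries ...

// ...then push it to SQL Server in a single bulk operation, rather than
// one round-trip per INSERT. (connectionString is assumed to be defined.)
using (var connection = new SqlConnection(connectionString))
{
    connection.Open();
    using (var bulkCopy = new SqlBulkCopy(connection))
    {
        bulkCopy.DestinationTableName = "dbo.WeatherForecast";
        bulkCopy.WriteToServer(table);
    }
}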

The Custom SSIS script component source leveraging the Microsoft SDS libraries alternative replaced the SSIS web service call with a standard SSIS data flow task. The data flow task was responsible for extracting the forecast data from its source NetCDF file and inserting it into the target database. The key to this alternative was the use of a SSIS script component as the data source for the data flow task. The script component accepted a single variable parameter containing the path to the source NetCDF file. It then utilised .NET code extracted from the prototype web service to read data from the NetCDF file using the Microsoft SDS libraries and write this data to a SSIS output buffer, making it available for stream-based consumption within the data flow task. This alternative also exhibited significantly improved performance (from > 300 seconds to ≈ 10 seconds).
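The heart of the script component source is its CreateNewOutputRows method. Something along these lines (again a sketch - the SSIS variable, output column and NetCDF variable names are illustrative):

public override void CreateNewOutputRows()
{
    // Variables.NetCdfFilePath is the SSIS variable holding the path to
    // the downloaded NetCDF file.
    using (var dataset = Microsoft.Research.Science.Data.DataSet.Open(Variables.NetCdfFilePath))
    {
        // Read a 2D grid of values from the named NetCDF variable.
        double[,] maxTemps = dataset.GetData<double[,]>("max_temp");

        for (int i = 0; i < maxTemps.GetLength(0); i++)
        {
            for (int j = 0; j < maxTemps.GetLength(1); j++)
            {
                // Write each cell to the output buffer for stream-based
                // consumption by the rest of the data flow.
                Output0Buffer.AddRow();
                Output0Buffer.MaxTemperature = maxTemps[i, j];
            }
        }
    }
}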

The Custom SSIS script component source leveraging the Microsoft SDS libraries alternative was selected as the preferred alternative, as it removed the infrastructure dependency for hosting the custom web service and significantly improved performance. In addition, although the web service approach may have formed a potentially reusable component within a service-oriented architecture, the design of the prototype web service did not meet the levels of reusability typically associated with such goals, and providing that was not within the scope of the project.

Monday, February 25, 2013

ESB Toolkit 2.1: Default Resolver Service Url Property

Not a biggie, but I couldn't find an answer to this on the net, so I thought I'd write about it...

A while ago I was doing some work with the BizTalk ESB Toolkit 2.1, creating a LOT of itineraries in the Visual Studio itinerary designer. Naturally, I'd chosen to live in a world of pain and not install the ESB Toolkit web services into the "Default Web Site" living at port 80 in IIS (I know, who would ever want to do that!). Instead, I'd installed them to my own "ESB Site", living on port 82.

Unfortunately, the Visual Studio itinerary designer defaults the Resolver Service Url property assuming you've deployed the ESB web services to port 80 - the default value for this property is "http://localhost/ESB.ResolverServices.WCF/ResolverService.svc". This meant that for each itinerary I created in the designer, I needed to manually change the Resolver Service Url property to "http://localhost:82/ESB.ResolverServices.WCF/ResolverService.svc" instead.


Yeah, I know, not a big deal, but it irked me that the designer wasn't smart enough to realise I'd deployed the services to port 82. I held out hope that there was something I was missing - some magic config setting or Registry key I'd overlooked that set the default value for this property. At the time I didn't have enough bandwidth to investigate any further, until recently...

And the answer is... It looks like this can't be changed: the default value for the property is set via a DefaultValue attribute applied within the code itself. Leaping into ILSpy and disassembling the Microsoft.Practices.Services.ItineraryDsl.ItineraryModel.ResolverServiceUrl property within the Microsoft.Practices.Services.Itinerary.Dsl assembly, we can see the following:

[Editor(typeof(UrlEditor), typeof(UITypeEditor)),
 TypeConverter(typeof(TypeConverter)),
 DisplayNameResource("Microsoft.Practices.Services.ItineraryDsl.ItineraryModel/ResolverServiceUrl.DisplayName", typeof(ItineraryDslDomainModel), "Microsoft.Practices.Services.ItineraryDsl.GeneratedCode.DomainModelResx"),
 CategoryResource("Microsoft.Practices.Services.ItineraryDsl.ItineraryModel/ResolverServiceUrl.Category", typeof(ItineraryDslDomainModel), "Microsoft.Practices.Services.ItineraryDsl.GeneratedCode.DomainModelResx"),
 DescriptionResource("Microsoft.Practices.Services.ItineraryDsl.ItineraryModel/ResolverServiceUrl.Description", typeof(ItineraryDslDomainModel), "Microsoft.Practices.Services.ItineraryDsl.GeneratedCode.DomainModelResx"),
 DefaultValue("http://localhost/ESB.ResolverServices.WCF/ResolverService.svc"),
 DomainObjectId("07aaf7c2-0163-47fd-a118-57539e8d3eba")]
public string ResolverServiceUrl


Oh well... Anyway, I hope this saves someone the fruitless search I went through - or that someone can tell me I've missed something obvious and there is a way to change this default.

Wednesday, July 25, 2012

Formatting DateTime values from another timezone

For a while I've been battling with formatting DateTime values generated in another timezone using .NET's DateTime string formatting capabilities. For instance, I have an XSD datetime value of "2012-07-26T00:00:00+10:00" (generated in Australian Eastern Standard Time), but I'm in a different timezone (Australian Central Standard Time, +09:30). If I parse the XSD datetime value into a .NET DateTime, it automatically gets converted to my timezone: "2012-07-25T23:30:00+09:30". If I then wanted to get just the date value (in the original timezone), I'd actually get the wrong date!

Today I finally stumbled across the DateTimeOffset class, thanks to http://geekswithblogs.net/BlackRabbitCoder/archive/2012/03/08/c.net-little-wonders-ndash-the-datetimeoffset-struct.aspx.

Basically this struct preserves the datetime in its original timezone, including the offset. So for example (from LINQPad):
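A minimal sketch (the output comments assume a +09:30 local timezone):

var xsd = "2012-07-26T00:00:00+10:00";

// DateTime converts the value to the local timezone on parsing...
DateTime dt = DateTime.Parse(xsd);
Console.WriteLine(dt.ToString("yyyy-MM-dd"));   // 2012-07-25 - wrong date!

// ...whereas DateTimeOffset preserves the original offset.
DateTimeOffset dto = DateTimeOffset.Parse(xsd);
Console.WriteLine(dto.ToString("yyyy-MM-dd"));  // 2012-07-26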


So, if you want to work with dates in their original timezone, DateTimeOffset is your friend!! This can be particularly important when you're working with XSD datetimes, as you commonly do in BizTalk solutions, and need to format datetime values without converting between timezones.

Error importing WSDL file into BizTalk solution

Today I was importing a WSDL file provided by a third-party web service into a Visual Studio BizTalk solution using the "Metadata Files" option in the BizTalk WCF Service Consuming Wizard, and ran into the following error:

"Unable to add metadata file ... (System.InvalidOperationException) There is an error in XML document (1, 40). (System.Xml.XmlException) Name cannot begine with the '.' character, hexadecimal value 0x00. Line 1, position 40."



I opened the file and took a look at line 1, position 40: it looked perfectly fine to me. The only thing I noticed was that the file's encoding property (in the Visual Studio properties window) changed when the file was opened in Visual Studio: from Unicode, to UTF-8 (consistent with the encoding specified in the XML declaration).

I saved the modified file and re-ran the BizTalk WCF Service Consuming Wizard, and the file imported fine. I then re-ran the wizard with the original file, and the same error occurred. I also tried the wizard using the "Metadata Exchange" option (browsing directly to the live WSDL) and encountered the same error.

I can only conclude that the original WSDL file was generated in the Unicode (UTF-16) encoding, which the BizTalk WCF Service Consuming Wizard didn't like, presumably because it wasn't consistent with the UTF-8 encoding specified in the XML declaration. The hexadecimal value 0x00 in the error message fits this theory: ASCII characters encoded as UTF-16 have a null byte in every second position, which is exactly the sort of thing a UTF-8 reader would trip over.

Anyway, hope this might help someone out there.