Uploading XL Deploy DAR package via PowerShell

Introduction

Recently, at my customer’s site, I started using a XebiaLabs product called XL Deploy. It is a software solution that helps automate the deployment process. Many of its operations are exposed via a REST API. Uploading packages is one of them, and it follows the multipart/form-data standard. If you ever need to do this via PowerShell, you may be surprised by how difficult such a seemingly trivial task turns out to be. As you may know, PowerShell doesn’t play well with multipart/form-data requests. I will show you a cmdlet I wrote that comes in handy for accomplishing this task.

Welcome Send-Package cmdlet

Before showing you the actual cmdlet that sends a package to XL Deploy, I need to mention that its core is the Invoke-MultipartFormDataUpload cmdlet, about which I blogged earlier.

First of all, two small helper cmdlets. As the file name needs to be specified in the URL, we need to encode it. For this particular case simpler code would do; however, I will show you a cmdlet that I also use for other XL Deploy calls where this approach is necessary.

<############################################################################################ 
    Encodes each part of the path separately.
############################################################################################>
function Get-EncodedPathParts()
{
    [CmdletBinding()]
    param
    (
        [string][parameter(Mandatory = $true)]$PartialPath
    )
    BEGIN { }
    PROCESS
    {
        return ($PartialPath -split "/" | %{ [System.Uri]::EscapeDataString($_) }) -join "/"
    }
    END { }
}
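
For instance, spaces in a path are escaped per segment while the "/" separators are preserved (the path shown is purely illustrative):

Get-EncodedPathParts -PartialPath "Applications/My App/1.0.0"
# returns: Applications/My%20App/1.0.0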

The other one makes sure that the URL passed in is formatted correctly for XL Deploy and that it is a valid URL.

<############################################################################################ 
    Verifies that the endpoint URL is well formatted and a valid URL.
############################################################################################>
function Test-EndpointBaseUrl()
{
	[CmdletBinding()]
	param
	(
		[Uri][parameter(Mandatory = $true)]$Endpoint
	)
	BEGIN
	{
		Write-Verbose "Endpoint = $Endpoint"
	}
	PROCESS 
	{
		$xldServer = $Endpoint.AbsoluteUri.TrimEnd('/')

		if (-not $xldServer.EndsWith("deployit", "InvariantCultureIgnoreCase"))
		{
			$xldServer = "$xldServer/deployit"
		}

		# takes in consideration both http and https protocol
		if (-not $xldServer.StartsWith("http", "InvariantCultureIgnoreCase"))
		{
			$xldServer = "http://$xldServer"
		}

		$uri = $xldServer -as [System.Uri] 
		if (-not ($uri.AbsoluteUri -ne $null -and $uri.Scheme -match '^https?$'))
		{
			throw "Provided endpoint address is not a valid URL."
		}

		return $uri
	}
	END { }
}
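
A couple of hypothetical calls illustrate the normalization it performs:

Test-EndpointBaseUrl "http://xld.majcica.com:4516"
# returns: http://xld.majcica.com:4516/deployit

Test-EndpointBaseUrl "http://xld.majcica.com:4516/deployit"
# returns: http://xld.majcica.com:4516/deployit (unchanged)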

These cmdlets, together with the one I described in the previous post, need to be available to our new upload cmdlet. Once they are, we can write the cmdlet I named Send-Package.

<############################################################################################ 
    Uploads the given package to XL Deploy server.
############################################################################################>
function Send-Package()
{
    [CmdletBinding()]
    PARAM
    (
        [string][parameter(Mandatory = $true)][ValidateNotNullOrEmpty()]$PackagePath,
        [Uri][parameter(Mandatory = $true)][ValidateNotNullOrEmpty()]$EndpointUrl,
        [System.Management.Automation.PSCredential]$Credential
    )
    BEGIN
    {
        if (-not (Test-Path $PackagePath))
        {
            $errorMessage = ("Package {0} missing or unable to read." -f $PackagePath)
            $exception =  New-Object System.Exception $errorMessage
			$errorRecord = New-Object System.Management.Automation.ErrorRecord $exception, 'SendPackage', ([System.Management.Automation.ErrorCategory]::InvalidArgument), $PackagePath
			$PSCmdlet.ThrowTerminatingError($errorRecord)
        }

        $EndpointUrl = Test-EndpointBaseUrl $EndpointUrl
    }
    PROCESS
    {
        $fileName = Split-Path $PackagePath -Leaf
        $fileName = Get-EncodedPathParts $fileName

        $uri = "$EndpointUrl/package/upload/$fileName"

        $response = Invoke-MultipartFormDataUpload -InFile $PackagePath -Uri $uri -Credential $Credential

        return ([xml]$response).'udm.DeploymentPackage'.id
    }
    END { }
}

You can now invoke this cmdlet and pass the required parameters. If the call succeeds, you will get back the package id.

For example:

$uri = "http://xld.majcica.com:4516"
$filePath = "C:\Users\majcicam\Desktop\package.dar"
$credentials = (Get-Credential)

$packageId = Send-Package $filePath $uri $credentials

Write-Output "Package was uploaded successfully with the following id: '$packageId'"

In case you do not want to provide credentials interactively, the following cmdlet may help you:

<############################################################################################ 
    Given the username and password strings, create a valid PSCredential object.
############################################################################################>
function New-PSCredential()
{
	[CmdletBinding()]
	param
	(
		[string][parameter(Mandatory = $true)]$Username,
		[string][parameter(Mandatory = $true)]$Password
	)
	BEGIN
	{
		Write-Verbose "Username = $Username"
	}
	PROCESS
	{
		$securePassword = ConvertTo-SecureString -String $Password -AsPlainText -Force
		$credential = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList $Username, $securePassword

		return $credential
	}
	END { }
}
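
Combined with Send-Package, a fully non-interactive upload could then look like this (the username, password and path shown are placeholders):

$credentials = New-PSCredential -Username "john.doe" -Password "P@ssw0rd"
$packageId = Send-Package "C:\Users\majcicam\Desktop\package.dar" "http://xld.majcica.com:4516" $credentials

Keep in mind that embedding plain-text passwords in scripts is a security trade-off; prefer Get-Credential whenever interactive input is acceptable.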

That’s all folks! I tested these scripts with XL Deploy 5.1.0 and 5.1.3 and did not encounter any issues. If you do, do not hesitate to comment.

Happy deploying!

PowerShell tips and tricks – Multipart/form-data requests

Introduction

If you ever came across the need to invoke a request that must comply with the multipart/form-data standard in PowerShell, you probably found out quite quickly that none of the commonly used cmdlets support it.
Both Invoke-WebRequest and Invoke-RestMethod are unaware of how to format the request body so that it complies with the multipart/form-data standard. Searching for a solution, the best you could find is a couple of questions on StackOverflow showing how to manually compose the message body and then send it with one of the above-mentioned cmdlets. I did it too, yet I was not satisfied with the answers I found. Although what they propose could be written in a slightly better way, it is still not optimal, first of all because of its memory footprint: in case you are transmitting a large amount of data, all of the objects are loaded into memory.
After trying this approach, I dug into the .NET Framework and found out that all we need is right there. True, it is only available from .NET 4.5 onwards, but if that is not an obstacle, I would advise you to follow that approach. HttpClient and the related classes have everything necessary to support the multipart/form-data standard. I initially created a prototype in the form of a small C# application and, once it succeeded, translated all of it into PowerShell.

In this post I will show you both approaches: the pure PowerShell one, which formats the request body and transmits it via the Invoke-WebRequest cmdlet, and a less resource-intensive one based on the HttpClient .NET class.

Let’s start.

Throw everything in

The multipart/form-data standard requires the message body to follow a certain structure. You can read more about it in RFC 2388.
Aside from the body structure, there is the concept of a boundary. A boundary is nothing else than a unique string used for delimitation purposes inside the message body. It is specified in the request header and used inside the message body to enclose the files we intend to transmit.
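
To make the structure concrete, here is a minimal sketch of what such a request looks like on the wire (the boundary value and file name are illustrative):

POST /deployit/package/upload/package.dar HTTP/1.1
Content-Type: multipart/form-data; boundary=8c67cbb3-d4a9-4698-b1d6-2d78588639a5

--8c67cbb3-d4a9-4698-b1d6-2d78588639a5
Content-Disposition: form-data; name="fileData"; filename="package.dar"
Content-Type: application/octet-stream

(binary file content goes here)
--8c67cbb3-d4a9-4698-b1d6-2d78588639a5--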

As I mentioned earlier, this approach constructs the body string in a variable and once ready invokes the Invoke-WebRequest cmdlet to execute the call.

function Invoke-MultipartFormDataUpload
{
    [CmdletBinding()]
    PARAM
    (
        [string][parameter(Mandatory = $true)][ValidateNotNullOrEmpty()]$InFile,
        [string]$ContentType,
        [Uri][parameter(Mandatory = $true)][ValidateNotNullOrEmpty()]$Uri,
        [System.Management.Automation.PSCredential]$Credential
    )
    BEGIN
    {
        if (-not (Test-Path $InFile))
        {
            $errorMessage = ("File {0} missing or unable to read." -f $InFile)
            $exception =  New-Object System.Exception $errorMessage
			$errorRecord = New-Object System.Management.Automation.ErrorRecord $exception, 'MultipartFormDataUpload', ([System.Management.Automation.ErrorCategory]::InvalidArgument), $InFile
			$PSCmdlet.ThrowTerminatingError($errorRecord)
        }

        if (-not $ContentType)
        {
            Add-Type -AssemblyName System.Web

            $mimeType = [System.Web.MimeMapping]::GetMimeMapping($InFile)
            
            if ($mimeType)
            {
                $ContentType = $mimeType
            }
            else
            {
                $ContentType = "application/octet-stream"
            }
        }
    }
    PROCESS
    {
        $fileName = Split-Path $InFile -Leaf
        $boundary = [guid]::NewGuid().ToString()

        $fileBin = [System.IO.File]::ReadAllBytes($InFile)
        $enc = [System.Text.Encoding]::GetEncoding("iso-8859-1")

        $template = @'
--{0}
Content-Disposition: form-data; name="fileData"; filename="{1}"
Content-Type: {2}

{3}
--{0}--

'@

        $body = $template -f $boundary, $fileName, $ContentType, $enc.GetString($fileBin)

        try
        {
            return Invoke-WebRequest -Uri $Uri `
                                     -Method Post `
                                     -ContentType "multipart/form-data; boundary=$boundary" `
                                     -Body $body `
                                     -Credential $Credential
        }
        catch [Exception]
        {
            $PSCmdlet.ThrowTerminatingError($_)
        }
    }
    END { }
}

As you can see, we are preparing the message body “by hand” and executing Invoke-WebRequest in order to send our file. This is not efficient at all, as it encodes the whole file as a string and keeps it in memory. Consider transmitting large files and you can imagine the memory footprint of this approach. Aside from that, it is just not elegant!

Wait a moment, what a waste

The previously shown method is, as said, not elegant and wastes resources. A better approach is to let the HttpClient class handle everything. It is way more efficient, as it uses a stream to read our file and hands it to an HTTP StreamContent. It is a less resource-intensive and more elegant solution.

Let’s now see what our refactored Invoke-MultipartFormDataUpload cmdlet looks like:

function Invoke-MultipartFormDataUpload
{
    [CmdletBinding()]
    PARAM
    (
        [string][parameter(Mandatory = $true)][ValidateNotNullOrEmpty()]$InFile,
        [string]$ContentType,
        [Uri][parameter(Mandatory = $true)][ValidateNotNullOrEmpty()]$Uri,
        [System.Management.Automation.PSCredential]$Credential
    )
    BEGIN
    {
        if (-not (Test-Path $InFile))
        {
            $errorMessage = ("File {0} missing or unable to read." -f $InFile)
            $exception =  New-Object System.Exception $errorMessage
			$errorRecord = New-Object System.Management.Automation.ErrorRecord $exception, 'MultipartFormDataUpload', ([System.Management.Automation.ErrorCategory]::InvalidArgument), $InFile
			$PSCmdlet.ThrowTerminatingError($errorRecord)
        }

        if (-not $ContentType)
        {
            Add-Type -AssemblyName System.Web

            $mimeType = [System.Web.MimeMapping]::GetMimeMapping($InFile)
            
            if ($mimeType)
            {
                $ContentType = $mimeType
            }
            else
            {
                $ContentType = "application/octet-stream"
            }
        }
    }
    PROCESS
    {
        Add-Type -AssemblyName System.Net.Http

        $httpClientHandler = New-Object System.Net.Http.HttpClientHandler

        if ($Credential)
        {
            $networkCredential = New-Object System.Net.NetworkCredential @($Credential.UserName, $Credential.Password)
            $httpClientHandler.Credentials = $networkCredential
        }

        $httpClient = New-Object System.Net.Http.HttpClient $httpClientHandler

        $packageFileStream = New-Object System.IO.FileStream @($InFile, [System.IO.FileMode]::Open)
        
        $contentDispositionHeaderValue = New-Object System.Net.Http.Headers.ContentDispositionHeaderValue "form-data"
        $contentDispositionHeaderValue.Name = "fileData"
        $contentDispositionHeaderValue.FileName = (Split-Path $InFile -Leaf)

        $streamContent = New-Object System.Net.Http.StreamContent $packageFileStream
        $streamContent.Headers.ContentDisposition = $contentDispositionHeaderValue
        $streamContent.Headers.ContentType = New-Object System.Net.Http.Headers.MediaTypeHeaderValue $ContentType
        
        $content = New-Object System.Net.Http.MultipartFormDataContent
        $content.Add($streamContent)

        try
        {
            $response = $httpClient.PostAsync($Uri, $content).Result

            if (-not $response.IsSuccessStatusCode)
            {
                $responseBody = $response.Content.ReadAsStringAsync().Result
                $errorMessage = "Status code {0}. Reason {1}. Server reported the following message: {2}." -f $response.StatusCode, $response.ReasonPhrase, $responseBody

                throw [System.Net.Http.HttpRequestException] $errorMessage
            }

            return $response.Content.ReadAsStringAsync().Result
        }
        catch [Exception]
        {
            $PSCmdlet.ThrowTerminatingError($_)
        }
        finally
        {
            # make sure the file stream is released even when the upload fails
            if ($null -ne $packageFileStream)
            {
                $packageFileStream.Dispose()
            }

            if ($null -ne $httpClient)
            {
                $httpClient.Dispose()
            }

            if ($null -ne $response)
            {
                $response.Dispose()
            }
        }
    }
    END { }
}
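
Both versions expose the same signature, so a hypothetical invocation looks identical in either case; only the return value differs (the first version returns the Invoke-WebRequest response object, the refactored one the response body as a string):

$credential = Get-Credential
$response = Invoke-MultipartFormDataUpload -InFile "C:\temp\package.dar" `
                                           -Uri "http://xld.majcica.com:4516/deployit/package/upload/package.dar" `
                                           -Credential $credential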

Conclusion

There is still room for improvement; however, in my case this is good enough and I will not continue extending this cmdlet. If you think it falls short, please extend it and let me know. As an idea, you could allow passing multiple files via the pipeline and execute the upload only once, in the END block; that way the cmdlet would be more efficient and flexible. Supporting other common parameters, such as Proxy, would also benefit others. If you do extend it, let me know; I would be happy to hear from you in the comments.

PowerShell tips and tricks – Retrieving TFS collections and projects

Introduction

The following post is not really about a tip or a trick regarding PowerShell itself. It is more about how to leverage some TFS libraries to automate processes around TFS via a PowerShell script. It will not show any fancy PowerShell technique, but rather a way to query a TFS server of your choice, extract the necessary information and eventually make changes.

In this first tip we will see the essentials: how to connect to TFS, retrieve all of the available collections and list all of the projects in each TPC. Imagine that you are working on a TFS instance containing multiple collections, each one in turn containing many projects (by many I mean hundreds of projects in total). Verifying and changing settings on each one manually would be neither easy nor convenient. As there is no UI for executing bulk operations on TFS, the easiest and most logical way to interact with it is through PowerShell. Let’s see how we can achieve that.

Prerequisites

Before we start: in order to use these scripts you need at least PowerShell v3 installed and the necessary TFS libraries registered in the GAC. The easiest way to make sure you have those libraries is to have Visual Studio installed on the machine you are executing the script from. It is also good for the Visual Studio version to match your TFS installation: if you have TFS 2013, the best option is Visual Studio 2013. As for PowerShell, you probably already have version 3 installed on your machine. The easiest way to check is to execute the following command:

$PSVersionTable.PSVersion
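
On a machine with PowerShell 3 this prints something like the following:

Major  Minor  Build  Revision
-----  -----  -----  --------
3      0      -1     -1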

If it turns out that you do not have PowerShell version 3 installed, you can get it by following this link: Windows Management Framework 3.0

In order to perform the calls to the TFS server you need sufficient rights. Managing collection objects requires the Edit instance-level information permission, which is granted by default only to TFS administrators. In case you do not have sufficient rights, you may encounter an exception reporting:
Exception calling "GetCollections" with "0" argument(s): "Access Denied: Mario Majcica needs the following permission(s) to perform this action: Edit instance-level information"

Just for the sake of completeness I will add that this is not the only way of establishing a dialog with TFS; you could also extract and set some of this information through the REST API. Furthermore, some of the objects I treat here can be retrieved in a slightly more efficient way, at the expense of simplicity.
In case you are interested in those techniques, you can read the following post: Building a TFS 2015 PowerShell Module using Nuget. Bear in mind that it does not target beginners as this blog post does, and it requires a deeper understanding of the TFS Object Model and PowerShell.

Again, some utilities such as Microsoft Visual Studio Team Foundation Server 2013 Power Tools deliver PowerShell cmdlets that can be used to work with different features of TFS such as changesets, shelvesets, workspaces and more. If you are interested in the details of this approach, I advise you to Google the topic or get your hands on a book called Windows PowerShell 4.0 for .NET Developers.

Let’s script

First things first. In order to reference the necessary libraries we need to issue the following command.

Add-Type -AssemblyName "Microsoft.TeamFoundation.Client, Version=12.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a"

This may seem quite a complicated way to load an assembly; unfortunately, only a certain pre-defined set of assemblies can be loaded by partial name. For all the rest, a fully qualified name is necessary. In case you decide to use some other TFS functionality, it may be necessary to reference other libraries.

Note: in the case of Visual Studio 2015, the object model client libraries are removed from the GAC. However, the necessary libraries are still available. In order to load them you need to approach the load in a different manner.

Add-Type -Path "C:Program Files (x86)Microsoft Visual Studio 14.0Common7IDECommonExtensionsMicrosoftTeamFoundationTeam ExplorerMicrosoft.TeamFoundation.Client.dll"

You need to point the Add-Type cmdlet to a path. It may vary based on the folder in which your Visual Studio is installed.
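
If you prefer not to hard-code the path, a small sketch like the following can locate the assembly for you (assuming a default Visual Studio 2015 installation under Program Files):

# search the Visual Studio 2015 folder for the client assembly and load the first match
$vsRoot = "${env:ProgramFiles(x86)}\Microsoft Visual Studio 14.0"
$clientDll = Get-ChildItem -Path $vsRoot -Filter "Microsoft.TeamFoundation.Client.dll" -Recurse | Select-Object -First 1
Add-Type -Path $clientDll.FullName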

There is another option that may be a nice way to satisfy this dependency: if you are an early adopter of PowerShell 5, you may retrieve the necessary package via OneGet.

Walking through TFS objects

All TFS libraries share the same entry point. There are multiple factory classes, exposing static methods, that give us instances of the classes implementing the requested interface, through which we perform the desired operations. For a less experienced DevOps engineer (at least less Dev, more Ops) this may not make sense, so let me try to explain it a bit better.
In order to use an object (a non-static class in this particular case) we need to construct an instance of it (and probably pass some parameters for its initialization). Via the factory pattern, adopted by Microsoft for this particular set of libraries, we are able to get the right instance of the class implementing the interface we need. Still too complicated? Then I’m sorry, I’m already going way out of scope here.

In order to get the object through which we will obtain the services we need, we call the static method GetConfigurationServer on the TfsConfigurationServerFactory class.
This call returns an instance of the ConfigurationServer class, which will be our main entry point for all the services.

$uri = "http://my.tfs.local:8080/tfs"
$tfsConfigurationServer = [Microsoft.TeamFoundation.Client.TfsConfigurationServerFactory]::GetConfigurationServer($uri)

As you can see, I’m passing a parameter to the GetConfigurationServer method: the address of our TFS. It means that all of the operations performed through its service calls will point to the TFS instance indicated by this address.
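
If you want to fail fast on connectivity or permission problems, you can optionally force the authentication round-trip right away instead of deferring it to the first service call:

$tfsConfigurationServer.EnsureAuthenticated()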

Once our entry point is obtained, we can ask it for an instance of the class that implements the logic necessary to perform the actions we need.
In our case this is a class that implements the ITeamProjectCollectionService interface.
We can request it with the following command:

$tpcService = $tfsConfigurationServer.GetService("Microsoft.TeamFoundation.Framework.Client.ITeamProjectCollectionService")

Now that I have the instance of the class implementing the interface which declares the methods I’m interested in, I can call the method that returns a collection of TeamProjectCollection objects.
This method is called GetCollections and it is invoked as follows.

$tpcService.GetCollections() 

I’m now able to iterate through this collection and retrieve, among other things, the name of each project collection. Each object in the collection represents a project collection on our TFS. There should always be at least one element in it.

Let’s recap our script before continuing.

Add-Type -AssemblyName "Microsoft.TeamFoundation.Client, Version=12.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a"

$uri = "http://tfs:8080/tfs"
$tfsConfigurationServer = [Microsoft.TeamFoundation.Client.TfsConfigurationServerFactory]::GetConfigurationServer($uri)
$tpcService = $tfsConfigurationServer.GetService("Microsoft.TeamFoundation.Framework.Client.ITeamProjectCollectionService")

$sortedCollections = $tpcService.GetCollections() | Sort-Object -Property Name

foreach($collection in $sortedCollections) {
    Write-Host $collection.Name
}

The only small detail I haven’t mentioned until now is that, once I retrieve the collections, I sort them based on the value of the Name property.

Now that we have a list of all the TPCs, we can construct a URL that will be used to get other services which require a reference to the TPC as a starting point.
To construct the URL, I will declare another variable and concatenate the TFS URL and the collection name, after which I can use the TfsTeamProjectCollectionFactory class to obtain the entry point for all the operations on the given TPC.

As already seen with ITeamProjectCollectionService, we need to obtain a service that will provide us with the necessary data; in this case it is ICommonStructureService3. On it we can invoke a method called ListProjects in order to get all of the projects that are part of that TPC. It is all achieved by the following code.

$collectionUri = $uri + "/" + $collection.Name
$tfsTeamProject = [Microsoft.TeamFoundation.Client.TfsTeamProjectCollectionFactory]::GetTeamProjectCollection($collectionUri)
$cssService = $tfsTeamProject.GetService("Microsoft.TeamFoundation.Server.ICommonStructureService3")   
$sortedProjects = $cssService.ListProjects() | Sort-Object -Property Name

Each value in $sortedProjects will be of type ProjectInfo, within which we find all of the necessary information about that team project.
Among other properties and methods there is, as expected, a property called Name. We will output the names of all the projects in that TPC.

The end result

I will add some formatting to our messages so that the output of the script is easier to read. I will also count the total number of projects. I hope this code does not need to be explained in detail. The complete script follows.

Add-Type -AssemblyName "Microsoft.TeamFoundation.Client, Version=12.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a"

$uri = "http://tfs:8080/tfs"
$tfsConfigurationServer = [Microsoft.TeamFoundation.Client.TfsConfigurationServerFactory]::GetConfigurationServer($uri)
$tpcService = $tfsConfigurationServer.GetService("Microsoft.TeamFoundation.Framework.Client.ITeamProjectCollectionService")

$sortedCollections = $tpcService.GetCollections() | Sort-Object -Property Name
$numberOfProjects = 0

foreach($collection in $sortedCollections) {
    $collectionUri = $uri + "/" + $collection.Name
    $tfsTeamProject = [Microsoft.TeamFoundation.Client.TfsTeamProjectCollectionFactory]::GetTeamProjectCollection($collectionUri)
    $cssService = $tfsTeamProject.GetService("Microsoft.TeamFoundation.Server.ICommonStructureService3")   
    $sortedProjects = $cssService.ListProjects() | Sort-Object -Property Name

    Write-Host $collection.Name "- contains" $sortedProjects.Count "project(s)"

    foreach($project in $sortedProjects)
    {
        $numberOfProjects++
        Write-Host (" - " + $project.Name)
    }
}

Write-Host
Write-Host "Total number of project collections" $sortedCollections.Count
Write-Host "Total number of projects           " $numberOfProjects

As there are plenty of properties available, you are able to check or change a setting on all of the projects in all of the TPCs you have. This can be very handy for maintenance tasks, as we are going to see in a blog post that I’m currently working on.

Happy coding!