Uploading artifacts into Nexus via PowerShell

It may not be the most common task, but you may occasionally need to upload an artifact to a Maven repository in Nexus via PowerShell. To achieve that, we will use the Nexus REST API, which for this task requires a multipart/form-data POST to the /service/local/artifact/maven/content resource. This is not a trivial task: the multipart/form-data standard is notoriously difficult to manage in PowerShell, as I already described in one of my previous posts, PowerShell tips and tricks – Multipart/form-data requests. As you can see, this type of call is necessary in order to upload an artifact to your Nexus server. I will not go into the details of multipart/form-data requests here; if you are interested, check the post I just mentioned.
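For clarity, here is a quick summary of the form fields the content resource accepts, sketched as a PowerShell hashtable. This is illustrative only (the values are examples, and the field list is based on the Sonatype documentation); the cmdlets below build these parts with System.Net.Http rather than sending a hashtable.

# Form fields accepted by /service/local/artifact/maven/content (illustrative values):
$formFields = @{
    r      = "releases"   # target repository id
    g      = "com.test"   # groupId (used when no POM is supplied)
    a      = "project"    # artifactId
    v      = "1.0"        # version
    p      = "jar"        # packaging
    hasPom = "false"      # "true" when a POM file part is attached instead of GAV fields
}
# ...plus one binary part named "file" that carries the artifact itself.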

For this purpose I created two cmdlets that allow you to upload an artifact either by supplying the GAV parameters directly or by passing the GAV definition in a POM file. A couple of simple shared functions support both of these cmdlets.

GAV definition in a POM file

As the heading suggests, this cmdlet will let you upload your artifact and specify the GAV parameters via a POM file.

function Import-ArtifactPOM()
{
	[CmdletBinding()]
	param
	(
		[string][parameter(Mandatory = $true)][ValidateNotNullOrEmpty()]$EndpointUrl,
		[string][parameter(Mandatory = $true)][ValidateNotNullOrEmpty()]$Repository,
		[string][parameter(Mandatory = $true)][ValidateNotNullOrEmpty()]$PomFilePath,
		[string][parameter(Mandatory = $true)][ValidateNotNullOrEmpty()]$PackagePath,
		[System.Management.Automation.PSCredential][parameter(Mandatory = $true)]$Credential
	)
	BEGIN
	{
		if (-not (Test-Path $PackagePath))
		{
			$errorMessage = ("Package file {0} missing or unable to read." -f $PackagePath)
			$exception =  New-Object System.Exception $errorMessage
			$errorRecord = New-Object System.Management.Automation.ErrorRecord $exception, 'ArtifactUpload', ([System.Management.Automation.ErrorCategory]::InvalidArgument), $PackagePath
			$PSCmdlet.ThrowTerminatingError($errorRecord)
		}
		
		if (-not (Test-Path $PomFilePath))
		{
			$errorMessage = ("POM file {0} missing or unable to read." -f $PomFilePath)
			$exception =  New-Object System.Exception $errorMessage
			$errorRecord = New-Object System.Management.Automation.ErrorRecord $exception, 'ArtifactUpload', ([System.Management.Automation.ErrorCategory]::InvalidArgument), $PomFilePath
			$PSCmdlet.ThrowTerminatingError($errorRecord)
		}

		Add-Type -AssemblyName System.Net.Http
	}
	PROCESS
	{
		$repoContent = CreateStringContent "r" $Repository
		$hasPomContent = CreateStringContent "hasPom" "true"
		# -Raw keeps the POM as a single string, preserving its original line breaks
		$pomContent = CreateStringContent "file" (Get-Content $PomFilePath -Raw) ([System.IO.Path]::GetFileName($PomFilePath)) "text/xml"
		$streamContent = CreateStreamContent $PackagePath

		$content = New-Object -TypeName System.Net.Http.MultipartFormDataContent
		$content.Add($repoContent)
		$content.Add($hasPomContent)
		$content.Add($pomContent)
		$content.Add($streamContent)

		$httpClientHandler = GetHttpClientHandler $Credential

		return PostArtifact $EndpointUrl $httpClientHandler $content
	}
	END { }
}

In order to invoke this cmdlet you will need to supply the following parameters:

  • EndpointUrl – Address of your Nexus server
  • Repository – Name of your repository in Nexus
  • PomFilePath – Full, literal path pointing to your POM file
  • PackagePath – Full, literal path pointing to your Artifact
  • Credential – Credentials in the form of PSCredential object

I will create a POM file with the following content:

<project>
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.mycompany.app</groupId>
  <artifactId>my-app</artifactId>
  <version>3</version>
</project>

And invoke my cmdlet in the following way:

$server = "http://nexus.maio.com"
$repoName = "maven"
$pomFile = "C:\Users\majcicam\Desktop\pom.xml"
$package = "C:\Users\majcicam\Desktop\junit-4.12.jar"
$credential = Get-Credential

Import-ArtifactPOM $server $repoName $pomFile $package $credential

If everything is set up correctly, you will first be asked to provide the credentials, and then the upload will start. If it succeeds, you will receive back a string like this:

{"groupId":"com.mycompany.app","artifactId":"my-app","version":"3","packaging":"jar"}

It is a JSON representation of the information about the imported package.
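Since the cmdlet returns this string as-is, you can parse it if you need the values downstream. A minimal sketch, assuming PowerShell 3.0 or later for ConvertFrom-Json:

$result = Import-ArtifactPOM $server $repoName $pomFile $package $credential
$info = $result | ConvertFrom-Json
Write-Host ("Uploaded {0}:{1}:{2} ({3})" -f $info.groupId, $info.artifactId, $info.version, $info.packaging)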

Manually supplying GAV parameters

The other cmdlet is based on a similar principle; however, it doesn’t require a POM file to be passed in. Instead, it lets you provide the GAV values as parameters to the call.

function Import-ArtifactGAV()
{
	[CmdletBinding()]
	param
	(
		[string][parameter(Mandatory = $true)][ValidateNotNullOrEmpty()]$EndpointUrl,
		[string][parameter(Mandatory = $true)][ValidateNotNullOrEmpty()]$Repository,
		[string][parameter(Mandatory = $true)][ValidateNotNullOrEmpty()]$Group,
		[string][parameter(Mandatory = $true)][ValidateNotNullOrEmpty()]$Artifact,
		[string][parameter(Mandatory = $true)][ValidateNotNullOrEmpty()]$Version,
		[string][parameter(Mandatory = $true)][ValidateNotNullOrEmpty()]$Packaging,
		[string][parameter(Mandatory = $true)][ValidateNotNullOrEmpty()]$PackagePath,
		[System.Management.Automation.PSCredential][parameter(Mandatory = $true)]$Credential
	)
	BEGIN
	{
		if (-not (Test-Path $PackagePath))
		{
			$errorMessage = ("Package file {0} missing or unable to read." -f $packagePath)
			$exception =  New-Object System.Exception $errorMessage
			$errorRecord = New-Object System.Management.Automation.ErrorRecord $exception, 'XLDPkgUpload', ([System.Management.Automation.ErrorCategory]::InvalidArgument), $packagePath
			$PSCmdlet.ThrowTerminatingError($errorRecord)
		}

		Add-Type -AssemblyName System.Net.Http
	}
	PROCESS
	{
		$repoContent = CreateStringContent "r" $Repository
		$groupContent = CreateStringContent "g" $Group
		$artifactContent = CreateStringContent "a" $Artifact
		$versionContent = CreateStringContent "v" $Version
		$packagingContent = CreateStringContent "p" $Packaging
		$streamContent = CreateStreamContent $PackagePath

		$content = New-Object -TypeName System.Net.Http.MultipartFormDataContent
		$content.Add($repoContent)
		$content.Add($groupContent)
		$content.Add($artifactContent)
		$content.Add($versionContent)
		$content.Add($packagingContent)
		$content.Add($streamContent)

		$httpClientHandler = GetHttpClientHandler $Credential

		return PostArtifact $EndpointUrl $httpClientHandler $content
	}
	END { }
}

In order to invoke this cmdlet you will need to supply the following parameters:

  • EndpointUrl – Address of your Nexus server
  • Repository – Name of your repository in Nexus
  • Group – Group Id
  • Artifact – Maven artifact Id
  • Version – Artifact version
  • Packaging – Packaging type (e.g. jar, war, ear, etc.)
  • PackagePath – Full, literal path pointing to your Artifact
  • Credential – Credentials in the form of PSCredential object

An example of invocation:

$server = "http://nexus.maio.com"
$repoName = "maven"
$group = "com.test"
$artifact = "project"
$version = "2.4"
$packaging = "jar"
$package = "C:\Users\majcicam\Desktop\junit-4.12.jar"
$credential = Get-Credential

Import-ArtifactGAV $server $repoName $group $artifact $version $packaging $package $credential

If all goes as expected, you will again receive a response confirming the imported values.

{"groupId":"com.test","artifactId":"project","version":"2.4","packaging":"jar"}

Supporting functions

As you can see, my cmdlets rely on a couple of supporting functions. They are essential to this process, so I will analyze them one by one.

function CreateStringContent()
{
	param
	(
		[string][parameter(Mandatory = $true)][ValidateNotNullOrEmpty()]$Name,
		[string][parameter(Mandatory = $true)][ValidateNotNullOrEmpty()]$Value,
		[string]$FileName,
		[string]$MediaTypeHeaderValue
	)
		$contentDispositionHeaderValue = New-Object -TypeName System.Net.Http.Headers.ContentDispositionHeaderValue -ArgumentList @("form-data")
		$contentDispositionHeaderValue.Name = $Name

		if ($FileName)
		{
			$contentDispositionHeaderValue.FileName = $FileName
		}
		
		$content = New-Object -TypeName System.Net.Http.StringContent -ArgumentList @($Value)
		$content.Headers.ContentDisposition = $contentDispositionHeaderValue

		if ($MediaTypeHeaderValue)
		{
			$content.Headers.ContentType = New-Object -TypeName System.Net.Http.Headers.MediaTypeHeaderValue $MediaTypeHeaderValue
		}

		return $content
}

function CreateStreamContent()
{
	param
	(
		[string][parameter(Mandatory = $true)][ValidateNotNullOrEmpty()]$PackagePath
	)
		$packageFileStream = New-Object -TypeName System.IO.FileStream -ArgumentList @($PackagePath, [System.IO.FileMode]::Open)
		
		$contentDispositionHeaderValue = New-Object -TypeName System.Net.Http.Headers.ContentDispositionHeaderValue "form-data"
		$contentDispositionHeaderValue.Name = "file"
		$contentDispositionHeaderValue.FileName = Split-Path $PackagePath -Leaf

		$streamContent = New-Object -TypeName System.Net.Http.StreamContent $packageFileStream
		$streamContent.Headers.ContentDisposition = $contentDispositionHeaderValue
		$streamContent.Headers.ContentType = New-Object -TypeName System.Net.Http.Headers.MediaTypeHeaderValue "application/octet-stream"

		return $streamContent
}

The first two functions create the HTTP content objects. Each content object corresponds to one part of the multipart request, with its own Content-Disposition header. CreateStringContent returns a string value for a given form field, while CreateStreamContent returns a stream that will be consumed later on, carrying the octet-stream representation of our artifact.
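If you want to see what these functions produce, you can inspect a returned content object directly. A quick illustration (the values in the comments are what I expect for this input, not an official reference):

$part = CreateStringContent "g" "com.test"
$part.Headers.ContentDisposition.ToString()   # form-data; name=g
$part.ReadAsStringAsync().Result              # com.test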

The GetHttpClientHandler function is a helper that creates an HTTP client handler containing the credentials to be used for our call.

function GetHttpClientHandler()
{
	param
	(
		[System.Management.Automation.PSCredential][parameter(Mandatory = $true)]$Credential
	)

	$networkCredential = New-Object -TypeName System.Net.NetworkCredential -ArgumentList @($Credential.UserName, $Credential.Password)
	$httpClientHandler = New-Object -TypeName System.Net.Http.HttpClientHandler
	$httpClientHandler.Credentials = $networkCredential

	return $httpClientHandler
}

Last but not least, here is the function that actually makes the POST call to our server.

function PostArtifact()
{
	param
	(
		[string][parameter(Mandatory = $true)]$EndpointUrl,
		[System.Net.Http.HttpClientHandler][parameter(Mandatory = $true)]$Handler,
		[System.Net.Http.HttpContent][parameter(Mandatory = $true)]$Content
	)

	$httpClient = New-Object -TypeName System.Net.Http.HttpClient $Handler

	try
	{
		$response = $httpClient.PostAsync("$EndpointUrl/service/local/artifact/maven/content", $Content).Result

		if (!$response.IsSuccessStatusCode)
		{
			$responseBody = $response.Content.ReadAsStringAsync().Result
			$errorMessage = "Status code {0}. Reason {1}. Server reported the following message: {2}." -f $response.StatusCode, $response.ReasonPhrase, $responseBody

			throw [System.Net.Http.HttpRequestException] $errorMessage
		}

		return $response.Content.ReadAsStringAsync().Result
	}
	catch [Exception]
	{
		$PSCmdlet.ThrowTerminatingError($_)
	}
	finally
	{
		if($null -ne $httpClient)
		{
			$httpClient.Dispose()
		}

		if($null -ne $response)
		{
			$response.Dispose()
		}
	}
}

This is all that is necessary to upload an artifact to our Nexus server. You can also find the scripts shown here on GitHub, in the tfs-build-tasks repository. Further information about programmatically uploading artifacts into Nexus can be found in the post How can I programmatically upload an artifact into Nexus?.

Securing Nexus Repository Manager OSS

Recently, one of my articles was published on the Sonatype website. It is based on my blog post Nexus Repository Manager OSS as NuGet server, which, after reviewing it for the Sonatype blog, somehow didn’t seem complete. Thus, I decided to add an extra chapter about security, which I would also like to share with you here, on my personal blog.

Security

I assume that your computers are part of an Active Directory domain. I will propose here a setup that makes your new Nexus instance talk to your domain controller when it comes to authenticating users. Why? Well, uploading packages to the hosted repository requires an API key, which is assigned on a per-user basis. If you want your users (and services) to be distinguished, and to limit some of them, you need to set up authentication and roles correctly. You could do this by simply using the Nexus internal database; however, as we all know, adding users manually and creating yet another identity is not recommendable from a convenience and maintenance perspective, not to speak of the security implications.

Setting up LDAP connection

In the left-hand main menu panel, expand the Security group and select the LDAP Configuration option. A new tab will open and you will be presented with a long list of parameters:

ldap-configuration

We are going to analyze them one by one, and I will explain the values you need to enter in order to make this connection work.

Protocol
Here you have two choices: Lightweight Directory Access Protocol and Lightweight Directory Access Protocol over SSL. You are probably going to use the plain LDAP protocol rather than the SSL variant.

Hostname
The hostname or IP address of the LDAP server. Simply enter the FQDN of your domain controller.

Port
The port on which the LDAP server is listening. Port 389 is the default port for the ldap protocol, and port 636 is the default for ldaps.
If you have a multi-domain, distributed Active Directory forest, you should connect to Active Directory through port 3268. Connecting directly to port 389 might lead to errors. Port 3268 connects to the Global Catalog server, which exposes the distributed data. The SSL equivalent is port 3269.
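If you are unsure whether the chosen port is reachable from the Nexus machine, you can verify it from PowerShell before touching the Nexus configuration. A minimal check, assuming Windows 8/Server 2012 or later for Test-NetConnection (the host name is a placeholder for your own domain controller):

Test-NetConnection -ComputerName dc01.maio.local -Port 3268   # dc01.maio.local is hypothetical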

Search Base
The search base is the Distinguished Name (DN) to be appended to the LDAP query. The search base usually corresponds to the domain name of an organization. For example, the search base on my test domain server could be dc=maio,dc=local.
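If you are not sure about the exact value, you can ask the domain itself. On a domain-joined machine, this one-liner reads the default naming context from RootDSE:

([ADSI]"LDAP://RootDSE").defaultNamingContext   # e.g. DC=maio,DC=local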

Authentication Method
Simple authentication is what we are going to use in the first instance for our Active Directory authentication. It is not recommended for production deployments that do not use the secure ldaps protocol, as it sends a clear-text password over the network. You can change the authentication method later if you decide to switch to ldaps.

Username
For the LDAP service to establish communication with the AD server, a valid account is necessary. Usually you will create a service account in your AD for this purpose only.

Password
Password for the above mentioned account.

This is the first part of the configuration. Now we need to check whether what we entered is valid and whether the connection can be established.
Click on the Check Authentication button; if the configuration is valid, you should get the following message:

authentication-test

Base DN
You should indicate in which organizational unit Nexus should search for user accounts. By default it may equal ‘cn=users’; however, if you store your user accounts in a different OU, specify the full path. For example, if you created an OU at the root level called MyBusiness, and under it an OU called Users, you need to specify the following value as the base DN: ‘OU=Users,OU=MyBusiness’.

User Subtree
This needs to be checked only if you have other OUs, under the one specified in Base DN, in which users are organized. If that is not the case, you can leave it unchecked.

Object Class
For AD it needs to be set to ‘user’.

User ID Attribute
In the MS AD implementation, this value equals sAMAccountName.

Real Name Attribute
You can set this to the ‘givenName’ attribute, or simply to ‘cn’ if givenName is not populated during user creation.

E-Mail Attribute
This attribute should map to ‘mail’.

Group Type
Group Type needs to be set to Dynamic Groups.

Member Of Attribute
In the case of MS AD, this equals ‘memberOf’.
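Before saving, you may want to double-check these attribute values directly against AD. The following sketch queries a sample user from a domain-joined machine (jdoe is a hypothetical account name) and prints the attributes Nexus will map:

$searcher = New-Object System.DirectoryServices.DirectorySearcher
$searcher.Filter = "(&(objectClass=user)(sAMAccountName=jdoe))"
[void]$searcher.PropertiesToLoad.AddRange(@("sAMAccountName", "givenName", "mail", "memberOf"))
$account = $searcher.FindOne()
$account.Properties   # compare these values with the mappings entered in Nexus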

Now that we have set all of the relevant values, we can ask Nexus OSS to give us a preview of the retrieved accounts and mappings, to make sure everything is set up correctly. Click on the Check User Mapping button and you should see a preview of the retrieved values and how they are mapped into Nexus OSS. If everything looks fine, save your configuration.

Once the configuration is persisted, the last thing to do is to let Nexus know that you would like to use this new setup for the authentication. This is done by enabling the LDAP Authentication Realm.

In the left-hand main menu panel, from the Administration group, choose Server. In the Nexus tab that opens, go to the security settings:

selected-realms

You will find the OSS LDAP Authentication Realm in the group shown on the right side, called Available Realms. Move it to Selected Realms (as shown in the picture above) and save these settings.

Configuring roles

Now that we are able to authenticate through LDAP (AD), we need to set who can do what. This is where roles come in handy. I usually create security groups and then associate them with roles, as this reduces the operation of granting access rights to merely assigning a user to a group.
You can still associate roles with users directly in more or less the same way; however, I’m going to show you my favorite approach.

From the left-hand main menu panel, in the Security group, select Roles. A new tab showing a list of roles will appear.

external-role-mapping

Choose Add and then the External Role Mapping option. You will then be presented with the following dialog:

map-external-role

As you can see from this screenshot, choose LDAP as the Realm and the role (an AD security group, to be precise) you wish to map. Be aware that in some cases not all of the groups will be listed. It is a well-known bug. If this is your case, there is a workaround: add a Nexus Role instead of the External Role Mapping and name the role precisely as your missing group is named. This should do the trick until the issue gets solved.

Once the mapping is added, you will need to assign roles (the actual rights) to the selected security group.

external-role-mapping

On the Role/Privilege Management bar, choose the Add button and select the following roles:

  • Nexus API-Key Access
  • Nexus Deployment Role
  • Repo: All NuGet Repositories (Full Control)
  • Artifact Upload

This set of rights will enable whoever is part of that group to log in to Nexus OSS, manually upload packages, get the API key, and generally control all NuGet repositories. In other words, a sort of developer role when it comes to NuGet.

When it comes to the administrator role, it is sufficient to assign only the Nexus Administrator Role.
After selecting the roles, do not forget to click the Save button to persist this configuration.

These two basic profiles should be sufficient to start using Nexus. If further profiles are necessary, check the managing roles guide.

You can now test the whole security setup. Make sure your AD user account is part of the administrative security group that you just mapped to the Nexus Administrator Role. Log out and log back in with your domain account. If you succeed in logging in, congratulations: LDAP is set up correctly.

XL Deploy and TFS 2015 – Building, Importing and Deploying Packages

Recently XebiaLabs released a build task for Microsoft Visual Studio Team Services (VSTS) and Team Foundation Server (TFS) 2015 that facilitates the integration of the vNext build pipeline with their product, XL Deploy. The new build task is capable of creating a deployment archive (DAR package), importing it to an instance of XL Deploy, and eventually, if requested, triggering the deployment of the newly added package. It also contains some other interesting options that will ease the process of versioning the deployment package.

In the following post I will show you how to build an ASP.NET MVC application, create the deployment package, and deploy it via XL Deploy. Let’s start.

Prerequisites

Before we start, make sure that you have all of the necessary pieces in place. First of all, your build agent needs to be capable of building your application. In the case of an on-premises TFS 2015, this means that you need to have Visual Studio installed on your build server.

You also need to install the XL Deploy build task; if you haven’t done so already, refer to Install the XL Deploy build task. As a prerequisite for the XL Deploy build task, your XL Deploy instance needs to be of version 5.0.0 or later; earlier versions are not supported.

Once you ensure that these criteria are met, you can continue reading.

Building your web application

First things first. Create a new empty build definition in your project. If you are not familiar with how to do this, refer to the MSDN article Create a build definition.

Set the repository mappings accordingly before proceeding any further; again, refer to the MSDN article Specify the repository for more information about doing so. For this example I am going to use TFVC; however, all of the techniques I am going to describe also work with a Git repository.

Now you can specify the necessary build steps to compile and publish the web application.

Add a Visual Studio Build build task.

add-task-vsb

Once added, you need to set the necessary parameters. The first mandatory setting is the path to the solution file in source control. Click the button with three dots and locate your solution file in the version control tree. Once found and confirmed, the Solution box will be populated with the correct path. In my case, that is $/XL Deploy Showcase/AwesomeWebApp/AwesomeWebApp.sln. This is the solution that will be built in the build definition.

Next you need to use the MSBuild Arguments setting to indicate that all of the built items should be delivered in the build staging directory. This is an important step because when the XL Deploy build task starts creating the deployment archive (DAR package), it will search for items to include in it, starting from this directory (considering the build staging directory as root). That’s why I am going to set MSBuild Arguments to /p:OutDir=$(build.stagingDirectory). This is a common way for MSBuild to pass a parameter; I’m using a variable that will resolve in a specific path when the build is run. A list of available variables and their usage is explained in the MSDN article Use variables. For now, I will not change any other setting. Make sure that Restore NuGet Packages is selected and that the correct version of Visual Studio is listed.
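To make the effect of the OutDir argument concrete, here is roughly what the build step ends up running on the agent. This is only a sketch; the staging path is illustrative, since the real value of $(build.stagingDirectory) is resolved by the agent at run time:

# Rough local equivalent of the Visual Studio Build step (run where msbuild is on the PATH):
$staging = "C:\agent\_work\1\staging"   # stand-in for $(build.stagingDirectory)
msbuild .\AwesomeWebApp.sln /p:OutDir=$staging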

vsbuild-task

Save the build definition. Enter a relevant name and optionally set the comment.

save-definition

Now, you can test the build. Queue a new build and check the results. If everything went well, you will get a ‘green’ build.

first-build

Now that you are sure the project builds correctly, we can continue with our next task: deploying the application via XL Deploy.

You can find more information about build steps on MSDN at Specify your build steps.

XL Deploy build task

In order to continue, you need to have a valid XL Deploy manifest file checked in to your source control. You can read about creating a manifest file at XL Deploy manifest format. I checked in the following manifest:

<?xml version="1.0" encoding="utf-8"?>
<udm.DeploymentPackage version="1.0.0.0" application="AwesomeWebApp">
  <deployables>
	<iis.WebContent name="website-files" file="_PublishedWebsites\AwesomeWebApp">
	  <targetPath>C:\inetpub\AwesomeWebApp</targetPath>
	</iis.WebContent>
	<iis.ApplicationPoolSpec name="AwesomeWebApp">
	  <managedRuntimeVersion>v4.0</managedRuntimeVersion>
	</iis.ApplicationPoolSpec>
	<iis.WebsiteSpec name="AwesomeWebApp-website">
	  <websiteName>AwesomeWebApp</websiteName>
	  <physicalPath>C:\inetpub\AwesomeWebApp</physicalPath>
	  <applicationPoolName>AwesomeWebApp</applicationPoolName>
	  <bindings>
	    <iis.WebsiteBindingSpec name="AwesomeWebApp/8088">
	      <port>8088</port>
	    </iis.WebsiteBindingSpec>
	  </bindings>
	</iis.WebsiteSpec>
  </deployables>
</udm.DeploymentPackage>

As you can see, I specified the required IIS application pool, a web site, and, most importantly, the WebContent element, which indicates the files that represent the web site.

An attribute called file is present on the deployable element iis.WebContent. This attribute is very important, as it guides the build task in creating the deployment archive. Its value is a relative path indicating the folder that needs to be included in the DAR package. This relative path assumes that the root folder is the build staging directory; the task will search that directory for the folders specified here, and if they are not present, packaging will fail. Now you understand why I delivered the build output to the staging folder. Although this behavior can be overridden in the advanced settings of the XL Deploy build task, I would advise you not to change it unless necessary, and not until you become confident and sufficiently experienced with TFS 2015 builds.

The manifest file also indicates that I expect MSBuild to package the web application under a default folder called _PublishedWebsites. This is standard behavior for MSBuild.
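A quick way to confirm that the layout matches what the manifest expects is to look at the staging folder after a build. A small sanity check, using the same illustrative staging path as before:

$staging = "C:\agent\_work\1\staging"
Test-Path (Join-Path $staging "_PublishedWebsites\AwesomeWebApp")   # should return True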

After the manifest file is correctly checked in, you can continue and edit the recently created build definition by adding another build step. This time, go to the Deploy group and select XL Deploy.

add-task-xld

The first mandatory parameter that you need to set is the path to the manifest file. As you did previously for the solution file, click the button with three dots and select the manifest file in source control. In my case, that will be $/XL Deploy Showcase/AwesomeWebApp/deployit-manifest.xml.

Secondly, you need to set the endpoint, which is a definition of the URL and credentials for the XL Deploy server that you intend to use in this deployment. If no XL Deploy endpoints are defined, click Manage and define one. For information about doing so, refer to Add an endpoint in Team Foundation Server 2015. After you are done, click Refresh and your newly added endpoint will appear.

Versioning the package

As you can see, there is a Version option available on the XL Deploy build task. If selected, the deployment package version will be set to the build number format during the process. The build number format is a value that you can set in the General tab of the build definition.

When composing the build number, you can choose between constants and special tokens that are available for that purpose. In Specify general build definition settings, you can read about the available tokens and ways of setting the build number format.

If you are using a different pattern for versioning your deployment package and your build number format has a different use, you can override this behavior by setting a custom value in Version value override parameter under Advanced options.

For this example, I will select this option and set the build number format to a simple ‘1.1.1$(Rev:.r)’, which produces versions like 1.1.1.1, 1.1.1.2, and so on, with the revision incremented for each build.

Automated deployment

You have the option to automatically deploy the newly imported package to the environment of your choice. If you select the Deploy checkbox on the XL Deploy build task, a new field called Target Environment will be available. Here, you need to specify the fully qualified name of the environment defined in XL Deploy; in my case, it is called Test. You can also allow the deployment to be rolled back in case of failure by selecting Rollback on deployment failure.

Note that this requires that you have correctly defined the environment in XL Deploy, with the relevant infrastructure and components. For more information, refer to Create an environment in XL Deploy.

Ready to roll

After you set all of the parameters and save the build definition, you are ready to trigger the new build. Make sure that you set all of the necessary parameters as shown in the picture.

xld-task

You can now trigger the new build. In the build console you should now see the information relevant to the process of creating and versioning the package.

final-build

In case of problems, you can add a new build definition variable named system.debug and set its value to true. Then, in the build log, you will see more detailed messages about all of the actions performed by the tasks (note that these are not shown in the console). This should be sufficient to diagnose the issue.

Conclusions

This was a small practical example of how to take advantage of the new XL Deploy build task. I haven’t covered all of the available options; you can read more about them at Introduction to the Team Foundation Server 2015 plugin. However, this example is sufficient to get started. You can combine other build tasks in your build definition with the XL Deploy build task, but it is important to invoke the XL Deploy build task after all of the elements that will compose your deployment package are available.