Managing VSTS/TFS Release Definition Variables from PowerShell

A couple of days ago I was trying to provision my Release Definition variables in my VSTS/TFS projects via PowerShell. As this turned out not to be a trivial web request, and as some of the calls I discovered are not yet documented, I decided to share my findings with you.

In the following lines I’ll show you a couple of cmdlets that will allow you to manipulate all of the variables in your Release Definition: those at the definition level, environment-specific ones, and also variable groups.

For the purpose of adding Release Definition variables, environment-level variables, and linking Variable Groups, I wrote the following cmdlet:

Don’t be scared by the number of parameters or the apparent complexity of the cmdlet. I’ll quickly explain the parameters, the usage, and the expected result.

Let’s start with some whys. As you can see, in the BEGIN block of my cmdlet (which is triggered once per pipeline invocation) I retrieve the given release definition, in the PROCESS block I add the desired variables (hopefully from the pipeline), and then in the END block I persist all of the changes.

If you are unfamiliar with the Windows PowerShell cmdlet lifecycle, please consult the following article: Windows PowerShell: The Advanced Function Lifecycle.

This is intentional, as I want to make a single call to the API for all of the added variables. This way, the history of the release definition will contain a single entry for all of the variables we added, no matter how many there are. Otherwise, we would persist the changes for each variable separately and our history would get messy.

If structured differently, we would see one history entry per variable added. This obviously applies only if you are adding multiple variables in one go.
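To make the above more concrete, the skeleton of the cmdlet can be sketched roughly as follows. This is an illustrative sketch, not the exact implementation: parameter names, the shape of the definition JSON, and `$definitionUri` (the release definition’s REST endpoint) are assumptions.

```powershell
function Add-EnvironmentVariable
{
    [CmdletBinding()]
    param
    (
        [Parameter(Mandatory, ValueFromPipelineByPropertyName)]
        [string]$Name,
        [Parameter(Mandatory, ValueFromPipelineByPropertyName)]
        [string]$Value,
        [Parameter(ValueFromPipelineByPropertyName)]
        [string]$EnvironmentName,
        [Parameter(ValueFromPipelineByPropertyName)]
        [bool]$Secret,
        [Parameter(Mandatory)]
        [int]$DefinitionId
    )
    BEGIN
    {
        # Runs once: retrieve the release definition.
        # $definitionUri stands for the definition's REST endpoint (details omitted here).
        $definition = Invoke-RestMethod -Uri $definitionUri -UseDefaultCredentials
    }
    PROCESS
    {
        # Runs once per pipeline item: add the variable to the in-memory definition.
        $variable = @{ value = $Value; isSecret = [bool]$Secret }

        if ($EnvironmentName)
        {
            $environment = $definition.environments | Where-Object name -eq $EnvironmentName
            $environment.variables | Add-Member -MemberType NoteProperty -Name $Name -Value $variable
        }
        else
        {
            $definition.variables | Add-Member -MemberType NoteProperty -Name $Name -Value $variable
        }
    }
    END
    {
        # Runs once: persist all changes with a single PUT,
        # so the definition history shows a single entry.
        Invoke-RestMethod -Uri $definitionUri -Method Put `
                          -Body ($definition | ConvertTo-Json -Depth 10) `
                          -ContentType 'application/json' -UseDefaultCredentials
    }
}
```

The important point is the placement of the calls: retrieval in BEGIN, accumulation in PROCESS, a single persist in END.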

The following would be a simple invocation to add a single variable into one of the environments defined in a release template:
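A sketch of such an invocation could look like this (the parameter names here are assumptions for illustration; check the cmdlet’s actual parameter set):

```powershell
Add-EnvironmentVariable -Name 'Mario2' -Value '1' -EnvironmentName 'DEV' `
                        -DefinitionId 23 -VariableGroups 25
```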

The above command will add a variable named Mario2 with a value of 1 in the DEV environment, defined in the definition with ID 23. It will also reference the variable group that has ID 25.

The following would be the result:

In case you would like to add multiple variables in one go, create an array of PSCustomObject items carrying the variable properties:
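As a sketch, assuming the property names map to the cmdlet’s pipeline-bound parameters (the names below are illustrative), such an array could look like this:

```powershell
$variables = @(
    [PSCustomObject]@{ Name = 'var1'; Value = 'value1'; EnvironmentName = 'DEV' }
    [PSCustomObject]@{ Name = 'var2'; Value = 'value2'; EnvironmentName = 'DEV' }
    [PSCustomObject]@{ Name = 'var3'; Value = 'value3' }
    [PSCustomObject]@{ Name = 'var4'; Value = 'P@ssw0rd'; Secret = $true }
)

# Pipe the whole array in, so a single call to the REST API is made.
$variables | Add-EnvironmentVariable -DefinitionId 23
```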

This will add two variables to the environment called DEV in your Release Definition and two more variables at the Release Definition level. As you can guess, if we omit the environment name, the variables are added at the Release Definition level. The last variable, var4, is also marked as secret, meaning that once added it will not be visible to the user. Also in this case we will have only a single entry in the change history, as a single call to the REST API is made.

Other options you can specify are:

  • Reset – When this switch is set, the variables that are present on the Release Definition but are not passed in the invocation will be removed.
  • Comment – In case you want a custom message to be shown in the history for this change, you can specify it here.
  • VariableGroups – An integer array indicating the IDs of the variable groups you wish to link to the Release Definition.
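Putting these options together, a call that replaces the full variable set and leaves a custom history message could be sketched like this (again, parameter names are assumptions; `$variables` is an array of variable objects prepared as described above):

```powershell
$variables | Add-EnvironmentVariable -DefinitionId 23 -Reset `
             -Comment 'Variables provisioned from PowerShell' `
             -VariableGroups 25, 31
```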

In case you are using variable groups, you can create those via the following cmdlet:

This cmdlet will look for the given group and, if it exists, update it with the values you pass in. In case the variable group (matched by name) doesn’t exist and the -Force switch is specified, it will create a new group. The working principle is the same as for the Add-EnvironmentVariable cmdlet. At the end, it will return the Variable Group ID, which you can later pass to the Add-EnvironmentVariable cmdlet to reference the group.

The following is an example of invocation:
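A sketch of how the two cmdlets could work together (the parameter names and values here are purely illustrative assumptions):

```powershell
# Create or update a variable group; -Force creates it when it doesn't exist.
$groupId = Add-VariableGroupVariable -VariableGroupName 'SharedSettings' `
                                     -Name 'ApiUrl' -Value 'https://example.com/api' `
                                     -Force

# The returned ID can then be linked to a release definition.
Add-EnvironmentVariable -Name 'Mario2' -Value '1' -EnvironmentName 'DEV' `
                        -DefinitionId 23 -VariableGroups $groupId
```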

That’s all folks! You now have two new cmdlets that will allow you to automate the management of your Release Definition variables. Use them wisely 🙂

Happy coding!

P.S.
A thank you goes to Ted van Haalen who, based on my input, actually wrote and tested the Add-VariableGroupVariable cmdlet (as you may have already noticed from the different coding style).

Using Windows Machine File Copy (WinRM) VSTS extension

The implementation of the original Windows Machine File Copy task is based on the `net use` command and Robocopy. These rely on SMB (Server Message Block) and the NetBIOS protocol on port 139 or 445. Although by default this should always be supported on intranets, it may be that, due to network restrictions or security policies, it is not possible to set up such a connection, or you are running a copy to a machine that is outside your local network. Recently I faced an issue copying files with the Windows Machine File Copy task due to SMB restrictions. This pushed me to recreate the same task as the original Windows Machine File Copy task, however with the transfer based on the WinRM protocol. I shared my work in the form of an extension on the Visual Studio Marketplace. You can find my extension here: WinRM File Copy.

The sources are available on GitHub in the repository called mmajcica/win-rm-file-copy, while the original implementation is part of the Microsoft/vsts-tasks repository.

In this post I will not go into the implementation details; I’ll just illustrate the usage of the task itself.

Usage-wise, there are no differences from the original Microsoft task, and that was also my main goal. Here is a screenshot of the task:

As you can see, all of the parameters are almost the same as for the original task.

Requirements-wise, PowerShell v5 is required both on the build server and on the destination machine. That is the only requirement, taking for granted that WinRM is correctly set up.
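The PowerShell v5 requirement is no accident: v5 introduced the -ToSession parameter on Copy-Item, which lets you copy files over a WinRM remoting session instead of SMB. Conceptually, the task does something along these lines (a sketch of the idea with illustrative machine names and paths, not the task’s actual implementation):

```powershell
# Build a credential for the target machine (illustrative values).
$password   = ConvertTo-SecureString 'P@ssw0rd' -AsPlainText -Force
$credential = New-Object System.Management.Automation.PSCredential ('FABRIKAM\admin', $password)

# Open a WinRM session to the destination machine (add -UseSSL when 'Use SSL' is flagged).
$session = New-PSSession -ComputerName 'dbserver.fabrikam.com' -Credential $credential

# Copy the files over the WinRM channel instead of SMB.
Copy-Item -Path 'C:\agent\_work\1\a\Something' -Destination 'C:\FabrikamFibre\Web' `
          -ToSession $session -Recurse -Force

Remove-PSSession $session
```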

Let’s quickly see how to set up a file copy. As with the Microsoft task, you need to specify the following parameters:

  • Source: The source of the files. As described above, using pre-defined system variables like $(Build.Repository.LocalPath) makes it easy to specify the location of the build on the Build Automation Agent machine. The variables resolve to the working folder on the agent machine when the task is run on it. Wildcards like **/*.zip are not supported. You are probably going to copy something from the artifacts folder that was generated in previous steps of your build/release, for example $(System.ArtifactsDirectory)\Something.
  • Machines: A comma-separated list of machine FQDNs/IP addresses, optionally including the port. For example dbserver.fabrikam.com, dbserver_int.fabrikam.com:5988, 192.168.34:5989.
  • Admin Login: Domain or local administrator of the target host, in the format <Domain>\<Admin User>.
  • Password: Password for the admin login. It can accept a variable defined in build/release definitions, e.g. ‘$(passwordVariable)’. You may mark the variable type as ‘secret’ to secure it.
  • Destination Folder: The folder in the Windows machines where the files will be copied to. An example of the destination folder is C:\FabrikamFibre\Web.
  • Use SSL: Flag this setting in case you are using secure WinRM, i.e. HTTPS as transport.
  • Clean Target: Checking this option will clean the destination folder prior to copying the files to it.
  • Copy Files in Parallel: Checking this option will copy files to all the target machines in parallel, which can speed up the copying process.

There is not much more to say. If you need to copy a file or a folder from your build agent to a target folder on a remote machine, using WinRM as the transfer medium, this is the way to go.

Happy copying!

Chrome’s badidea

In case you misunderstood the title: NO, Chrome is not a bad idea. It has been my browser of choice for almost 10 years. It’s a great choice. However, because of its security restrictions, you may bump into pages like this:

In case of an invalid certificate or some other similar issue, your browser will refuse to load the page of your choice. This is often a smart choice; however, if you really know what you are doing, there is an easy way to bypass it. Until now, in certain cases, you could expand the details view and continue on to the site. In certain scenarios and for certain versions, that is not possible anymore. If you google this error message, you will find a bunch of suggestions that may or may not work, although I find all of them cumbersome.
There is, however, an easy trick, and I’ll write it down here as I continuously forget it and need to poke a friend of mine, Damir Varga, who initially introduced me to it (and apparently has a better memory than I do).

Now back to the trick.

In case you are presented with the above situation, click anywhere on the page and type ‘badidea’ on your keyboard. That’s all, it’s that simple. Your page will now load.

Top trick!

Happy browsing!