By Adam Bertram
Building an automated Azure DevOps pipeline can be hard enough without having to remember simple strings and numbers to be used throughout a pipeline.
Using variables in Azure Pipelines, you can define a string or number once and reference it throughout the pipeline.
Just like variables in programming languages, pipeline variables organize elements and allow a developer to define a variable once and reference that variable over and over again.
One type of variable you’ll come across in a pipeline is an output variable. An output variable is a specific kind of pipeline variable that is created by a task. The task assigns a value to a variable, which then makes it available to use across the pipeline.
In this article, you’re going to learn how to create output variables and how to reference them across jobs, stages and even entire pipelines!
All examples in this article will be using the Azure DevOps YAML multi-stage user experience. Be sure you have this feature enabled.
Whenever you begin learning about pipeline variables, you’ll probably come across general pipeline variables. These variables are defined via the variables section in a pipeline or by a script.
For example, below you can see how to create a variable called blog and assign it the value nigelfrank.

variables:
- name: blog
  value: "nigelfrank"
Once defined, the pipeline variable can be referenced in other places inside the pipeline.
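For instance, a step can consume the variable with macro syntax. The script step below is purely illustrative:

```yaml
steps:
# Macro syntax $(blog) is replaced with the variable's value before the script runs.
- script: echo "The blog is $(blog)"
```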
Output variables essentially are pipeline variables. They are created by the pipeline and are referenced by other tasks in the pipeline. The big difference is how they are created.
Unlike a general pipeline variable, an output variable is defined by a task, and its value is generated by that task’s output. Output variables are dynamic and represent the result of a particular task. They are not statically defined as above; you will never know an output variable’s value until a task in the pipeline runs.
There are two different ways to create output variables – by building support for the variable in the task itself or setting the value ad-hoc in a script.
One way to define an output variable is via a task. Output variable support is built into a task. Since this article is not going to cover how to build custom tasks, we’ll use an existing task with output variable support already added.
One common task that has an option to create an output variable is the Azure Resource Management Template Deployment task. This task has an attribute called
deploymentOutputs that allows you to define the output variable returned. The value of
deploymentOutputs, in this case, will be the JSON returned from the ARM deployment.
The YAML task below creates an output variable called
armDeployment. That variable’s value can now be referenced within the pipeline.
- task: AzureResourceManagerTemplateDeployment@3
  inputs:
    # [other attributes here]
    deploymentOutputs: armDeployment
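Because the variable holds the deployment’s outputs as a JSON string, a follow-up script typically parses it before use. The sketch below assumes the ARM template defines an output named storageAccountName; that name, like the variable names, is illustrative only:

```yaml
# Parse the deployment outputs JSON and promote one value to its own pipeline variable.
# Assumes the ARM template has an output named "storageAccountName" (hypothetical).
- powershell: |
    $outputs = '$(armDeployment)' | ConvertFrom-Json
    Write-Host "##vso[task.setvariable variable=storageAccountName]$($outputs.storageAccountName.value)"
```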
Another way to create output variables is via a script. You can run both PowerShell and Bash scripts in tasks. Using a concept called logging commands, you can define output variables within scripts.
Logging commands can create general pipeline and output variables. The syntax is similar.
Below you can see an example of setting a variable called
foo to a value of
bar in a PowerShell task.
- powershell: |
    Write-Host '##vso[task.setvariable variable=foo]bar'
To make this variable an output variable, add the string
;isOutput=true to the end like below.
- powershell: |
    Write-Host '##vso[task.setvariable variable=foo;isOutput=true]bar'
Once the output variables are defined, you can then reference them just like any other pipeline variable. One of the most common methods is macro syntax. A general variable called foo, for example, is referenced as $(foo). An output variable, on the other hand, is referenced in the form $(<originating_task_name>.<variable_name>), or, as an environment variable, <ORIGINATING_TASK_NAME>_<VARIABLE_NAME>.
You can reference the output variable anywhere in the same or child scope it was defined. Perhaps you have a task that creates an output variable called out. The below example is defining a task with the name
SomeTask that natively creates an output variable called out.
In a task within that same job, you can reference that variable using $(SomeTask.out):

steps:
- task: MyTask@1
  name: SomeTask
- script: echo $(SomeTask.out)
If you need to reference an output variable defined in a different scope, you can do so but the task will be a bit more complicated.
You cannot natively reference output variables across jobs with macro syntax; however, it is possible. To do this, you have to rely on job dependencies. A dependency marks a certain job as being required to run before the job that the dependency is assigned to.
To demonstrate, perhaps you have a pipeline with two jobs called Storage and IIS. In the Storage job, you have a script task called psJob.

Important: You must assign a name to the task that is assigning the output variable!
Inside of the script task, it creates an output variable called
psJobVariable as shown below.
- job: Storage
  steps:
  - task: PowerShell@2
    name: "psJob"
    inputs:
      targetType: 'inline'
      script: |
        echo "##vso[task.setvariable variable=psJobVariable;isOutput=true]some_value"
You then have another job called IIS where you’d like to reference the
psJobVariable value from. To do that, you must first set the dependsOn attribute at the job level pointing to the
Storage job. You should then define a separate variable in the
IIS job with a value referencing the other job variable in the form
$[ dependencies.<job_name>.outputs['<task_name>.<variable_name>'] ].
Once the variable is defined in the other job, you can then use it just like any other pipeline variable.
- job: IIS
  dependsOn:
  - Storage
  variables:
    var: $[ dependencies.Storage.outputs['psJob.psJobVariable'] ]
  steps:
  - powershell: |
      Write-Host "$env:var"
Although it’s not possible to share variables across stages, there is a workaround. You can make it happen by saving the variable’s value to disk, publishing the file artifact to the pipeline and then reading the file in the other stage.
Perhaps you have a pipeline with two stages; firststage and secondstage, with a single job in each. In the first stage below, you’d create a temporary folder on the pipeline agent. In this example, the folder is called variables. Once the folder is created, you’d then create a text file inside of that folder containing the value of the variable.

Once the file with the variable value exists, you’d then use the Publish Build Artifacts task to send all files in the variables folder to the pipeline, assigning it the name variables.
stages:
- stage: firststage
  jobs:
  - job: firstjob
    steps:
    - bash: |
        FOO="some value"
        mkdir -p $(Pipeline.Workspace)/variables
        echo "$FOO" > $(Pipeline.Workspace)/variables/FOO
    - publish: $(Pipeline.Workspace)/variables
      artifact: variables
In the second stage, you’d then download the build artifacts, read the variable from the saved file and assign it to another variable. You can see a great example of this below.
- stage: secondstage
  jobs:
  - job: secondjob
    steps:
    - download: current
      artifact: variables
    - bash: |
        FOO=$(cat $(Pipeline.Workspace)/variables/FOO)
        echo "##vso[task.setvariable variable=FOO]$FOO"
    - bash: |
        echo "$(FOO)"
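Stripped of the pipeline around it, the round trip the two stages perform is just writing a value to a file and reading it back. A minimal local sketch, using a temporary directory as a stand-in for $(Pipeline.Workspace):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Stand-in for $(Pipeline.Workspace); in a real pipeline the agent provides this path.
WORKSPACE=$(mktemp -d)

# "First stage": write the variable's value to a file named after the variable.
FOO="some value"
mkdir -p "$WORKSPACE/variables"
printf '%s' "$FOO" > "$WORKSPACE/variables/FOO"

# "Second stage": read the file back to re-create the variable.
FOO_RESTORED=$(cat "$WORKSPACE/variables/FOO")
echo "$FOO_RESTORED"
```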
If you have multiple pipelines and need to share variables across them, you can do so using variable groups. Variable groups are a handy feature allowing you to share variables across pipelines and link secrets to Azure Key Vault.
Once you’ve created a variable group, you can then reference it within all of your pipelines by specifying the group attribute under
variables like below.
variables:
- group: myVarGroup
Once the variable group is referenced in the pipeline, you can expand the variables just as if you had defined them within the pipeline itself. To expand variables from variable groups, use the same technique you learned in the Expanding Output Variables section.
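Assuming the group myVarGroup contains a variable named myGroupVar (a hypothetical name), the expansion looks the same as for any other pipeline variable:

```yaml
variables:
- group: myVarGroup

steps:
# $(myGroupVar) expands to the value stored in the linked variable group.
- script: echo "$(myGroupVar)"
```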
If you need to pass simple strings and integers from one task to another, output variables are the way to go. Supported in many built-in tasks and available in custom tasks, output variables help a pipeline author pass information across jobs, stages and even pipelines.
No Azure pipeline is the same. Each pipeline has unique requirements and design is an important factor. Knowing how output variables work and what they’re capable of provides you with one more trick up your sleeve to build a manageable and efficient pipeline.
Adam Bertram is a 20-year veteran of IT and an experienced online business professional. He’s a consultant, Microsoft MVP, blogger, trainer, author and content marketing writer for multiple technology companies. Catch up on Adam’s articles at adamtheautomator.com, connect on LinkedIn, or follow him on Twitter at @adbertram.