Azure DevOps for the Ops consultant – Part 5

Hi all,
In this part of the series, I want to focus on the build pipeline. In short, the build pipeline will provide the rest of your pipelines with artifacts. Artifacts can be files, folders, or compiled code, and in my case here, it will be PowerShell files. As you will see in the series, I also use Pester tests to verify my code lives up to the demands I have.

Let’s begin.

First, I click on “Pipelines”, which automatically takes me to the build pipeline section that I want to use. I click on “Create Pipeline” to proceed.

As you can see below, the code for the pipeline can be stored in several different places. My code is stored in Azure Repos Git, so I choose that option to proceed.


I will select my repo to proceed.


Now I need to create the YAML file that controls my build pipeline. Below you can see the code I am using; it is also available in the code snippet below the picture.


# Starter pipeline

# Start with a minimal pipeline that you can customize to build and deploy your code.
# Add steps that build, run tests, deploy, and more:
# https://aka.ms/yaml

trigger:
- master

pool:
  vmImage: 'windows-2019'

steps:
- task: PowerShell@2
  inputs:
    filePath: 'BuildPipeline.ps1'
    pwsh: false

- task: CopyFiles@2
  displayName: 'Copy files'
  inputs:
    TargetFolder: $(Build.ArtifactStagingDirectory)

- task: PublishPipelineArtifact@1
  displayName: 'Publish Pipeline Artifact'
  inputs:
    targetPath: $(Build.ArtifactStagingDirectory)

- task: PublishTestResults@2
  inputs:
    testResultsFormat: 'NUnit'
    testResultsFiles: '**/TEST-*.xml'
    failTaskOnFailedTests: true

When the pipeline code is complete, I can click on “Run” to save and execute the pipeline.
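The contents of BuildPipeline.ps1 are not shown in this post; the file name is the only thing taken from the pipeline above, and everything else below is an assumption. A minimal sketch of a script that runs Pester and writes NUnit results matching the ‘**/TEST-*.xml’ pattern could look like this:

```powershell
# Hypothetical sketch of BuildPipeline.ps1 - the real script is not shown in the post.
# Runs every Pester test in the repository and writes the results as NUnit XML,
# which is what the PublishTestResults task picks up via '**/TEST-*.xml'.
$resultsFile = Join-Path $PSScriptRoot 'TEST-Pester.xml'

# Pester v4 syntax; -EnableExit makes the script exit with a non-zero code
# when tests fail, so the PowerShell@2 task fails the build as well.
Invoke-Pester -Path $PSScriptRoot `
    -OutputFormat 'NUnitXml' `
    -OutputFile $resultsFile `
    -EnableExit
```

The TEST- prefix on the output file is what makes it match the ‘**/TEST-*.xml’ pattern in the PublishTestResults task.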

When the pipeline has run for the first time, you should see a screen as illustrated below. You can see that my pipeline ran with success and that it took 54 seconds to run. You can also see that 100% of the tests were successful.


I will click where it says 100% to see the actual tests that ran. As you can see below, it went through 48 tests and passed them all. I am testing my modules and config files for different things, among them whether my module exists and whether the JSON template files contain the code I expect them to.
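To give an idea of what such tests look like, here is a hedged sketch in Pester v4 syntax. The module and file names are assumptions for illustration, not the ones from my repository:

```powershell
# Hypothetical Pester (v4) tests of the kind described above.
# 'MyOpsModule.psm1' and 'template.json' are assumed names, not from the post.
Describe 'Module files' {
    It 'MyOpsModule.psm1 exists' {
        Join-Path $PSScriptRoot 'MyOpsModule.psm1' | Should -Exist
    }
}

Describe 'JSON templates' {
    It 'template.json is valid JSON and defines at least one resource' {
        $template = Get-Content (Join-Path $PSScriptRoot 'template.json') -Raw |
            ConvertFrom-Json
        $template.resources | Should -Not -BeNullOrEmpty
    }
}
```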


The testing with Pester is not mandatory, but I think it is something you should take a look at. If you start by writing the tests for your code, it forces you to produce code that delivers the output you had in mind. If that turns out not to be an option, you can always change the tests to fit your needs.

I also use my tests to ensure that templates remain templates. More than once I have overwritten a template with data used for a deployment, and that is not a good thing. By writing these tests and enforcing them, I can make sure that only clean files are present in my artifacts, and thereby also in my release pipelines.
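As a rough sketch of such a “templates are only templates” test, assuming clean templates keep a literal placeholder string in their values (the placeholder convention and file name here are my assumptions, not from the post):

```powershell
# Hypothetical Pester (v4) test that fails the build if a template has been
# overwritten with real deployment data. Assumes clean templates keep the
# literal placeholder '__REPLACE_ME__' in their parameter values.
Describe 'Template cleanliness' {
    It 'template.json still contains only placeholder values' {
        $raw = Get-Content (Join-Path $PSScriptRoot 'template.json') -Raw
        $raw | Should -Match '__REPLACE_ME__'
    }
}
```

Because failTaskOnFailedTests is set in the pipeline, a dirty template fails the build before it can reach an artifact.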

I hope this part has been insightful, and as always, if you have a way of doing it better, please do let me know so I can improve my code.

The next part will be on the release pipeline, so stay tuned for that.

