Testing before testing

In a previous post entitled Why we use Git, I said this:

When something is checked in to Git, we have VSTS set up to automatically trigger tests on everything. For example, as a bare minimum any PowerShell code needs to pass the PowerShell Script Analyser tests, and we are writing unit tests for specific PowerShell functions using Pester. If any of these fail, we don’t merge the changes into the Master code branch for production use.

Which is really useful, because we don’t want to have to remember to run PowerShell Script Analyser* against our scripts every time. When there’s something that you always want to happen, it’s best to make it an automatic part of the workflow.

The trouble with this, though, is that when you’ve got a number of people checking in their code, someone can push an update into the repo which fails a test, and then nobody can deploy any of their changes until that failure is fixed, either by the original contributor or by someone else. That might be fair enough for something complicated with dependencies that are only picked up by integration tests, but it’s somewhat annoying when it’s just a PSSA rule telling you that you shouldn’t use an alias, or some other style issue that isn’t really breaking anything.

So the responsible, neighbourly thing to do is to only check in code that you’ve already tested. For PSSA there are a couple of ways we do this. The first is using ISESteroids. This isn’t a free option, so only those of us who spend a lot of time in the PowerShell ISE have a license for it. ISESteroids is probably worthy of a post of its own, but one of its many features is that it displays any PSSA rule violations in the script editor as you write your PowerShell code.

[Image: ISESteroids highlighting PSSA rule violations in the ISE script editor]

The other method is completely free, using Git hooks. You can use Git hooks to do all sorts of things, but I’ve got one set up to run PSSA prior to commit. This means that any files Git is tracking will have PSSA run against them before the changes are committed.

To do this, you need a folder called “hooks” inside your .git folder (Git normally creates this for you when the repository is initialised). Inside that, I’ve got a file called “pre-commit” (no extension) which contains this:

#!/bin/sh
echo
exec powershell.exe -ExecutionPolicy Bypass -File '.\.git\hooks\pre-commit-hook.ps1'
exit

That pre-commit-hook.ps1 file contains this:

#pre-commit-hook.ps1
Write-Output 'Starting Script Analyzer'
try {
    # Analyse every PowerShell file in the repository
    $results = Invoke-ScriptAnalyzer -Path . -Recurse
    $results
}
catch {
    Write-Error -Message $_
    exit 1
}
# A non-zero exit code aborts the commit
if ($results.Count -gt 0) {
    Write-Error "Analysis of your code threw $($results.Count) warnings or errors. Please go back and check your code."
    exit 1
}
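
One tweak you might consider (this is just a sketch, not part of our actual hook, and it assumes the hook runs from the repository root with git available on the PATH) is analysing only the files staged for the commit, rather than the whole repository, which keeps the hook quick on a large repo:

# Hypothetical variation: analyse only the staged PowerShell files
$staged = git diff --cached --name-only --diff-filter=ACM |
    Where-Object { $_ -match '\.psm?1$' }

$results = foreach ($file in $staged) {
    Invoke-ScriptAnalyzer -Path $file
}

if ($results.Count -gt 0) {
    $results
    Write-Error "Analysis of your staged files threw $($results.Count) warnings or errors."
    exit 1
}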

Now, if you’re using Visual Studio Code as your editor (as some of us are), you’ll see the output of that script in the Output pane in VSCode like this:

[Image: the Git hook’s Script Analyzer output shown in the VS Code Output pane]

This means you can go back and correct any violations before you push to the team repo and break the continuous integration build for everyone else, and you don’t even have to remember to do it. 🙂

* You can read more about PowerShell Script Analyser in this post.

PowerShell Script Analyzer

When we check PowerShell scripts in to our Git repository, one of the things that happens automatically is that the Visual Studio Team Services build agent kicks off the PowerShell Script Analyzer to check the code.

This is a module that the PowerShell team at Microsoft have created to help check for best practices in writing PowerShell. Some of the things it picks up on are just good for the readability of scripts, like not using aliases, but others are more obscure, like having $null on the left side of a comparison operator when you want to see if a variable is null – there’s a good reason for that – just trust it. 😉
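
(If you’d rather not just take it on trust, here’s a minimal illustration of why that rule exists: when the value on the left of -eq happens to be an array, the comparison acts as a filter rather than a test.)

# Minimal illustration of the $null-on-the-left rule
$values = @(1, $null, 2)
$values -eq $null    # returns the matching elements (a single $null here), not $true or $false
$null -eq $values    # returns $false, which is the answer you actually wanted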

This means that scripts that don’t comply with the rules don’t get automatically deployed into production, which is a good thing, but it can also block someone else’s working code from getting released. That being the case, it’s worth checking your PowerShell before checking it in. Fortunately that only adds seconds to the process, and it’s quicker than waiting for the results from the build agent.

There are two basic ways to install Script Analyzer. If you install it as an Administrator, it’s available to every user of that specific machine; if you install it for the current user, it goes into your home directory and follows you round to other machines.

Running PowerShell as an Administrator, type:

Install-Module PSScriptAnalyzer

Or, running with your normal user account, type:

Install-Module PSScriptAnalyzer -Scope CurrentUser

PowerShell is going to pop up a warning, saying:

You are installing the modules from an untrusted repository. If you trust this repository, change its Installation Policy value by running the Set-PSRepository cmdlet. Are you sure you want to install the modules from ‘PSGallery’?

We aren’t going to worry about changing the trust, we just need to say ‘Yes’ to this and the module will be installed. (Always be very careful about doing this with any other modules!!)
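
(If you do get tired of answering that prompt and decide you’re happy to trust the PSGallery, the change the warning suggests is a one-liner, and entirely optional:)

# Optional: mark the PSGallery as a trusted repository so future installs don't prompt
Set-PSRepository -Name PSGallery -InstallationPolicy Trusted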

Now that it’s installed, to check your script once you’ve saved it, you just need to run:

Invoke-ScriptAnalyzer c:\whatever\myscript.ps1

If it’s all good, you’ll see nothing in return (you can always stick a -verbose on the end to see what it’s actually checking as it does it), or you’ll get some feedback about which rules have been broken, which lines they are on, and some guidance on how to get into compliance, like this:

[Image: example Invoke-ScriptAnalyzer output – click on the image to see it full size]

If it’s not clear enough from the feedback, a quick web search for the RuleName should give you plenty to go on.
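
You can also look a rule up without leaving the shell; the module ships a Get-ScriptAnalyzerRule cmdlet (the rule name below is just an example):

# Show the module's own description for a particular rule
Get-ScriptAnalyzerRule -Name PSAvoidUsingCmdletAliases | Select-Object RuleName, Description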

If you want to know more about Script Analyzer and how it works, it’s all open source on the PowerShell Team’s GitHub repository.

Why we use Git

Having all of our scripts and configurations in a single source code repository provides us with a single source of truth which is available to all team members. As we move towards having more systems built using Infrastructure as Code, this removes knowledge silos and reliance on individual domain experts. Everything being version controlled means that we have a full audit of changes: who made what change, when, and why. That means we can roll backwards and forwards in time to use scripts and configurations in different states.

On the Windows side, we initially used Microsoft’s Team Foundation Version Control, as part of Visual Studio Team Services, since this was used on a smaller project. We’d also used Subversion to manage configs on the Unix/Linux side of the estate. When expanding usage out to a larger team of people, and to more systems, we felt that it made sense to migrate to Git for a number of reasons:

  • Git has excellent cross-platform support. You can use it with whichever editor/IDE you want on Windows, Linux, or Mac.
  • Git supports branching, which offers more flexibility for a diverse team working on different areas and merging into a single source control repository (there’s a minimal sketch of that flow after this list). It also allows us to ensure that anything going into the Master (or production) branch has passed tests.
  • Git is widely used in the community. We are increasingly finding community resources on GitHub, and we would aspire to contribute some of our work back to the community. It makes sense to be using the same tool.
  • The message we’re hearing from the DevOps community is “Use source control. Whatever source control you’ve got is fine, but if you don’t currently have any; use Git.”
  • Visual Studio Team Services is just as comfortable to use with Git as it is with TFS, if not more so.
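
To make that branching point concrete, the day-to-day flow looks roughly like this (branch and file names are made up for illustration; the exact steps will vary with how you manage pull requests):

git checkout -b feature/tidy-dns-scripts    # work on an isolated branch
git add Update-Dns.ps1
git commit -m "Remove aliases flagged by PSSA"
git push origin feature/tidy-dns-scripts    # push the branch, then raise a pull request;
                                            # the tests run before anything is merged into Master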

Hosting our Git repository on Visual Studio Team Services offers a number of advantages:

  • When something is checked in to Git, we have VSTS set up to automatically trigger tests on everything. For example, as a bare minimum any PowerShell code needs to pass the PowerShell Script Analyser tests, and we are writing unit tests for specific PowerShell functions using Pester. If any of these fail, we don’t merge the changes into the Master code branch for production use.
  • Changes to code can be linked to Work Items on the VSTS Kanban board.
  • Microsoft’s Code Search extension in VSTS allows rich searching through everything in the repository.

In addition to our scripts and configurations, there are advantages using a version controlled repository for certain documents. By checking documents in to Git, we can see the history of edits, which may be important when it would help to know how a document and a configuration both looked at some point in the past. By having documents in a cloned Git repository, we also have access to them when network conditions may otherwise not allow.

WinOps 2016

Last week, Jonathan Noble and I attended the WinOps 2016 conference in London; this was a conference centred around the subject of using DevOps working practices with Windows Servers, which is something that Microsoft are focusing a lot of effort on, and something that ISG have taken a lot of interest in. I’ve been told that videos of the talks will soon be available on http://www.winops.org, and I would strongly recommend them for anyone who works with Windows Servers in any capacity. (Update: videos are now available at https://www.youtube.com/playlist?list=PLh-Ebab4Y6Lh09SnM63euerPW0-pauO7k).

The day started with a keynote speech by Jeffrey Snover, from Microsoft; I’m not sure of his current job title as it keeps changing, but he invented PowerShell and is basically in charge of Windows Server.

The speech covered the evolution of Windows Server from Windows NT, right through to Server 2016, explaining how the product was continuously changed to meet the needs of the time, which flowed nicely into an overview of Server 2016, designed to enable cloud workloads.

A big part of Server 2016 is the concept of ‘Just Enough Operating System’ and the new Nano Server installation option. For those not aware, Nano Server is the next logical step after Server Core; where Server Core removed the Desktop Experience, in order to improve the security, reliability, and speed of your servers, Nano Server strips out absolutely everything unnecessary. It’s not possible to log in to a Nano Server in any way – they’re controlled entirely by remote management tools and PowerShell Remoting. This has enabled Microsoft to shrink the Operating System down to under 500MB. It takes up less space, runs faster, boots in seconds, and requires only a small fraction of the number of patches and reboots that Server with Desktop Experience requires. Jeffrey went as far as to say that Nano Server is “the future of Windows Server.”

Also coming with Server 2016 is support for Docker-compatible Containers. If you’re not familiar with these, it’s worth getting acquainted – one server can run multiple containers, and each functions as if it were its own server, completely isolated from the others, but sharing the underlying operating system and other resources from the host machine. The container itself is a single object, making it very simple to transfer between hosts, or to duplicate and spin up multiple copies of a containerised application.

A couple of other important technologies touched upon were Windows Server Apps (WSA) and Just Enough Administration (JEA). WSA is a new way of deploying applications based on AppX; server support for MSI will be deprecated in favour of WSA, largely because MSI is horrible and unsuitable for server environments. JEA is a new PowerShell feature which allows the creation of PowerShell endpoints that users can connect to in order to perform a specified subset of admin tasks, without needing to be administrators on the target server (even if the tasks would usually require it); this means that you don’t need to hand over the keys to your kingdom in order to let someone perform a few updates or run backups.

The second talk of the day was by Iris Classon, a C# MVP who works for Konstrukt. Iris’s talk was entitled “To The Cloud” and discussed the journey that her company made while moving their services to Azure. Key points of the talk were discussions around the automation of manual processes, such as unit testing, integration testing, and operational validation testing, as well as deployment. She also advocated heavily for using JEA (mentioned above) to prevent system administrators from having access to sensitive data that they didn’t need to see.

The third talk of the day was by Ed Wilson, who works on Microsoft’s new Operations Management Suite (OMS), and is the author of the Hey, Scripting Guy! blog. The talk was primarily an overview of OMS, which is a suite of tools designed to offer Backup, Analytics, Automation, and Security Auditing for hybrid cloud/on-premises environments. OMS is constantly under active development with new features coming online all the time, so it’s definitely worth keeping an eye on. Highlights so far are:

  • OMS Automation (formerly Azure Automation), which has been described as PowerShell as a Service – it offers a repository where PowerShell runbooks can be stored and run on a schedule.
  • Secure Credential Store – exactly what it sounds like – store credentials securely so that you can use them from the rest of OMS.
  • Windows and Linux machines are supported for monitoring (as well as anything else that can output a text-based log file).

Fun fact mentioned in this talk: PowerShell is now ten years old. Probably time to pick it up if you haven’t yet done so 😉

Next up was Michael Greene, who works on Microsoft’s Enterprise Cloud Customer Advisory Team, who gave an excellent talk about using Visual Studio Team Services, PowerShell, and Pester to implement a release pipeline for applications and infrastructure. This was particularly interesting to me, as these are the tools that we’re using in ISG, and I’ve spent the last couple of months trying to do exactly this. Michael was strongly advocating configuring infrastructure as code, which allows the use of proper source control, automated testing, and automated deployment (only if all of the automated tests pass); working in this way has been shown to greatly improve reliability and agility of IT services.

Some excellent further reading on this subject was offered in the form of Microsoft’s whitepaper: The Release Pipeline Model (http://aka.ms/thereleasepipelinemodel) and Steven Murawski’s DevOps Reading List (http://stevenmurawski.com/devops-reading-list/).

Soundbite: If you want to work with Windows Server, the most important technology to learn right now is Pester.

During lunch we had a wander round stalls set up by vendors trying to sell their various DevOps-related products. One that interested me was Squared Up, a configurable dashboard that presents SCOM data (among other things) in a nice, easy to understand manner. I signed up for a free trial, before we discovered that the University already pays for this product. I need to chase this up with our contacts to get myself access to it.

After lunch, the talks split into two streams, so we split up in order to cover more ground. I’ll let Jonathan describe the talks he went to here…

My first afternoon session was with Richard Siddaway, covering Nano Server and Containers. This was really a practical demo following on from Jeffrey’s keynote, stepping through the process of configuring both with the caveat that all of this is pre-release at the moment. It was interesting to note that while Microsoft initially started out by building a PowerShell module to manage containers directly, as a result of feedback they’re re-engineering that to just be a layer on top of Docker, which is the tool that most people use to manage containers today. Another thing that I picked up was that as things stand, there’s no way to patch containers, yet they need to be at the same patch level as the host. The solution is to just blow it away and make a new one, but as was demonstrated, it’s quick and easy to do, so probably the most sensible approach anyway. We need to examine these two technologies carefully over the coming months. Richard mentioned the need to consider version numbering on containers, and which workloads they are suitable for. That’s partly dictated by the workloads that Nano Server will support, which will be limited at launch, but will likely grow reasonably quickly.

Following that, I went to a panel session on technologies, which gave me a shopping list of things to skill up on! The panel agreed that the two most important aspects of the toolchain were Source Control and Build, where the specific tool isn’t important – for Build it just needs to be something that will run scripts, and while it was suggested that any Source Control would be ok, if you didn’t already have something, you should choose Git. On the subject of the most significant tools from the community, Pester and Docker were highlighted. Other things that the panel suggested learning about were JavaScript/Node (although TypeScript is preferable to generic JavaScript), OMS, Linux, and Visual Studio Code. Another couple of interesting points I took from this were that containers don’t remove the problem of configuration management; they just move it, and that Azure Stack would work well for a hybrid model where you would usually host a workload on-premises, but could burst up to the cloud for particularly busy periods.

…and while he was doing that, I went to a talk by Gael Colas – a Cloud Automation Architect (if anybody is thinking of overhauling our job titles any time soon, I quite like this one) – about configuration management theory.

This was one of my favourite talks of the day – Gael was making the case for short-lived, immutable servers. The general concept is that a server should be built from configuration code or scripts (the exact method is unimportant; what matters is that it’s completely automated), and then never changed at all – no extra configuration, no quick fixes, no patches. When the server needs to be changed (for patches, for example), the source configuration/script should be updated instead, and a new server deployed from that. This method ensures that we always know the exact configuration of a server and we’re always able to rebuild it identically, every time – this has massive DR and service reliability benefits. This was referred to as Policy Driven Infrastructure. Gael did acknowledge that there are some applications for which this is unsuitable, but they’re rapidly shrinking in number.

The next session I went to was a panel session called DevOps Culture in a Windows World, which mostly turned into people offering advice about how they’ve convinced their organisations to embrace DevOps working practices. You’ll probably see me attempt to use most of the ideas presented over the next few months – this blog post is the start 😉
Two things I will mention here. The first was the suggestion that it’s important to improve visibility – which I think is something that our department could benefit greatly from – everyone should be able to easily see what everyone else is doing, and should be encouraged to share and help each other (I think we are encouraged to share, but we currently lack the tools to easily do this; I have some ideas about that one but need to work them through). The second was the suggestion that we should look at our services like products, and consider their full lifespan when we set them up, instead of thinking of the setup of a service as a project which is completed once the service is up and running, and then left to rot indefinitely.

The last proper talk of the day was given by Peter Mounce of Just Eat, who discussed how they run their performance testing. Performance is very important to Just Eat, and they work to keep their applications fast by testing their production environment twenty-four hours a day. The theory is that running performance tests in QA is meaningless, because it’s impossible to replicate the behaviour of millions of real people using the production application, so they simply generate fake load against their production servers. The fake load increases as real load increases, so they’re effectively doubling the load on their application all the time – this means they know they can take that much load, and they can disable the fake load in an emergency to handle massive amounts of real load. In general, I’m not sure that the performance testing elements are that applicable to us at this stage, but there was a lovely soundbite which is very applicable to us: embrace the fact that things are going to break; get better at fixing them quickly.

Finally, everybody came back together for a panel session and discussion, which was interesting, but nothing exceptional to report, then we went for drinks at the expense of Squared Up.

Windows PowerShell 4.0 quick reference guides

Microsoft have released a number of cheat sheets, offering useful shortcuts and info for PowerShell 4.0, as well as a few of its related technologies such as DSC, WMI, and WinRM.

You can download these in PDF format from http://www.microsoft.com/en-us/download/details.aspx?id=42554 and then print them out and stick them up next to your desk to impress people who walk by.


Free ebook: Introducing Windows Server 2012

Microsoft Press have released a free ebook called Introducing Windows Server 2012, which does exactly what it says on the tin.

There are three versions available, depending on where you want to read it:

Introducing Windows Server 2012 RTM Edition – PDF ebook
Introducing Windows Server 2012 RTM Edition – ePub format
Introducing Windows Server 2012 RTM Edition – MOBI format

I read the version of this book that was based on the beta and found it very informative. It’s now been updated to the RTM version, so there’s no reason not to grab it now.

PowerShell 3.0 for Windows 7 and Server 2008

Along with the launch of Windows Server 2012* yesterday, Microsoft released the Windows Management Framework 3.0 for some downlevel clients. In the package you get PowerShell 3.0, and updated versions of WMI and WinRM, for Windows 7 SP1, Windows Server 2008 R2 SP1 and Windows Server 2008 SP2. If you were looking for support on XP or Vista, you’re out of luck.

WMF 3.0 also contains the Server Manager CIM Provider that you’re going to need on your 2008 R2 SP1 and 2008 SP2 servers if you want to manage them with the new Server Manager in Windows Server 2012 or Remote Server Admin Tools for Windows 8 (RSAT for Win8 is yet to reach RTM).

Download WMF 3.0 at http://www.microsoft.com/en-us/download/details.aspx?id=34595

* Make sure you click that link to the online launch event; windows-server-launch.com has a load of learning resources for Microsoft’s amazing new Server release, especially around management and virtualisation.

Windows 8 Release Preview, PowerShell 3.0 and Windows Server 2012 Release Candidates – All the Links You Need

Yesterday Microsoft posted the Release Preview of Windows 8 for download. You can go and get the setup executable from:

Download Windows 8 Release Preview

If you want ISO images so you can prep some removable media, you need to go to:

Windows 8 Release Preview ISO images

Once you’ve installed the Windows 8 Release Preview, if you’re one of those lucky, lucky people who spends their day managing Windows Server, you’ll also want to get these:

Remote Server Administration Tools for Windows 8 Release Preview

If you’re one of the enlightened people who uses Windows PowerShell, but you can’t upgrade to Windows 8 RP just yet, then you can still get the version 3.0 Release Candidate goodness on Windows 7 SP1 or Windows Server 2008 SP2 or Windows Server 2008 R2 SP1:

Windows Management Framework 3.0 – RC

If you’re looking to try the Release Candidate of Windows Server 2012, then you want to go to:

Download Windows Server 2012 Release Candidate (RC) Datacenter

Lots of new fun things to play with. Thank goodness we have a long holiday weekend to be able to get all this installed! 🙂

Windows Server 2012 Virtual Labs

When it releases later this year, Windows Server 2012 will bring a stack of exciting new features and enhancements, like the fantastic multi-server management features of the new Server Manager, and of course PowerShell v3.0!

If you want to get ahead of the curve on Server 2012, then there’s no better way than digging in and getting your hands dirty, although not everyone has a whole load of spare hardware to set up a test lab, and even if you do, it’s sometimes difficult to know where to start, especially since pre-release software tends to lack some of the documentation that you might want in order to really explore a feature in depth.

To that end, Microsoft have produced a load of Windows Server “8” Beta Virtual Labs (put together before the Windows Server 2012 name was announced). These are self-contained modules focusing on the following:

  • Active Directory Deployment and Management Enhancements
  • Configuring a Highly Available iSCSI Target
  • Configuring Hyper-V over Highly Available SMB Storage
  • Implementing Storage Pools and Storage Spaces
  • Introduction to Windows PowerShell Fundamentals
  • What’s New in Windows PowerShell 3.0
  • Managing Branch Offices
  • Managing Network Infrastructure
  • Managing Your Network Infrastructure with IP Address Management
  • Managing Windows Server “8” with Server Manager and Windows PowerShell 3.0
  • Online Backup Service
  • Using Dynamic Access Control to Automatically and Centrally Secure Data

In addition, you might want to check out some of the Resources for IT Professionals that Microsoft have published in relation to the TechEd conference that will start in a month in Orlando.

(Thanks to my friend @Alexandair for both of those links)

PowerShell On-Ramp

Last week I was up in Edinburgh with Microsoft presenting at their IT Pro Camps. We mentioned several times how PowerShell can help you with management in the three topic areas we were covering: Hyper-V, Private Clouds and Consumerisation (supporting BYOD scenarios). Taking the audiences for those three days as a representative sample, it looks like about half of the Windows-based IT community still hasn’t begun their PowerShell journey.

Now is a great time to get started with PowerShell, especially with its increased prevalence in Windows Server 2012 and some of the great improvements in PowerShell 3.0. For those just getting started, I’m encouraging everyone to get familiar with the following four cmdlets (pronounced “command-lets”) in particular:

Get-Command finds all the commands (including aliases and functions) that are available to you in the current shell.
Get-Member tells you about the objects on the pipeline which the previous cmdlet has output. eg. Get-Process | Get-Member
Get-Help provides help about cmdlets and features of PowerShell (in v1 and 2 this is all in the box; with PowerShell 3.0, you need to Update-Help).
Get-PSDrive tells you about the drives that PowerShell is exposing; not just the file system, but the registry and others.
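
If it helps, here’s a small illustrative sketch of how those four fit together (the Service noun is just a convenient example):

Get-Command -Noun Service          # find the commands that work with services
Get-Service | Get-Member           # see what properties and methods those objects carry
Get-Help Get-Service -Examples     # worked examples for one specific cmdlet
Get-PSDrive                        # the drives PowerShell exposes: file system, registry and more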

Given those four cmdlets, you can get a long way by yourself, just through experimentation. Don’t forget to use the -whatif parameter (or -confirm) on any cmdlet that might change something.
eg. Get-Process | Where {$_.name -match "^s"} | Stop-Process -whatif

In PowerShell 3.0, you can benefit a lot from using the Show-Command UI in the PowerShell ISE (Integrated Scripting Environment) to help you with the parameters required to achieve a task. As a beginner to PowerShell, you can also learn a lot by looking at other people’s scripts; the Microsoft Script Explorer for Windows PowerShell is a useful tool to find scripts and other great resources from online repositories.

There’s no better way to learn PowerShell than to write some scripts that solve real problems, and there are a number of pre-canned problems that you can take a shot at in the Windows PowerShell Scripting Games. You’re too late to enter this year’s competition, but you can still try out the challenges, and once you’ve given it a try, you can see the expert solutions from some of the top members of the PowerShell Community.

I suggest that while you’re getting used to PowerShell, you print a couple of quick reference guides (or cheat sheets) and keep them close to hand, or pinned up beside your monitor. You can also download a great free ebook called Mastering PowerShell from PowerShell.com, where you’ll also find a bunch of other great resources (including another free ebook on PowerShell remoting, for when you’ve got to grips with the basics).

If you’d like to go and buy a book, then the beginner’s book of choice is Don Jones’ Learn Windows PowerShell in a Month of Lunches.