This is part 2. Part 1 is here: http://wp.me/p1FgqH-bJ
So, to pick up where I left off: I wanted to see if it’s possible to add a bit of continuous integration to the process of managing SMA runbooks. SMA ships with PowerShell cmdlets for importing, exporting and other activities, so it shouldn’t be hard.
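To give you an idea, here’s roughly what a couple of those cmdlets look like in use (a quick sketch – the endpoint URL and runbook name are made up):

# Import a runbook from file and publish it (endpoint URL and runbook name are made up)
$endpoint = "https://sma01.contoso.com"
Import-SmaRunbook -WebServiceEndpoint $endpoint -Path C:\Scripts\Runbooks\Do-Stuff.ps1
Publish-SmaRunbook -WebServiceEndpoint $endpoint -Name Do-Stuff
# And exporting goes the other way - grab the published script text:
(Get-SmaRunbookDefinition -WebServiceEndpoint $endpoint -Name Do-Stuff -Type Published).Content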
As far as version control systems go, the basic rule is: if it’s in the file system, you can version it. Which means that all I need is a way of taking my runbooks on file and pushing them into SMA in a controlled fashion (I’ll get to the VCS part later).
This is the folder structure I decided on. The “root” folder contains a folder for each of the SMA artifact types, such as runbooks, variables, schedules and so on. Right now, I’ve only implemented runbooks.
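In plain text, it looks something like this (the runbook file names are just examples):

<rootfolder>
    Runbooks
        Do-Stuff.ps1
        Do-OtherStuff.ps1
    Variables        (not implemented yet)
    Schedules        (not implemented yet)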
Then, I built a simple script for pushing these runbook files into SMA. If a runbook already exists in SMA, the script checks whether the file on disk and the SMA version are identical. If they are, SMA is already running the latest version of the runbook – and if not, the script pushes the updated version to SMA. Here’s the code for that script: https://github.com/trondhindenes/PowershellModules/blob/master/SMAstuff/SMA-CI-Import.ps1
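The gist of it is something like this (a simplified sketch, not the actual script – see the link above for the real thing):

# Simplified sketch of the import logic. Assumes the SMA module is loaded and
# $webServiceEndpoint / $rootFolder are already set.
Get-ChildItem -Path (Join-Path $rootFolder "Runbooks") -Filter *.ps1 | ForEach-Object {
    $name  = $_.BaseName
    $local = Get-Content -Path $_.FullName -Raw
    $existing = Get-SmaRunbook -WebServiceEndpoint $webServiceEndpoint -Name $name -ErrorAction SilentlyContinue
    if ($existing) {
        # Compare the published definition in SMA with the file on disk
        $published = (Get-SmaRunbookDefinition -WebServiceEndpoint $webServiceEndpoint -Name $name -Type Published).Content
        if ($published -eq $local) { return }   # identical - nothing to do
        Edit-SmaRunbook -WebServiceEndpoint $webServiceEndpoint -Name $name -Path $_.FullName -Overwrite
    }
    else {
        Import-SmaRunbook -WebServiceEndpoint $webServiceEndpoint -Path $_.FullName | Out-Null
    }
    Publish-SmaRunbook -WebServiceEndpoint $webServiceEndpoint -Name $name | Out-Null
}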
Also, I added a JSON config file to hold a list of all my SMA environments. The idea is that you have at least two separate SMA installs, one for testing and one for production. The JSON file goes in the same folder as the SMA-CI-Import script, and consists of:
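Something along these lines (the property names here are my reconstruction, so treat the exact shape as an assumption):

[
    {
        "Name": "Test",
        "WebServiceEndpoint": "https://sma-test01.contoso.com"
    },
    {
        "Name": "Prod",
        "WebServiceEndpoint": "https://sma-prod01.contoso.com"
    }
]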
This lets me simply reference an SMA install by name instead of having to remember the web service URL. You get it.
So, all this does is let me add a new runbook script to the runbooks folder and then run the SMA-CI-Import script using something like:
C:\Scripts\SMA-CI-Control\SMA-ci-import.ps1 -RootFolder Scriptrootfolder
…and all changes are pushed straight into SMA and published (note that there are several parameters I’m not using here).
Now that we have a working script for pushing changes from the filesystem into SMA, the rest is relatively easy. First, we need to create a Git repo, which will contain the script root folder. The idea is that every developer on your team has a clone of this repo on their workstation. If you’re new to Git, the GitHub for Windows client makes most of the pain go away; I’m using Atlassian’s SourceTree myself. Still, the best way to learn is to use Git from the cmd window or from PowerShell to get a feel for it. It’s really not hard to do basic stuff like committing, pushing and pulling.
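If you want to try the command-line route, the day-to-day basics boil down to something like this (repo URL and file name are placeholders):

git clone https://github.com/yourname/your-repo.git    # get a local copy of the repo
git add Runbooks/Do-Stuff.ps1                          # tell Git to track the new file
git commit -m "Added Do-Stuff runbook"                 # commit the change locally
git push                                               # push the commit to GitHub
git pull                                               # grab everyone else's changes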
So, at this point you have a working folder, and if you’ve set up Git correctly (I won’t go into details about this, as there are thousands of online tutorials to get you started), any change you commit and push locally ends up in the cloud somewhere. Here’s my example root folder inside GitHub:
The next part is to have some kind of magic automatically update SMA whenever you’ve done a commit. Bear in mind though – this isn’t the way to do it in prod. You don’t want every random commit saved straight into SMA. You would probably implement branching in your Git repo, and possibly some kind of code review process in addition to automated testing. I’m just goofing around here, so don’t do it in prod the way I do it in my lab, mkay?
Anyway, what we need is a CI server. JetBrains TeamCity is a good choice, as it has native support for executing PowerShell and runs on Windows Server 2012 R2 without a fuss. You can install it on the SMA server or on a separate server (install it on a separate server). Whatever you do, just be sure to install the SMA PowerShell cmdlets on that server, as TeamCity is gonna need those.
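A quick way to verify that the server is ready (assuming the cmdlets live in the module name I remember, Microsoft.SystemCenter.ServiceManagementAutomation – double-check against your SMA install):

# Should load without errors and list Get-SmaRunbook, Import-SmaRunbook and friends
Import-Module Microsoft.SystemCenter.ServiceManagementAutomation
Get-Command -Module Microsoft.SystemCenter.ServiceManagementAutomation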
After nexting through the TeamCity install (again, there are thousands of blog posts detailing the product), set up a new project. I called mine “smatest” and set up a “build” for it (I should really call it “deploy”, not “build”, but whatever). This can be done almost automatically simply by pointing TeamCity to the GitHub repo you created earlier.
So, the idea is that TeamCity polls the GitHub repo for updates every so often. If there are any, it downloads them (again, using Git) and invokes the script I wrote earlier against the temporary folder the repo is checked out to – which in turn pushes those changes to SMA.
As you can see in the screenshot, I have a very simple deploy pipeline defined: one VCS setting (that’s the GitHub repo details), one trigger, and a build step which runs my PowerShell file.
The trigger simply kicks off a build (or deploy, really) every time a new commit is made to the GitHub repo.
The PowerShell script is defined as such:
The RootFolder argument (you only see the % sign in the picture) is %teamcity.build.workingDir%, which is an internal TeamCity placeholder for the folder the Git repo is checked out to.
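In other words, the build step effectively runs something like this (the script path is from my lab; TeamCity expands the placeholder to the checkout folder before execution):

C:\Scripts\SMA-CI-Control\SMA-ci-import.ps1 -RootFolder %teamcity.build.workingDir%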
So, let’s see this guy in action. First, I add a new script to the repo folder on my laptop:
Switching to SourceTree (or GitHub for Windows), the added file is picked up. I add it to the repo (this is how I tell Git to track the file), and commit+push the changes to my repo in the cloud:
After a few seconds, SourceTree has pushed the commit:
Within 60 seconds at most, my TeamCity server should pick up the change and kick off its build process:
And lo and behold, it executed successfully:
And finally, my WAP console shows that SMA received the new runbook:
Now, this might seem like an overly complex solution to a fairly simple problem. Also, notice that I didn’t include any testing in this process – a proper setup would have TeamCity first deploy the scripts to a test environment and actually kick off any changed runbooks, verifying the outcome, before deploying the new/updated runbook(s) to the production SMA. Many folks also use code review processes based around tools such as Gerrit, in which one or two fellow developers need to sign off on (or like) your code before the CI server will deploy it.
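To sketch what such a test step could look like, using the standard SMA cmdlets (runbook name, endpoint and the exact status values are assumptions on my part):

# Kick off a runbook in the test environment, wait for it, and fail the build if it didn't complete
$endpoint = "https://sma-test01.contoso.com"
$jobId = Start-SmaRunbook -WebServiceEndpoint $endpoint -Name Do-Stuff
do {
    Start-Sleep -Seconds 5
    $job = Get-SmaJob -WebServiceEndpoint $endpoint -Id $jobId
} while ($job.JobStatus -in @("New", "Activating", "Running"))
if ($job.JobStatus -ne "Completed") {
    throw "Runbook failed in test (status: $($job.JobStatus)) - not deploying to prod"
}
Get-SmaJobOutput -WebServiceEndpoint $endpoint -Id $jobId -Stream Output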
Anyways. This is how software development works. Everything belongs in a repo, be it TFS or Git or Mercurial or something I’ve never heard of. Stuff is tested, controlled, reviewed and, most importantly, the whole pipeline is automated where possible – so that you can focus on writing great automation scripts with the certainty that problems can be eliminated as quickly as they were introduced.