Category Archives: System Center

Using Azure Automation with Azure Recovery Services (or intelligent failover using PowerShell)

Azure Recovery Services (or "MARS") went GA a few days back, and it's a very interesting offering. The ability to fail over to a DR site is a dream scenario for many IT folks, and this service brings that within reach for the masses.

The idea of failing over to Azure is simple enough on paper. Take one or more of your VMM-managed VMs and enable replication to Azure. Then you define one or more recovery plans, which describe the order in which you want to spin up your VMs in case of a failover. This way you can ensure that your database servers are online before you bring up your application servers, for instance.

[Screenshot: Recovery Services in the Windows Azure portal]

An interesting thing to note about these recovery plans is the ability to kick off a runbook stored in Azure Automation as a pre- or post-step per group inside your recovery plans. This gives you a lot of freedom in defining the logic for doing the actual failover. For instance: you can have a runbook which polls the SQL endpoint of your DB tier and doesn't continue until those servers are truly up and running. There's no branching logic or anything, but you can build a lot of that into the Automation runbooks anyway.

So, to the topic: just like the link from SPF to SMA runbooks inside Azure Pack, you'll need to use a special parameter to actually get data about the migrated VMs when you design your failover runbooks, and that parameter is called $PsPrivateMetadata. Here are a few details about how you can use it:

As you can see, I’ve linked a single runbook as a post-step after my single VM has failed over to the cloud:
[Screenshot: the runbook linked as a post-step in the recovery plan]

So, the exercise is to get some information about the failed-over VMs into that runbook. Here's a closer look at the start of the runbook itself:

workflow Failover-RecoveryService
{
    # $PsPrivateMetadata is populated by Azure Recovery Services when the runbook
    # is invoked as part of a recovery plan
    $resultobj = InlineScript {
        $PsPrivateMetadata = $using:PsPrivateMetadata

        # The RecoveryPlanContext attribute is a JSON string
        $b = $PsPrivateMetadata.RecoveryPlanContext | ConvertFrom-Json
        $vmmap = $b.VmMap

        # VmMap has one NoteProperty per failed-over VM
        $items = @()
        $items += $vmmap | Get-Member | where {$_.MemberType -eq "NoteProperty"} | select -ExpandProperty Name

        $resultobj = @()

        foreach ($item in $items)
        {
            $itemobj = $vmmap.$item
            $resultobj += $itemobj
        }

        # return the per-VM objects
        $resultobj
    }
    # ...the rest of the runbook continues from here
}

The $PsPrivateMetadata variable is kind of a weird object, and contains a mix of regular objects and JSON strings. From my testing, the "VmMap" object has an attribute per VM, and each of those attributes needs to be converted from JSON. Since we get a custom object back, I found that I needed to list all NoteProperty members on that object and use dynamic member access to get each one out. This is not supported in native workflows, so I needed to use an InlineScript for that. I'm sure there are cleaner ways of doing this, but the above should work.

What you’ll get back if you use the code above is an array of objects, and each object contains two attributes:

$ServiceName = $FailedOverVm.CloudServiceName
$VMName = $FailedOverVm.RoleName

This should be enough for you to use the regular Azure PowerShell cmdlets to get more details about the VM, such as its internal IP address and so on.
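
For example, a minimal sketch of that lookup (assuming the classic Azure service management cmdlets and subscription are already set up in the runbook, and that $resultobj is the array returned above; everything else is a placeholder for illustration):

foreach ($FailedOverVm in $resultobj)
{
    $ServiceName = $FailedOverVm.CloudServiceName
    $VMName      = $FailedOverVm.RoleName

    # Get-AzureVM (service management model) returns the running VM,
    # including its internal IP address
    $azureVm = Get-AzureVM -ServiceName $ServiceName -Name $VMName
    Write-Output "$VMName internal IP: $($azureVm.IpAddress)"
}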

If you're interested in an example of the $PsPrivateMetadata object, I've taken a clixml export of one and uploaded it here, so you'll hopefully have an easier time figuring this stuff out than I had. Mind you, I've only tested it with one single VM, so no guarantees on how those objects look when there's more than one. I hope I'm right, though.

So, the demo I'm building will fail over a VM running a website, with DNS records hosted in Amazon AWS (and yes, you can upload the AWS PowerShell module to Azure Automation). When the VM fails over to the cloud, the runbook will make sure that the necessary endpoints are active, and that the DNS records and health probes in AWS are updated with the correct VIP address. Fun stuff!
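
The endpoint part of that could look something like the sketch below (again assuming the service management cmdlets; the endpoint name and ports are placeholders for whatever your website actually needs):

# Add an HTTP endpoint to the failed-over VM if it isn't there already
$azureVm = Get-AzureVM -ServiceName $ServiceName -Name $VMName
if (-not ($azureVm | Get-AzureEndpoint | where {$_.Name -eq "HTTP"}))
{
    $azureVm |
        Add-AzureEndpoint -Name "HTTP" -Protocol tcp -LocalPort 80 -PublicPort 80 |
        Update-AzureVM | Out-Null
}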

A finished example of “smart” SMA runbook publishing

UPDATE 24.09.2014: I got some feedback on these posts regarding SMA and publishing using PowerShell, which pointed me to SMART, a collection of scripts for doing various SMA-related things. SMART has its own publish-to-SMA script, which you can of course use instead of mine. There are a couple of things I don't like so much about SMART: firstly, it expects you to store runbooks inside XML files. From a developer perspective, that's a step in the wrong direction IMHO. We're trying to get away from XML, not run towards it. Also, the SMART import scripts simply publish everything twice (as far as I can see) in order to get around the need to publish everything in the right order. You are of course free to use whichever method you'd like, but I prefer the one I'm outlining here. It's lighter weight, stays true to PowerShell and it's way faster. Whichever you choose, the important thing is to end up with a process where you can get your runbooks into source control, not who wrote the script. All is good!

In case you read my last blog post, here is some example code in a more finished form.

To demonstrate, this is my folder of runbooks I’d like to publish:
[Screenshot: the folder of runbook files]

The structure is the same as I used in my previous post, so here is the required order in which to publish these runbooks:
[Diagram: the required publish order]

So, let’s test this thing. This is what happened on the first run:
[Screenshot: PowerShell ISE output from the first run]

Since none of the runbooks were already in SMA, each of them got published. If you note the line beginning with “PUBLISH” you’ll see that the script did everything in the required order.

If I just run everything again, the script won’t publish anything, as every runbook is already published:
[Screenshot: PowerShell ISE output from the second run]
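
That check boils down to comparing the local file with what's already published in SMA. Something along these lines should do it (a sketch, assuming the SMA PowerShell module and a $WebServiceEndpoint variable pointing at your SMA web service; $runbookName and $runbookPath are placeholders):

$localContent = Get-Content -Path $runbookPath -Raw
$published = Get-SmaRunbookDefinition -Name $runbookName -Type Published -WebServiceEndpoint $WebServiceEndpoint -ErrorAction SilentlyContinue
if ($published -and ($published.Content.Trim() -eq $localContent.Trim()))
{
    # Local file and published definition are identical - nothing to do
    Write-Verbose "SKIP: $runbookName is already up to date in SMA"
}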

Now, the hard part: I've made a slight adjustment to the wf3 runbook file. This should cause my script to update wf3, but also notify wf1 and wf5 that they need to be updated, since they reference wf3. Again, this seems to work as it should:
[Screenshot: PowerShell ISE output after modifying wf3]

Note that wf3, and then wf1 and wf5 are published.

So there. I'm happy with this, and hopefully it will provide you with some value as well. You can wire up this script inside a CI server or simply run it against your own folder structure containing your runbooks whenever you've made an update. Happy automating!

Here’s the script containing the required functions:
https://gist.github.com/trondhindenes/d6133106c30cb7d5b922

A more refined way of getting your runbooks published into SMA using code

Update: If you read this, be sure to read this (finished code for smart publishing)

If you have worked with SMA for more than just preparing to give a demo (pun very much intended), you have no doubt come to the conclusion that the "copy+paste" way of getting code into SMA is suboptimal. Actually, I'd argue that it works against the whole notion of "Infrastructure as code". Despite all this, I really like SMA. It's like that weird aunt of yours, who you love despite all her weirdness. I'm not saying I love SMA, but it sure is a product with really great potential.

Now, I’m luckily not the only one realizing that SMA needs a better story for actually publishing runbooks, and for most folks that story begins with a source control system. There are actually a few posts out there outlining how to use TFS or Visual Studio Online as a source control repo for SMA things. The articles I’ve found share a common flaw:

The idea of a parent-child runbook is not new. However, all of these examples assume that a child runbook is only referenced by one parent, which is of course not always the case. In real life, your runbooks' relationships to each other probably look more like this:

[Diagram: runbook reference relationships]

There's no notion of sub- or sub-sub-runbooks. Each runbook is a piece of code that can be referenced by any other runbook, so the idea of storing runbooks in folders and having the folder level describe the "child-ness" of a given runbook simply doesn't work in real life.

The reason this parent/child discussion is important in SMA is that the order in which you publish runbooks will determine whether your runbooks will actually run or not. In the example above, if you publish Runbook1 before you publish Runbook2, Runbook1 won't be able to start. It's a huge bug, and something the SMA team is painfully aware of.

In the meantime, we need to construct the required logic to send runbooks in the right order to SMA. It’s embarrassing, I know.

Anyway, instead of forcing yourself into an arcane structure of folders of parent and child runbooks that will just limit your ability to get anything done, we can tap into PowerShell's Abstract Syntax Tree to gain an understanding of the relationships between runbooks. This enables us to scan a folder of runbooks and determine dynamically the order in which they should be published. In my example above, that would be something like:

[Diagram: the resulting publish order]

This isn’t too hard to solve:
1. List all files in a folder (each file represents a runbook and contains the workflow code for that runbook)
2. Get each runbook’s child runbooks through its AST
3. Make sure the child runbooks are published before publishing the current one
4. In order to avoid publishing a single runbook multiple times, mark it as already processed.
5. Continue on until all files in the folder are processed.

Here’s how to get a list of child runbooks given a path to a file containing the code for a parent runbook:


$ThisWf = Get-Content $path -Raw
$ThisWfSB = [scriptblock]::Create($ThisWf)

# Tokenize the workflow code and pull out every command it references
$TokenizedWF = [System.Management.Automation.PSParser]::Tokenize($ThisWfSB, [ref]$null)
$referencedCommands = $TokenizedWF | where {$_.Type -eq "Command"}

Note that the $referencedCommands variable will contain all commands referenced, not just child runbooks (Write-Output and so on). So, we need to find the commands which represent runbooks, which should also be present in the same directory:

    foreach ($referencedCommand in $referencedCommands)
    {
        $runbookpath = get-childitem -Path $basepath -Recurse:$recurse |where {$_.BaseName -eq $referencedCommand.Content}
        if ($runbookpath)
        {
            Write-Verbose "REFERENCE: $($path.BaseName)--> $($referencedCommand.content)"
            Process-RunbookFile -path $runbookpath.FullName
        }
    }

This piece of code is actually part of a recursive function which calls itself until all child runbooks have been imported, and then imports the current runbook.

Also note that the current solution does not verify the actual existence of all commands used in a runbook. If one of your runbooks calls a function or workflow called “foo-bar” and the foo-bar.ps1 file isn’t present in the folder structure, we’ll just assume that that function is available via some installed module or whatever.

The last part of the puzzle is to use an arraylist to store information about already processed files, so that the same file (runbook3, for instance) doesn’t get uploaded more than once.
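
A rough skeleton of that recursive function could look something like this (function and variable names are just placeholders for illustration; the real code is in the gist linked below):

$ProcessedRunbooks = New-Object System.Collections.ArrayList

Function Process-RunbookFile
{
    Param([string]$path)

    $file = Get-Item -Path $path

    # Skip files we have already handled on this run
    if ($ProcessedRunbooks -contains $file.FullName) { return }

    # ...tokenize the file here and recurse into each referenced runbook
    #    (the snippet above) before continuing...

    # Mark the file as processed and "publish" it
    [void]$ProcessedRunbooks.Add($file.FullName)
    Write-Verbose "PUBLISH: $($file.BaseName)"
}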

After creating some mock runbooks with the references I’ve outlined in the diagram above, I can test out my logic by running my script (the whole piece of code can be found below):
[Screenshot: PowerShell ISE output showing the publish order]

As you can see, the order of publishing is correct according to my outline diagram.

There is one shortcoming with my script as it sits right now: Everything will have to be re-published on each run, which can slow down a CI process considerably. For instance, if I perform a minor update on the wf3 script, I need to re-publish wf1 and wf5 as well, since these runbooks reference wf3. I’m still thinking about how to do that – keep watching this space!

The code can be found here:
https://gist.github.com/trondhindenes/faeb6c1212f4a3e2457c

Note that the code doesn't actually implement the publishing-to-SMA part, which is really the easy part. It simply demonstrates logic you can implement in your own code in order to publish in the right order. So, around the line saying "Write-Verbose PUBLISH" you need to put in your own code for performing the actual publish.
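
If you need a starting point, that publish step could look something like this (a minimal sketch assuming the SMA PowerShell module is installed; $runbookName, $runbookPath and $WebServiceEndpoint are placeholders):

$existing = Get-SmaRunbook -Name $runbookName -WebServiceEndpoint $WebServiceEndpoint -ErrorAction SilentlyContinue
if (-not $existing)
{
    # New runbook: import the .ps1 file as a draft
    Import-SmaRunbook -Path $runbookPath -WebServiceEndpoint $WebServiceEndpoint | Out-Null
}
else
{
    # Existing runbook: overwrite the draft with the updated file
    Edit-SmaRunbook -Path $runbookPath -Name $runbookName -WebServiceEndpoint $WebServiceEndpoint -Overwrite
}

# Promote the draft to become the new published version
Publish-SmaRunbook -Name $runbookName -WebServiceEndpoint $WebServiceEndpoint | Out-Null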

Hopefully this will get you a little bit closer in the quest for a sane method of publishing runbooks in a controlled fashion. You might also be interested in another CI/Version Control/SMA post I wrote a while back.


My little Virtual Machine Manager Add-In: VMM Extensions

I really like VMM 2012R2. I like the console. I like the PowerShell support, although it can err on the complex side for some operations (like, creating a VM). However, there is some day-to-day stuff that’s simply missing from the GUI. I decided to try and create a small Add-In that would mitigate some of these shortcomings.

Information like a VM's IP address, its path on the filesystem and whether or not it has an ISO mounted is helpful to have when you're managing tons of VMs using VMM. If you're a PowerShell user all that info is in there, it's just not exposed in a very nice way in the VMM console. So, my add-in does the following:

1. It sets up a few custom properties for VMs: VMPath, ISO and IP Address (if they’re missing) – this actually happens the first time you click the button.

2. Then, when you click the "Get VM Paths" button it will retrieve that information and update the custom properties on that VM. All you have to do is add those fields to your default VM view and all should be good. Here's how it should look:
[Screenshot: the custom properties shown in the VMM console VM view]
And here’s the button to click in order to update a VM’s info:
[Screenshot: the "Get VM Paths" button in the VMM console]
The button supports selecting multiple VMs.

BTW: We actually have an SMA job which updates this info for every single VM in our environment, so we only need to hit the button if we suspect that the information is outdated.

Also note that the IP address shown is the first IP address from the first NIC as reported by the Get-SCVirtualNetworkAdapter cmdlet.
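
If you want to do something similar from plain PowerShell (in that SMA job, for instance), the gist of it could look like this sketch, assuming the VMM cmdlets are loaded and the custom property already exists (the VM and property names are placeholders):

$vm = Get-SCVirtualMachine -Name "MyVM01"
$ipProperty = Get-SCCustomProperty -Name "IP Address"

# First IP of the first NIC, same as the add-in reports
$firstIp = (Get-SCVirtualNetworkAdapter -VM $vm | Select-Object -First 1).IPv4Addresses | Select-Object -First 1

Set-SCCustomPropertyValue -InputObject $vm -CustomProperty $ipProperty -Value $firstIp | Out-Null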

Download the zip file here, and just head to Settings–>Console Add-ins in your VMM console to install it.

The entire solution is free for you to download if you want to create something similar on your own: https://github.com/trondhindenes/vmm-extensions


SMA-CI: Continuous Integration adventures with System Center Service Management Automation – part 2

This is part 2. Part 1 is here: http://wp.me/p1FgqH-bJ

So, to pick up where I left off: I wanted to see if it's possible to add a bit of continuous integration to the process of managing SMA runbooks. SMA ships with PowerShell cmdlets for importing, exporting and other activities, so it shouldn't be hard.

As far as version control systems go, the basic rule is: if it's in the file system, you can version it. Which means that all I need is to come up with a way of taking my runbooks on file and pushing them to SMA in a controlled fashion (I'll get into the VCS part later).

This is the folder structure I decided on. The “root” folder contains a folder for each of the SMA artifact types, such as runbooks, variables, schedules and so on. Right now, I’ve only implemented runbooks.

Then, I built a simple script for pushing these runbook files into SMA. If the runbook already exists in SMA, it will do a check to see if they are identical. If they are, SMA is already running the latest version of the runbook – and if not, the script will push the updated version to SMA. Here’s the code for that script: https://github.com/trondhindenes/PowershellModules/blob/master/SMAstuff/SMA-CI-Import.ps1

Also, I added a JSON config file to hold a list of all my SMA environments. The idea is that you have at least two separate SMA installs, one for testing and one for production. The JSON file goes in the same folder as the SMA-CI-Import script, and looks like this:
{
    "EnvironmentName": "Default",
    "EnvironmentType": "Production",
    "WebServiceUrl": "https://sma-sma.smatest.hindenes.com"
}

This lets me simply reference an SMA install by name instead of having to remember the web service URL. You get it.
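
Inside the import script, resolving that name back to a URL can be as simple as this (a sketch; the file name and the $EnvironmentName parameter are placeholders):

$environments = Get-Content -Path "$PSScriptRoot\SmaEnvironments.json" -Raw | ConvertFrom-Json
$environment = $environments | where {$_.EnvironmentName -eq $EnvironmentName}
$WebServiceEndpoint = $environment.WebServiceUrl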

So, all this does is allow me to add a new runbook script to the runbooks folder and then run the SMA-ci-import script using something like:

C:\Scripts\SMA-CI-Control\SMA-ci-import.ps1 -RootFolder Scriptrootfolder

…and all changes are pushed straight into SMA and published (note that there are several parameters I'm not using here).

Now that we have a working script for pushing changes from the filesystem into SMA, the rest is relatively easy. First, we need to create a Git repo which will contain the script root folder. The idea is that every developer on your team has an instance of this Git repo on their workstation. If you're new to Git, GitHub for Windows kind of makes all the pain go away. I'm using Atlassian's SourceTree myself. Still, the best thing for learning is to use Git from the cmd window or from PowerShell to get a feel for it. It's really not hard to do basic stuff like committing, pushing and pulling.

So, at this point you have a working folder, and if you've set up Git correctly (I won't go into details about this, as there are 1000's of online tutorials to get started), any change you commit and push locally ends up in the cloud somewhere – here's my example rootfolder inside GitHub:

The next part is to have some kind of magic automatically update SMA whenever you've done a commit. Bear in mind though – this isn't the way to do it in prod. You don't want every random commit to be saved into SMA directly. You would probably implement branching inside your Git repo, and possibly implement some kind of code review process in addition to automated testing. I'm just goofing around here, so don't do it in prod the way I do it in my lab, mkay?

Anyway, what we need is a CI server. JetBrains TeamCity is a good choice, as it has native support for executing PowerShell and it runs on Windows Server 2012R2 without a fuss. You can install this on the SMA server or on a separate server (install it on a separate server). Whatever you do, just be sure to install the SMA PowerShell cmdlets on that server, as TeamCity is gonna need those.

After nexting through the TeamCity install (again, there are 1000's of blogs detailing the product), set up a new project. I called mine "smatest" and set up a "build" for it (I should really call it "deploy", not "build", but whatever). This can be done almost automatically simply by pointing to the GitHub repo you created earlier.

So, the idea is that TeamCity polls the github repo for updates every so often. If there are any, it will download them (again, using Git), and invoke the script I wrote earlier against the temporary folder where the git repo is downloaded to – which will in turn push those changes to SMA.

As you can see in the screenshot, I have a very simple deploy pipeline defined. One VCS setting (that’s the github repo details), one trigger and a build step which is running my PowerShell file.

The trigger will simply kick off a build (or deploy, really) every time a new commit is made on the Github repo.

The PowerShell script is defined as such:

The rootfolder argument (you only see the % sign in the picture) is %teamcity.build.workingDir%, which is an internal TeamCity placeholder for the folder the Git repo is downloaded to.
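
In other words, the build step effectively ends up running something like this (same script as above, just with the TeamCity placeholder as the root folder):

C:\Scripts\SMA-CI-Control\SMA-ci-import.ps1 -RootFolder %teamcity.build.workingDir%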

So, let's see this guy in action. First, I add a new script to the repo folder on my laptop:

Switching to SourceTree (or Github for Windows), the added file is picked up. I add it to the repo (this is how I tell Git to track this file), and commit+push the changes to my repo in the cloud:

After a few seconds, SourceTree has pushed the commit:

After max 60 seconds, my TeamCity server should pick up the change and kick off its build process:

And lo and behold, it executed successfully:

And finally, my WAP console shows that SMA received the new runbook:

Now, this might seem like an overly complex solution to a fairly simple problem. Also, notice that I didn't include any testing in this process – a proper setup would be to have TeamCity first deploy the scripts to a test environment and actually kick off any changed runbooks before verifying the outcome. Only then would the new/updated runbook(s) be deployed to the production SMA. Many folks also use code review processes based around tools such as Gerrit, in which one or two fellow developers need to sign off on (or like) your code before the CI server will deploy it.

Anyways. This is how software development works. Everything belongs in a repo, be it TFS or Git or Mercurial or something I've never heard of. Stuff is tested, controlled, reviewed and most importantly – the whole pipeline is automated where possible. So that you can focus on writing great automation scripts with the certainty that problems can be eliminated as quickly as they were introduced.

SMA-CI: Continuous Integration adventures with System Center Service Management Automation – part 1

This is part 1. Part 2 is here: http://wp.me/p1FgqH-bO

If you've read this blog for a while, you know that I'm not a huge fan of the "original" System Center Orchestrator (Opalis) product. Its promise is sound, but the execution leaves a bit to be desired. Actually, a colleague asked me the other day during a lunch discussion: "Trond, do you hate System Center?". I told him no, but it's definitely a product suite which has some excellent bits, and some that are not so excellent.

In any case, fast-forward to today and Microsoft’s newest addition to the relative chaos that is System Center; Service Management Automation (SMA). It’s installed from the Orchestrator media, although it doesn’t share any code with Orchestrator. I guess they had to put it somewhere, and there was room on the CD.

SMA can be used as a stand-alone product with not much required besides a SQL Server database, but most folks plug it into the Windows Azure Pack (WAP), which serves as the "GUI" for SMA. You can use WAP if you want to – you don't have to.

When it all comes down to it, SMA is a repository for automation runbooks in the form of PowerShell workflows. It also includes a library of "resources" where you can store "global" stuff such as credentials, variables and schedules. These can be used by any of your runbooks. I like this idea.

Here's the deal: the end goal of all this is to try and run our datacenters by the notion of infrastructure as code. This is why we implement PowerShell scripts. Repeatability. Self-documentation. Version control. That version control part is what we as Windows IT pros historically have not been particularly good at. The reason is logical enough; what we used to do couldn't easily be put into version control. We didn't deal with code that could be checked in or out or committed, we dealt with wizards and mouse-clicks. Configuration used to be thousands of screenshots in a Word document.

Times are changing though, and that's good. The challenge with SMA as it sits in the current 2012R2 version is that it doesn't lend itself very well to version control. For each runbook there is the notion of a "published" version and a "draft" version. A draft version can be saved/edited without disturbing the published version, and when everything's good and ready, it can be "promoted" or "published" to become the new published version.

At first glimpse this may look like some kind of simple version control, but there’s an important piece missing: The ability to roll back. There is no going back – once a draft has been published, the previous published version simply disappears.

If you plan to use SMA to host 10-20 runbooks, this is not a huge deal – the small number of runbooks should be manageable. However, imagine 100. Or 1000. It is just a matter of time before someone makes a mistake and introduces something fatal into the mix. Which is why you need to be able to roll back. Which, in turn, is part of the reason we need version control.

I won’t bore you with the specifics, but look at a typical workflow a developer works by:

Develop (change) –> Unit test (does the code work) –> Integration test (does the code work together with other code) –>deploy (to test or production)

In SMA, you kinda have the ability to (manually) test your code in draft version before hitting publish, so the workflow would be something like:
Develop (change) –> Test –> Publish

The problem, of course, comes when it's time to roll back (undo) a change. In SMA, the workflow for this would be:

? –> Darn.

So, with all this in mind I set out to try and bridge this obvious gap between “true” infrastructure as code and SMA. And since you’ve probably stopped reading this rant ages ago, I’ll stop here and jump to part 2.

Measuring Exchange mailflow performance using PowerShell, SCOM, Azure VMs and MVC

Ouch, another way-too-long blog title. Anyways. I’m doing sort of a summer job for my previous employer while waiting for my next project to start. Being a hosted service provider, they’re looking for ways to expose key performance metrics to their customers. For the hosted Exchange service, we looked at using the built-in reports for Exchange 2010 in System Center Operations Manager 2012SP1, which they’re using to monitor the Exchange environment. However, we found that either the reports don’t deliver any metrics the regular customer will understand, or the reports simply do not work (this seems to be a known issue with the Exchange 2010 MP for Ops Manager).

After some healthy discussion, we decided to keep it simple: measure the average speed of mail flow from the Exchange service out to the Internet, and back.

The old Exchange 2003 management pack for Ops Manager actually included functionality along these lines, but the newer versions don’t.

So, here's what we did: we deployed an Azure VM and installed the hMailServer (which is freeware) on it. hMail is a simple-to-use mail server with an excellent COM API which makes it really easy to access using PowerShell. The "plumbing" itself was done in PowerShell, and implemented using regular scheduled tasks on the Azure VM. Simply put, the scripts do the following:

1. Retrieve a list of test mailboxes (these reside on the on-prem Exchange servers)
2. For each test mailbox, log on to the mailbox (through autodiscover) using EWS and send a mail to the hMail server in Azure. Each mail is stamped with a GUID.
3. After the mail is sent, upload the details (mailbox, time sent, logon time, GUID) to a database using REST calls to the MVC Web API site
4. List all emails in the hMail inbox and map each mail to the correct "test" already in the MVC Web API database (the GUID is used for mapping).
5. Update the "test" with the received time

A custom SCOM management pack using PowerShell-based rules and monitors is used to discover the test mailboxes and retrieve stats for each of them. Thanks to my "weather forecasting" management pack we already had the basic skeleton of a PowerShell-driven management pack in place. A quick search+replace on object names and then some editing in the SCOM 2007 R2 Authoring Console was all it took (except for creating new PowerShell scripts for discovery, of course).

I won't paste any code here since this was a paid task and the MP and code are basically the property of my employer, but you get the idea. However, I'll post a few gotchas:

1. The MVC4 Web API implementation of its REST interface gave us some trouble around date handling. It might be because we throw nb-NO locale'd dates at it while the Azure web site runs on a server with en-US locale, but it turned out that formatting datetimes in PowerShell using

Get-Date -Format s

was the way to go.

2. Also, when pushing data using Invoke-RestMethod from PowerShell we saw some errors until we added

-ContentType "application/json"

as a parameter (I would think PowerShell would be smart enough to do that on its own, but that's just me).
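
Put together, the upload call ended up looking roughly like the sketch below (the URI and property names are made up for illustration):

$test = @{
    Mailbox  = "testuser01@contoso.com"
    TimeSent = (Get-Date -Format s)   # sortable, locale-independent datetime
    Guid     = [guid]::NewGuid().ToString()
}

Invoke-RestMethod -Uri "https://example.azurewebsites.net/api/mailflowtests" `
    -Method Post `
    -Body ($test | ConvertTo-Json) `
    -ContentType "application/json"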

3. Using autodiscover with EWS takes a long time! This Exchange service uses redirects, which easily take up to 30 seconds. This slowed the scripts down considerably, so we built in support for both autodiscover and a "hard-coded" EWS address in the EWS module functions.

It also paid off to separate the code into several modules. We built an "ews" module to handle all communication with the Exchange web service, and another, "hmail", for talking to hMail through COM. The rest was put in a separate "mailflowtest" module which referenced the first two.

Finally, we added some monitoring in ops manager to check that the Azure web site is up and running at all times, and that the tests returned from the webservice are always fresh. From conception to finished in about 10 days of work using PowerShell, System Center, Visual Studio and a little bit of cloud computing magic. Fun!

Notes on System Center Virtual Machine Manager 2012R2 Preview

I’ve upgraded my lab to the preview versions of Windows Server 2012R2 and VMM 2012R2. Here are a couple of notes on VMM 2012R2 Preview so far:

All in all, it's very similar to VMM 2012 SP1, as expected. One menu item that stands out is the addition of the "Network Service" node in Fabric–>Networking. This is where you'll add gateway devices and IPAM servers. It is also possible to add a Windows Server computer as a gateway device, functionality that was not available publicly in SP1.
[Screenshot: the Network Service node in the VMM console]

It’s also now possible to perform bare-metal deployment of not only Hyper-V hosts but scale-out file server clusters directly from within VMM.

Windows Server 2012R2 also brings a new “Generation 2 VM” which boots from SCSI (as opposed to IDE) natively. This enables online resizing of VM disks, which is a very welcome improvement. The same G2 VMs also enable PXE boot from the native virtual NIC, which means that there’s no need for legacy NICs anymore.

Interestingly, I couldn’t find a way to choose to create a G1 or G2 VM from VMM, and the default options still include IDE-based disks – which leads me to believe that the VMs created by VMM are G1 VMs. The Hyper-V console, on the other hand, is able to create G2 VMs:
[Screenshot: creating a Generation 2 VM in the Hyper-V console]

The new settings for Live Migration compression and SMB transport can also be controlled from within VMM, and compression gives a noticeable burst in Live Migration speed, even on my small lab VMs. Nice stuff.
[Screenshot: Live Migration settings in the VMM console]

So, those are my initial notes on VMM 2012R2. Nothing ground-breaking, just a nice evolution over the SP1 version. Just the way we like it :-)

Simplifying the “Create new VM” PowerShell scripts

If you're at all interested in automating the tasks you perform in the VMM 2012 SP1 GUI, you probably appreciate the "View Script" button as much as I do. However, even if you have your VM templates all set up and ready to go, and basically just "Next" through the "New VM" wizard, the generated script ends up being 57(!) lines long (including whitespace). It's butt-ugly, quite frankly.

So, I wanted to find out how little code could be used to successfully create a VM based on a template. First stop: PowerShell ISE.

If you have the VMM console installed on the same machine as your ISE, you can simply load the VirtualMachineManager modules in the ISE console to have a look at them:

Get-Module -ListAvailable Virtualmach* | Import-Module

From there, you can view the CmdLets in the built-in help. You need to hit “Refresh” first:
[Screenshot: the CmdLet help pane in PowerShell ISE]
These “tabs”, like “NewStoredVmFromHardwareProfile” are so-called parameter sets. They are used in CmdLets to make sure that you pass the required parameters to that CmdLet. For instance, if you pass parameter “foo”, you also need to pass parameter “bar”. On the other hand, if you pass parameter “flunk” then that’s all you need. So, these parameter sets represent groups of parameters that you can pass the CmdLet. More on Parameter Sets can be found here.

Anyway, the one called NewVmFromVmConfig seems to be the "cheapest" one to use; it only requires two parameters (the ones with a star):
[Screenshot: the NewVmFromVmConfig parameter set]

This is also the one used in the script VMM spits out when you hit the “View Script” button. The “Name” parameter should be fairly straight forward. The “VMConfiguration” parameter less so. VMConfiguration basically represents the config of the VM you wish to create (duh), and you can create it from a VM Template.

If you have a template with everything in it (run-as accounts, join domain details, product keys and whatnot), the VMConfiguration part is simple. So, from the 57 lines of ugliness of the generated VMM script, I ended up with a function which is really only 6 lines of code. Mind you, this will only work if your template is already configured with all the settings VMM expects to find to be able to fire up your new VM.

Here’s the function I ended up with:

Function Deploy-SCVMSimple
    {
        [CmdletBinding()]
        Param(
            [String]$VMName,
            [String]$cloud,
            [String]$Template
            )

        $TemplateObj = Get-SCVMTemplate -All | where { $_.Name -eq $Template }
        $virtualMachineConfiguration = New-SCVMConfiguration -VMTemplate $TemplateObj -Name $VMName
        $cloudObj = Get-SCCloud -Name $cloud

        Write-Verbose "Creating VM $VMName in cloud $cloud"
        New-SCVirtualMachine -Name $VMName -VMConfiguration $virtualMachineConfiguration -Cloud $cloudObj -ComputerName $VMName | Out-Null

        #return object
        Get-SCVirtualMachine -name $VMName
    }

And you can run it like this:

Deploy-SCVMSimple -VMName "TestComputer9" -cloud "VDI Computers" -Template "VDI Template V2" -Verbose

Quite a nice improvement!
Oh, by the way: this script does not do any validation of the parameters you pass it, so expect bad things to happen if you start using this in production as-is. It is only meant as an exercise.

Getting SCOM ready for the big screen

When I worked as a consultant, the number one request from customers running System Center Operations Manager was "We need to get SCOM alerts on our wall screens". It is easier said than done. I used to reply to my customers: "SCOM isn't a wall-type product. SCOM is a system that looks for every mistake you've made on every system in operation, and it WILL find them. Consider it a tool to help you troubleshoot and maintain your systems, not a helpdesk-info type tool". The notion of using a monitoring tool as something more than just presenting alerts on a screen is unusual to most folks, but still – that's what SCOM is. An extremely powerful tool that should live on your second (or third) screen. It's not meant to be a static "here's your info" deal, it's meant to be tweaked and clicked and sliced and diced. Simply put, SCOM is a product for work, not watching.

That said, it would be nice to get the most critical alerts up on the big screen, so that helpdesk can be made aware that something is cooking with this or that server – for example if a SAN or Exchange or something super-critical goes down or has issues. Built into SCOM is the idea of alert severity and alert priority. You can use these to "bump down" and "bump up" various alerts to indicate what's important and what's not. Before you start, let me tell you: you'll spend weeks bumping down every critical alert to warning, and every high priority to medium. Which might make sense for that wall-screen idea, but it reduces the overall value of SCOM, because you'll be tailoring your alert levels AWAY from the folks that really need them – the admins and IT pros that need them to quickly get an idea of what's going on. So, that path is a dark one. Be warned.

But there is a solution, and that is to abstract the whole "wall screen" classification away from the core alert properties. This weekend, Microsoft posted a tool called the "Alert Update Connector", which scans new alerts in real time and uses rules to decide whether or not to "tag" these alerts with extra info. This could for instance be a custom property (a SCOM alert has 10 of these, and the recommendation is to avoid using 5 through 10, since the Exchange correlation engine might use them). From there, you can create custom views that show only open alerts with some predefined text in Custom Property 3. Right now you should be going "big deal, you can use System Center Orchestrator for that", and that's true. However, the Alert Update Connector has a very nicely laid-out ruleset in the form of an XML file which is easy to manipulate using the built-in config tool and also PowerShell. Or any way you want, really. Also, by running the connector on the Management Server itself I'm reducing the risk of stuff going wrong. Me and Orchestrator aren't the best of buds, as you might have read.

So that was easy.

The next challenge for us was to make an easy solution for deciding which alerts get sent to the wall screen. The idea is that an operator or admin can scan through closed alerts and go "Oh! That's important! We sure want to get that one on the screen next time it happens!", then just mark the alert and click a button (or fire a console task, in SCOM terms). The Alert Update Connector config UI kind of does this, but it requires you to open a tool, load a file, select the params you want to set, save the file and bump the connector service for the new rules to take effect. Cumbersome. With some PowerShell scripting magic, and a little bit of help from James Brundage's excellent PipeWorks module, we did the following:

1. Created a console task in SCOM that takes the ID of the selected alert
2. The task fires a PowerShell script that does the following:
2.1: Loads the connector's XML config file from a predefined file share
2.2: Gets the alert ID, and queries a web service which returns the alert's monitoring rule (the rule/monitor that fired the alert)
2.3: Updates the XML file with this info and saves it
2.4: Bumps the connector service using WMI (a sketch of this step follows below)
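
Step 2.4 could look something like this (the $ManagementServer variable and the service name are placeholders – use whatever the connector service is actually called in your environment):

# Restart the connector service remotely so it re-reads its config file
# ('AlertUpdateConnector' is a placeholder service name)
$service = Get-WmiObject -Class Win32_Service -ComputerName $ManagementServer -Filter "Name='AlertUpdateConnector'"
$service.StopService() | Out-Null
Start-Sleep -Seconds 5
$service.StartService() | Out-Null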

Now you ask: why not use "regular" PSRemoting or just load the OperationsManager module in step 2.2? Well, I wanted to make sure this task runs quickly. PSRemoting is great, but over a WAN it can be slow. It can even time out (especially on PowerShell 2.0 – I'm hoping PS 3.0 is more stable here). So instead, we used PipeWorks to create a web service. In terms of speed, the remoting version of the script took about one minute to run, whereas the web service version takes around 5 seconds. Well worth the extra "hop". An alternative to this would be to just load the local OpsMgr PowerShell module, but again – I don't know at runtime whether it's installed or not.

As for WMI in step 2.4, we need to make sure the script runs successfully whether it’s run on the SCOM Management Server or not. WMI works consistently across computers.

Since we’re using Omnivex for our wall screens, The console task is simply called “Include Alert on Omnivex Display” and it works great. And I even learnt a thing or two about XML manipulation using PowerShell. Win-win!

I have not built any logic into checking if a rule already exists in the config file, which I really should. I also need to create some logic for removing rules from the xml file, probably in the form of another similar console task.

In case you’re wondering how the PipeWorks module helped us, Stefan Stranger has already written an incredibly good blog post about it here: http://blogs.technet.com/b/stefan_stranger/archive/2012/05/29/creating-a-system-center-2012-operations-manager-alert-rss-feed.aspx. It basically transforms a PowerShell function into a webservice (try “asxml” instead of “asrss” in Stefan’s example). On Windows Server 2012 this is what we’ll use Management OData for, but until then, the Pipeworks version works wonders (more on OData services here: http://csharpening.net/?p=1141)

So, that’s it. There’s no big scripty magic to any of the stuff I’ve done here, so I don’t think I’ll bother posting any code, but let me know if you want me to. I’m just a bit lazy at the moment.