Category Archives: System Center

My little Virtual Machine Manager Add-In: VMM Extensions

I really like VMM 2012R2. I like the console. I like the PowerShell support, although it can err on the complex side for some operations (like creating a VM). However, there is some day-to-day stuff that’s simply missing from the GUI. I decided to try and create a small Add-In that would mitigate some of these shortcomings.

Information like a VM’s IP address, its path on the filesystem and whether or not it has an ISO mounted is helpful to have when you’re managing tons of VMs using VMM. If you’re a PowerShell user all that info is in there; it’s just not exposed in a very nice way in the VMM console. So, my add-in does the following:

1. It sets up a few custom properties for VMs: VMPath, ISO and IP Address (if they’re missing) – this actually happens the first time you click the button.

2. Then, when you click the “Get VM Paths” button it will retrieve that information and update the custom properties on that VM. All you have to do is add those fields to your default VM view and all should be good. Here’s how it should look:
And here’s the button to click in order to update a VM’s info:
The button supports selecting multiple VMs.
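For those who’d rather script it than install the add-in, the gist of what the button does can be sketched with the VMM cmdlets. This is my rough equivalent, not the add-in’s actual code; the property names match the ones above:

```powershell
# Rough sketch of what the "Get VM Paths" button does (not the actual add-in code).
# Assumes the VMM console/PowerShell module is installed and connected to a VMM server.
foreach ($name in 'VMPath', 'ISO', 'IP Address') {
    # Create the custom property on first use, if it's missing
    if (-not (Get-SCCustomProperty -Name $name -ErrorAction SilentlyContinue)) {
        New-SCCustomProperty -Name $name -AddMember VM | Out-Null
    }
}

foreach ($vm in Get-SCVirtualMachine) {
    # First IP address of the first NIC, as reported by Get-SCVirtualNetworkAdapter
    $ip  = @(@(Get-SCVirtualNetworkAdapter -VM $vm)[0].IPv4Addresses)[0]
    # Any DVD drive with an ISO attached?
    $iso = [bool](Get-SCVirtualDVDDrive -VM $vm | Where-Object { $_.ISO })

    foreach ($pair in @{ 'VMPath' = $vm.Location; 'ISO' = $iso; 'IP Address' = $ip }.GetEnumerator()) {
        $prop = Get-SCCustomProperty -Name $pair.Key
        Set-SCCustomPropertyValue -InputObject $vm -CustomProperty $prop -Value "$($pair.Value)" | Out-Null
    }
}
```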

BTW: We actually have an SMA job which updates this info for every single VM in our environment, so we only need to hit the button if we suspect that the information is outdated.

Also note that the IP address shown is the first IP address from the first NIC as reported by the Get-SCVirtualNetworkAdapter cmdlet.

Download the zip file here, and just head to Settings–>Console Add-ins in your VMM console to install it.

The entire solution is free for you to download if you want to create something similar on your own:





SMA-CI: Continuous Integration adventures with System Center Service Management Automation – part 2

This is part 2. Part 1 is here:

So, to pick up where I left off: I wanted to see if it’s possible to add a bit of continuous integration to the process of managing SMA runbooks. SMA ships with PowerShell cmdlets for importing, exporting and other activities, so it shouldn’t be hard.

As far as version control systems go, the basic rule is: if it’s in the file system, you can version it. Which means that all I need is to come up with a way of taking my runbooks on file and pushing them to SMA in a controlled fashion (I’ll get into the VCS part later).

This is the folder structure I decided on. The “root” folder contains a folder for each of the SMA artifact types, such as runbooks, variables, schedules and so on. Right now, I’ve only implemented runbooks.

Then, I built a simple script for pushing these runbook files into SMA. If the runbook already exists in SMA, it will do a check to see if they are identical. If they are, SMA is already running the latest version of the runbook – and if not, the script will push the updated version to SMA. Here’s the code for that script:
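In outline, the compare-and-push logic looks roughly like this. This is a minimal sketch using the SMA cmdlets, not the original script; the parameter names are mine:

```powershell
# Minimal sketch of the compare-and-push logic (not the original SMA-CI-Import script).
# Assumes the SMA PowerShell module is installed.
param(
    [Parameter(Mandatory = $true)][string]$RootFolder,
    [Parameter(Mandatory = $true)][string]$WebServiceEndpoint
)

Get-ChildItem -Path (Join-Path $RootFolder 'Runbooks') -Filter *.ps1 | ForEach-Object {
    $name  = $_.BaseName
    $local = Get-Content -Path $_.FullName -Raw

    $existing = Get-SmaRunbook -WebServiceEndpoint $WebServiceEndpoint -Name $name -ErrorAction SilentlyContinue
    if ($existing) {
        # Compare the file against the currently published definition
        $published = (Get-SmaRunbookDefinition -WebServiceEndpoint $WebServiceEndpoint `
                        -Name $name -Type Published -ErrorAction SilentlyContinue).Content
        if ($published -eq $local) { return }   # SMA already runs the latest version
        Edit-SmaRunbook -WebServiceEndpoint $WebServiceEndpoint -Name $name -Path $_.FullName -Overwrite | Out-Null
    }
    else {
        Import-SmaRunbook -WebServiceEndpoint $WebServiceEndpoint -Path $_.FullName | Out-Null
    }
    Publish-SmaRunbook -WebServiceEndpoint $WebServiceEndpoint -Name $name | Out-Null
}
```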

Also, I added a JSON config file to hold a list of all my SMA environments. The idea is that you have at least two separate SMA installs, one for testing and one for production. The JSON file goes in the same folder as the SMA-CI-Import script, and each environment entry consists of:

"EnvironmentName": "Default",
"EnvironmentType": "Production",
"WebServiceUrl": "…"

This lets me simply reference an SMA install by name instead of having to remember the web service URL. You get it.
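Resolving an environment then becomes a one-liner against the parsed JSON. A self-contained sketch (the sample values below are made up):

```powershell
# Sketch: the config file is a JSON array of environments; look one up by name.
# Sample names and URLs here are made up, not from the real config.
$configJson = @'
[
  { "EnvironmentName": "Default", "EnvironmentType": "Production", "WebServiceUrl": "https://sma-prod:9090" },
  { "EnvironmentName": "Test",    "EnvironmentType": "Test",       "WebServiceUrl": "https://sma-test:9090" }
]
'@

$environments = $configJson | ConvertFrom-Json
$target = $environments | Where-Object { $_.EnvironmentName -eq 'Test' }
$target.WebServiceUrl   # use this as the web service endpoint in the SMA cmdlets
```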

So, all this does is that it allows me to add a new runbook script to the runbooks folder, and then run the SMA-ci-import script using something like:

C:\Scripts\SMA-CI-Control\SMA-ci-import.ps1 -RootFolder Scriptrootfolder
– and all changes are pushed straight into SMA and published (note that there are several parameters I’m not using here).

Now that we have a working script for pushing changes from the filesystem into SMA, the rest is relatively easy. First, we need to create a Git repo which will contain the script root folder. The idea is that every developer on your team has an instance of this git repo on their workstation. If you’re new to Git, the GitHub for Windows client makes most of the pain go away; I’m using Atlassian’s SourceTree myself. Still, the best way to learn is to use Git from the cmd window or from PowerShell to get a feel for it. Basic stuff like committing, pushing and pulling really isn’t hard.

So, at this point you have a working folder, and if you’ve set up git correctly (I won’t go into details about this, as there are thousands of online tutorials to get you started), any change you commit and push locally ends up in the cloud somewhere – here’s my example rootfolder inside GitHub:

The next part is to have some kind of magic automatically update SMA whenever you’ve done a commit. Bear in mind though – this isn’t the way to do it in prod. You don’t want every random commit to be saved into SMA directly. You would probably implement branching inside your Git repo, and possibly some kind of code review process in addition to automated testing. I’m just goofing around here, so don’t do it in prod the way I do it in my lab, mkay?

Anyway, what we need is a CI server. JetBrains TeamCity is a good choice, as it has native support for executing PowerShell and it runs on Windows Server 2012R2 without a fuss. You can install this on the SMA server or on a separate server (install it on a separate server). Whatever you do, just be sure to install the SMA PowerShell cmdlets on that server, as TeamCity is gonna need those.

After nexting through the TeamCity install (again, there are thousands of blogs detailing the product), set up a new project. I called mine “smatest” and set up a “build” for it (I should really call it “deploy”, not “build”, but whatever). This can be done almost automatically simply by pointing to the GitHub repo you created earlier.

So, the idea is that TeamCity polls the github repo for updates every so often. If there are any, it will download them (again, using Git), and invoke the script I wrote earlier against the temporary folder where the git repo is downloaded to – which will in turn push those changes to SMA.

As you can see in the screenshot, I have a very simple deploy pipeline defined: one VCS setting (that’s the GitHub repo details), one trigger and a build step which runs my PowerShell file.

The trigger will simply kick off a build (or deploy, really) every time a new commit is made on the GitHub repo.

The PowerShell script is defined as such:
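The build step boils down to a single invocation of the import script against TeamCity’s checkout folder, something like this. This is a sketch, and I’m assuming TeamCity’s %teamcity.build.checkoutDir% parameter is the placeholder shown in the screenshot:

```powershell
# Sketch of the TeamCity PowerShell build step (not a copy of the screenshot).
# TeamCity expands %teamcity.build.checkoutDir% to the folder the git repo is checked out to.
& "%teamcity.build.checkoutDir%\SMA-ci-import.ps1" -RootFolder "%teamcity.build.checkoutDir%"
```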

The rootfolder argument (you only see the % sign in the picture) is a TeamCity parameter reference – an internal placeholder for the folder the git repo is downloaded to.

So, let’s see this guy in action. First, I add a new script to the repo folder on my laptop:

Switching to SourceTree (or Github for Windows), the added file is picked up. I add it to the repo (this is how I tell Git to track this file), and commit+push the changes to my repo in the cloud:

After a few seconds, SourceTree has pushed the commit:

After max 60 seconds, my TeamCity server should pick up the change and kick off its build process:

And lo and behold, it executed successfully:

And finally, my WAP console shows that SMA received the new runbook:

Now, this might seem like an overly complex solution to a fairly simple problem. Also, notice that I didn’t include any testing in this process – a proper setup would have TeamCity first deploy the scripts in a test environment and actually kick off any changed runbooks before verifying the outcome. Only then would the new/updated runbook(s) be deployed to the production SMA. Many folks are also using code review processes based around tools such as Gerrit, in which one or two fellow developers need to sign off on (or like) your code before the CI server will deploy it.

Anyways. This is how software development works. Everything belongs in a repo, be it TFS or git or mercurial or something I’ve never heard of. Stuff is tested, controlled, reviewed and, most importantly, the whole pipeline is automated where possible – so that you can focus on writing great automation scripts with the certainty that problems can be eliminated as quickly as they were introduced.

SMA-CI: Continuous Integration adventures with System Center Service Management Automation – part 1

This is part 1. Part 2 is here:

If you’ve read this blog for a while, you know that I’m not a huge fan of the “original” System Center Orchestrator (Opalis) product. Its promise is sound, but the execution leaves a bit to be desired. Actually, a colleague asked me the other day during a lunch discussion: “Trond, do you hate System Center?”. I told him no, but it’s definitely a product suite which has some excellent bits, and some that are not so excellent.

In any case, fast-forward to today and Microsoft’s newest addition to the relative chaos that is System Center; Service Management Automation (SMA). It’s installed from the Orchestrator media, although it doesn’t share any code with Orchestrator. I guess they had to put it somewhere, and there was room on the CD.

SMA can be used as a stand-alone product with not much required besides a SQL Server database, but most folks plug it into the Windows Azure Pack (WAP), which serves as the “GUI” for SMA – if you want one, that is. You don’t have to.

When it all comes down to it, SMA is a repository for automation runbooks in the form of PowerShell workflows. It also includes a library of “resources” where you can store “global” stuff such as credentials, variables and schedules. These can be used by any of your runbooks. I like this idea.

Here’s the deal: the end goal of all this is to try and run our datacenters by the notion of infrastructure as code. This is why we implement PowerShell scripts. Repeatability. Self-documentation. Version control. That version control part is what we as Windows IT Pros historically have not been particularly good at. The reason is logical enough: what we used to do couldn’t easily be put into version control. We didn’t deal with code that could be checked in or out or committed, we dealt with wizards and mouse-clicks. Configuration used to be thousands of screenshots in a Word document.

Times are changing though, and it’s good. The challenge with SMA as it sits in the current 2012R2 version is that it doesn’t lend itself very well to version control. For each Runbook there is the notion of a “published” version and a “draft” version. A draft version can be saved/edited without disturbing the published version, and when everything’s good and ready, it can be “promoted” or “published” to become the new published version.

At first glance this may look like some kind of simple version control, but there’s an important piece missing: the ability to roll back. There is no going back – once a draft has been published, the previous published version simply disappears.
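There’s no built-in undo, but you can at least create your own rollback point by exporting the published definition to disk before publishing a new draft. A sketch using the SMA cmdlets (the endpoint, paths and runbook name are examples):

```powershell
# Grab the currently published version of a runbook before overwriting it,
# so there is at least something to roll back to. All names here are examples.
$endpoint = 'https://sma-server:9090'
$runbook  = 'My-Runbook'

$published = Get-SmaRunbookDefinition -WebServiceEndpoint $endpoint -Name $runbook -Type Published
$published.Content | Set-Content -Path "C:\Backup\$runbook-$(Get-Date -Format yyyyMMddHHmmss).ps1"

# Only now promote the draft to published
Publish-SmaRunbook -WebServiceEndpoint $endpoint -Name $runbook
```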

If you plan to use SMA to host 10-20 runbooks, this is not a huge deal – the small number of runbooks should be manageable. However, imagine 100. Or 1000. It is just a matter of time before someone makes a mistake and introduces something fatal into the mix. Which is why you need to be able to roll back. Which, in turn, is part of the reason we need version control.

I won’t bore you with the specifics, but look at the typical workflow a developer works by:

Develop (change) –> Unit test (does the code work) –> Integration test (does the code work together with other code) –>deploy (to test or production)

In SMA, you kinda have the ability to (manually) test your code in draft version before hitting publish, so the workflow would be something like:
Develop (change) –> Test –> Publish

The problem, of course, comes when it’s time to roll back (undo) a change. In SMA, the workflow for this would be:

? –> Darn.

So, with all this in mind I set out to try and bridge this obvious gap between “true” infrastructure as code and SMA. And since you’ve probably stopped reading this rant ages ago, I’ll stop here and jump to part 2.

Measuring Exchange mailflow performance using PowerShell, SCOM, Azure VMs and MVC

Ouch, another way-too-long blog title. Anyways. I’m doing sort of a summer job for my previous employer while waiting for my next project to start. Being a hosted service provider, they’re looking for ways to expose key performance metrics to their customers. For the hosted Exchange service, we looked at using the built-in reports for Exchange 2010 in System Center Operations Manager 2012 SP1, which they’re using to monitor the Exchange environment. However, we found that either the reports don’t deliver any metrics the regular customer will understand, or the reports simply do not work (this seems to be a known issue with the Exchange 2010 MP for Ops Manager).

After some healthy discussion, we decided to keep it simple: measure the average speed of mail flow from the Exchange service out to the Internet, and back.

The old Exchange 2003 management pack for Ops Manager actually included functionality along these lines, but the newer versions don’t.

So, here’s what we did: we deployed an Azure VM and installed the hMailServer mail server (which is freeware) on it. hMail is a simple-to-use mail server with an excellent COM API, which makes it really easy to access using PowerShell. The “plumbing” itself was done in PowerShell and implemented using regular scheduled tasks on the Azure VM. Simply put, the scripts do the following:

1. Retrieve a list of test mailboxes (these reside on the on-prem Exchange servers)
2. For each test mailbox, log on to the mailbox (through Autodiscover) using EWS and send a mail to the hMail server in Azure. Each mail is stamped with a GUID.
3. After the mail is sent, upload the details (mailbox, time sent, logon time, GUID) to a database using REST calls to the MVC Web API site
4. List all emails in the hMail inbox and map each mail to the correct “test” already in the MVC Web API database (the GUID is used for mapping).
5. Update the “test” with the received time
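To give a feel for step 4, here’s a rough sketch of reading the hMail inbox through COM and mapping mails back to tests. All names, credentials and URLs below are made-up examples; this is not the production code:

```powershell
# Sketch: list messages in an hMailServer inbox via its COM API and map each one
# back to a test record in the Web API. Runs on the hMailServer machine itself.
$hmail = New-Object -ComObject 'hMailServer.Application'
$hmail.Authenticate('Administrator', 'password')   # example credentials

$account = $hmail.Domains.ItemByName('example.com').Accounts.ItemByAddress('probe@example.com')
$inbox   = $account.IMAPFolders.ItemByName('INBOX')

for ($i = 0; $i -lt $inbox.Messages.Count; $i++) {
    $msg  = $inbox.Messages.Item($i)
    $guid = $msg.Subject   # assumption for the sketch: the GUID stamp is the subject

    # Update the matching "test" record with the received time (URL is an example)
    Invoke-RestMethod -Uri "https://example.azurewebsites.net/api/tests/$guid" -Method Put `
        -ContentType 'application/json' `
        -Body (@{ Received = (Get-Date -Format s) } | ConvertTo-Json)
}
```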

A custom SCOM management pack using PowerShell-based rules and monitors is used to discover the test mailboxes and retrieve stats for each test mailbox. Thanks to my “weather forecasting” management pack we already had the basic skeleton of a PowerShell-driven management pack in place. A quick search+replace on object names and then some editing in the SCOM 2007 R2 authoring console was all it took (except for creating new PowerShell scripts for discovery, of course).

I won’t paste any code here since this was a paid task and the MP and code are basically the property of my employer, but you get the idea. However, I’ll post a few gotchas:

1. The MVC4 Web API implementation of its REST interface gave us some trouble around date handling. Might be because we throw nb-NO-formatted dates at it while the Azure web site runs on a server with an en-US locale, but it turned out that formatting datetimes in PowerShell using

get-date -format s

was the way to go.

Also, when pushing data using Invoke-RestMethod from PowerShell we saw some errors until we added

-contenttype "application/json"

as a parameter (I would think PowerShell would be smart enough to do that on its own, but that’s just me).
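Putting those two gotchas together, a locale-safe call ends up looking something like this (the URL and field names are examples):

```powershell
# Sortable (ISO 8601-style) timestamps sidestep the nb-NO vs en-US date parsing trouble.
$body = @{
    Mailbox  = 'probe01@example.com'   # example value
    TimeSent = Get-Date -Format s      # sortable format, e.g. 2013-07-09T12:49:58
} | ConvertTo-Json

# Without -ContentType, the Web API may reject or misparse the JSON body.
Invoke-RestMethod -Uri 'https://example.azurewebsites.net/api/tests' -Method Post `
    -Body $body -ContentType 'application/json'
```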

2. Using Autodiscover with EWS takes a long time! This Exchange service uses redirects, which easily take up to 30 seconds. This slowed the scripts down considerably, so we built in support for both Autodiscover and a “hard-coded” EWS address in the EWS module functions.

It also paid off to separate the code into several modules. We built an “ews” module to handle all communication with the exchange web service, and another “hmail” for talking to hmail through COM. The rest was put in a separate “mailflowtest” module which referenced the two first ones.

Finally, we added some monitoring in Ops Manager to check that the Azure web site is up and running at all times, and that the tests returned from the web service are always fresh. From conception to finished product in about 10 days of work using PowerShell, System Center, Visual Studio and a little bit of cloud computing magic. Fun!

Notes on System Center Virtual Machine Manager 2012R2 Preview

I’ve upgraded my lab to the preview versions of Windows Server 2012R2 and VMM 2012R2. Here are a couple of notes on VMM 2012R2 Preview so far:

All in all, it’s very similar to VMM 2012 SP1, as expected. One menu item that stands out is the addition of the “Network Service” node in Fabric–>Networking. This is where you’ll add gateway devices and IPAM servers. It is also possible to add a Windows Server computer as a gateway device, functionality that was not publicly available in SP1.

It’s also now possible to perform bare-metal deployment of not only Hyper-V hosts but also scale-out file server clusters directly from within VMM.

Windows Server 2012R2 also brings a new “Generation 2 VM” which boots from SCSI (as opposed to IDE) natively. This enables online resizing of VM disks, which is a very welcome improvement. The same G2 VMs also enable PXE boot from the native virtual NIC, which means that there’s no need for legacy NICs anymore.

Interestingly, I couldn’t find a way to choose between creating a G1 or G2 VM from VMM, and the default options still include IDE-based disks – which leads me to believe that the VMs created by VMM are G1 VMs. The Hyper-V console, on the other hand, is able to create G2 VMs:

The new settings for Live Migration compression and SMB transport can also be controlled from within VMM, and compression gives a noticeable boost in Live Migration speed, even on my small lab VMs. Nice stuff.

So, those are my initial notes on VMM 2012R2. Nothing ground-breaking, just a nice evolution over the SP1 version. Just the way we like it :-)

Simplifying the “Create new VM” PowerShell scripts

If you’re at all interested in automating the tasks you perform in the VMM 2012 SP1 GUI, you probably appreciate the “View Script” button as much as I do. However, even if you have your VM Templates all set up and ready to go, and basically just “Next” through the “New VM” wizard, the generated script ends up being 57(!) lines long (including whitespace). It’s butt-ugly, quite frankly.

So, I wanted to find out how little code could be used to successfully create a VM based on a template. First stop: PowerShell ISE.

If you have the VMM console installed on the same machine as your ISE, you can simply load the VirtualMachineManager modules in the ISE console to have a look at them:

Get-Module -ListAvailable Virtualmach* | Import-Module

From there, you can view the CmdLets in the built-in help. You need to hit “Refresh” first:
These “tabs”, like “NewStoredVmFromHardwareProfile”, are so-called parameter sets. They are used in cmdlets to make sure that you pass a valid combination of parameters to that cmdlet. For instance, if you pass parameter “foo”, you also need to pass parameter “bar”. On the other hand, if you pass parameter “flunk”, then that’s all you need. So, these parameter sets represent groups of parameters that you can pass to the cmdlet. More on parameter sets can be found here.
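You can also inspect the parameter sets from code instead of the ISE help pane, which is handy on a server without the ISE:

```powershell
# Inspect a cmdlet's parameter sets and their mandatory parameters from code.
# Requires the VMM module for New-SCVirtualMachine; works the same way for any cmdlet.
(Get-Command New-SCVirtualMachine).ParameterSets |
    Select-Object Name,
        @{ Name = 'MandatoryParams'
           Expression = { ($_.Parameters | Where-Object { $_.IsMandatory }).Name -join ', ' } }
```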

Anyway, the one called NewVmFromVmConfig seems to be the “cheapest” one to use; it only requires two parameters (the ones with a star):

This is also the one used in the script VMM spits out when you hit the “View Script” button. The “Name” parameter should be fairly straightforward. The “VMConfiguration” parameter less so. VMConfiguration basically represents the config of the VM you wish to create (duh), and you can create it from a VM Template.

If you have a template with everything in it (run as accounts, join domain details, product keys and what not), the VMConfiguration part is simple. So, from the 57 lines of ugliness of the generated VMM script, I ended up with a function whose core is really only 6 lines of code. Mind you, this will only work if your template is already configured with all the settings VMM expects to find to be able to fire up your new VM.

Here’s the function I ended up with:

Function Deploy-SCVMSimple
{
    [CmdletBinding()]
    param(
        [Parameter(Mandatory = $true)][string]$VMName,
        [Parameter(Mandatory = $true)][string]$Cloud,
        [Parameter(Mandatory = $true)][string]$Template
    )

    $TemplateObj = Get-SCVMTemplate -All | Where-Object { $_.Name -eq $Template }
    $virtualMachineConfiguration = New-SCVMConfiguration -VMTemplate $TemplateObj -Name $VMName
    $cloudObj = Get-SCCloud -Name $Cloud

    Write-Verbose "Creating VM $VMName in cloud $Cloud"
    New-SCVirtualMachine -Name $VMName -VMConfiguration $virtualMachineConfiguration -Cloud $cloudObj -ComputerName $VMName | Out-Null

    # Return the new VM object
    Get-SCVirtualMachine -Name $VMName
}

And you can run it like this:

Deploy-SCVMSimple -VMName "TestComputer9" -cloud "VDI Computers" -Template "VDI Template V2" -Verbose

Quite a nice improvement!
Oh, by the way: this script does not do any validation of the parameters you pass it, so expect bad things to happen if you start using this in production as-is. It is only meant as an exercise.

Getting SCOM ready for the big screen

When I worked as a consultant, the number one request from customers running System Center Operations Manager was “We need to get SCOM alerts on our wall screens”. It is easier said than done. I used to reply to my customers: “SCOM isn’t a wall-type product. SCOM is a system that looks for every mistake you’ve made on every system in operation, and it WILL find them. Consider it a tool to help you troubleshoot and maintain your systems, not a helpdesk-info type tool”. The notion of using a monitoring tool as something more than just a thing that presents alerts on a screen is unusual to most folks, but still – that’s what SCOM is. An extremely powerful tool that should live on your second (or third) screen. It’s not meant to be a static “here’s your info” deal, it’s meant to be tweaked and clicked and sliced and diced. Simply put, SCOM is a product for work, not watch.

That said, it would be nice to get the most critical alerts up on the big screen, so that helpdesk can be made aware that something is cooking with this and that server and what not – for example if a SAN or Exchange or something super-critical goes down or has issues. Built into SCOM is the idea of “Alert Severity” and “Alert Importance”. You can use these to “bump down” and “bump up” various alerts to indicate what’s important and what’s not. Before you start, let me tell you: you’ll spend weeks bumping down every critical alert to warning, and every high priority to medium. Which might make sense for that wall-screen idea, but it reduces the overall value of SCOM, because you’ll be tailoring your alert levels AWAY from the folks that really need them – the admins and IT Pros that use them to quickly get an idea of what’s going on. So, that path is a dark one. Be warned.

But there is a solution, and that is to abstract that whole “wall screen” classification away from the core alert properties. This weekend, Microsoft posted a tool called the “Alert Update Connector”, which scans new alerts in real-time and uses rules to decide whether or not to “tag” these alerts with extra info. This could for instance be a custom property (a SCOM alert has 10 of these, and the recommendation is to avoid using 5 through 10, since the Exchange Correlation engine might use them). From there, you can create custom views that show only open alerts with some predefined text in Custom Property 3. Right now you should be going “big deal, you can use System Center Orchestrator for that”, and that’s true. However, the Alert Update Connector has a very nicely laid-out ruleset in the form of an XML file which is easy to manipulate using the built-in config tool, and also with PowerShell. Or any way you want, really. Also, by running the connector on the Management Server itself I’m reducing the risk of stuff going wrong. Me and Orchestrator aren’t the best of buds, as you might have read.
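For reference, the manual version of that “tagging” is a one-liner with the OperationsManager cmdlets – the connector just automates it based on its ruleset. The rule name and tag text below are my examples:

```powershell
# Sketch: stamp a wall-screen tag into a custom field of matching open alerts.
# Uses the SCOM 2012 OperationsManager module; the filter and tag text are examples.
Import-Module OperationsManager
Get-SCOMAlert -ResolutionState 0 |
    Where-Object { $_.Name -eq 'Failed to Connect to Computer' } |   # example alert name
    Set-SCOMAlert -CustomField3 'WallScreen'
```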

So that was easy.

The next challenge for us was to make an easy solution for deciding which alerts get sent to the wall screen. The idea is that an operator or admin can scan through closed alerts and go “Oh! That’s important! We sure want to get that one on the screen next time it happens!” and then just mark the alert and click a button (or fire a console task, in SCOM terms). The Alert Update Connector config UI kinda does this, but it requires you to open a tool, load a file, select the params you want to set, save the file and bump the connector service for the new rules to take effect. Cumbersome. With some PowerShell scripting magic, and a little bit of help from James Brundage’s excellent PipeWorks module, we did the following:

1. Created a console task in SCOM that takes the ID of the selected alert
2. The task fires a powershell script that does the following:
2.1: loads the connectors xml config file from a predefined file share
2.2: Gets the alert ID, and queries a web service which returns the alert’s monitoring rule (the rule/monitor that fired the alert)
2.3: updates the xml file with this info and saves it
2.4: Bumps the connector service using WMI
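Steps 2.1 through 2.4 can be sketched roughly like this. Note that every file path, element name and URL here is illustrative – the real Alert Update Connector config schema and our PipeWorks web service look different:

```powershell
# Rough sketch of the console task script. All paths, names and URLs are made up --
# consult the Alert Update Connector documentation for the real config schema.
param([Parameter(Mandatory = $true)][string]$AlertId)

# 2.1: load the connector's XML config file from a predefined file share
$configPath = '\\fileserver\scom\AlertUpdateConnector.config.xml'
[xml]$config = Get-Content -Path $configPath

# 2.2: ask the PipeWorks-generated web service for the alert's monitoring rule
$rule = Invoke-RestMethod -Uri "http://scomtools/GetAlertRule?alertId=$AlertId"

# 2.3: append a rule element and save the file
$node = $config.CreateElement('Rule')
$node.SetAttribute('Id', [string]$rule)
$config.DocumentElement.AppendChild($node) | Out-Null
$config.Save($configPath)

# 2.4: bump the connector service over WMI so the new ruleset is picked up
$svc = Get-WmiObject -Class Win32_Service -ComputerName 'scom-ms01' -Filter "Name='AlertUpdateConnector'"
$svc.StopService()  | Out-Null
$svc.StartService() | Out-Null
```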

Now you ask: but why not use “regular” PSRemoting or just load the OperationsManager module in step 2.2? Well, I wanted to make sure this task runs quickly. PSRemoting is great, but over a WAN it can be slow. It can even time out (especially on PowerShell 2.0 – I’m hoping PS 3.0 is more stable here). So instead, we used PipeWorks to create a web service. In terms of speed, the remoting version of the script took about one minute to run, whereas the web service version takes around 5 seconds. Well worth the extra “hop”. An alternative to this would be to just load the local OpsMgr PowerShell module, but again – I don’t know at runtime whether it’s installed or not.

As for WMI in step 2.4, we need to make sure the script runs successfully whether it’s run on the SCOM Management Server or not. WMI works consistently across computers.

Since we’re using Omnivex for our wall screens, The console task is simply called “Include Alert on Omnivex Display” and it works great. And I even learnt a thing or two about XML manipulation using PowerShell. Win-win!

I have not built any logic into checking if a rule already exists in the config file, which I really should. I also need to create some logic for removing rules from the xml file, probably in the form of another similar console task.

In case you’re wondering how the PipeWorks module helped us, Stefan Stranger has already written an incredibly good blog post about it here. It basically transforms a PowerShell function into a web service (try “asxml” instead of “asrss” in Stefan’s example). On Windows Server 2012 this is what we’ll use Management OData for, but until then, the PipeWorks version works wonders (more on OData services here).

So, that’s it. There’s no big scripty magic to any of the stuff I’ve done here, so I don’t think I’ll bother posting any code, but let me know if you want me to. I’m just a bit lazy at the moment.

Regional Settings and Locale in System Center Service Manager 2012

If you’ve followed this blog for a while, you might remember the post I wrote when Operations Manager 2012 was in public beta (or was it RC?), and how the console was rendered mostly useless because of some bad choices the team made around display language. I won’t go into detail, it’s all here. Basically the bug was filed on Connect, Microsoft initially said “yeah, we’re not gonna fix that” and I literally exploded. Later I had a very interesting discussion with Vlad on the Operations Manager team about this bug and other room for improvement, and all were happy. However, if you watch closely you’ll still find some bilingualism in the Ops Manager web console, so the bug isn’t completely removed…

Anyways, fast-forward to today, where I’m tasked with (along with a gazillion other things) implementing System Center Service Manager 2012 in our org. As you may know, Service Manager and Operations Manager share some code (along with System Center Essentials), which is why I’m not surprised to find this in a Service Manager report:

See those weird words in the “Status” column with those funny Viking letters? That’s what Norwegian looks like, my friend. And while we’re very proud of our language and all that, most IT departments in Norway are simply going “English all the way”. The lingo is English, the expressions and abbreviations are English, I even comment in English in my PowerShell code. It just looks all that much cleaner. This is not the case for all non-English countries though. Germany and France, for instance, are very true to their languages, and this is why Microsoft supplies fully localized consoles for pretty much all their server products, such as Exchange or SQL Server. They do not, however, supply these in Norwegian, and that’s pretty much fine by most Norwegian IT Pros. English is good enough.

Now, to the problem: Service Manager (as did Operations Manager before I yelled at Vlad) picks up on your computer’s regional settings, and NOT its language. So no matter how English your Windows install or Service Manager console is, as long as your locale is set to Norwegian, the classes (and thereby reports) are displayed in Norwegian. This is also true for the Service Manager PowerShell cmdlets:

As long as the class has a localized name, it will display that. If not, it will display the English one. Which makes it all a big heap of bi-lingual mess, and completely useless.
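You can see (and, per session, experiment with overriding) the culture the cmdlets run under. Whether the override actually fixes the Service Manager display is exactly the kind of thing you’d have to test yourself:

```powershell
# The display language follows the thread culture, not the Windows display language.
Get-Culture     # shows the current culture, e.g. nb-NO on a Norwegian-locale box

# Per-session override sketch: switch the running thread to en-US before
# calling the Service Manager cmdlets. This only affects the current session.
[System.Threading.Thread]::CurrentThread.CurrentCulture   = 'en-US'
[System.Threading.Thread]::CurrentThread.CurrentUICulture = 'en-US'
```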

Now, on the server side we COULD switch the system locale to en-US and be done with it. However, for the Data Warehouse in Service Manager the problem is a tad bigger. The product provides data for Excel, where numbers are crunched. And Excel lives on each user’s workstation, right? And no one in their right mind wants to start changing workstation locales just to make one product happy.

I don’t know if I’ll end up with having to supply a VDI farm of EN-cultured workstations to my operators or if I have to tell them to live with useless reports.

However, it’s my strong belief that Microsoft DID NOT TEST their System Center 2012 products in a non-English environment. And in the meantime I’m looking for that secret registry key that resets everything to English, although I fear that it does not exist.

System Center Orchestrator 2012 frustrations

This is me, trying to design a Runbook in System Center Orchestrator. The “head against wall” bit should really be part of its logo.

Orchestrator, in case you don’t know it, is the product/company formerly known as Opalis, a company Microsoft bought a couple of years back. And the stuff you can do in Orchestrator is pretty “wow”. The product is what Microsoft envisions we will use to create automated processes of pretty much whatever we want. Think of it as an advanced task scheduler, or “the place where you put all your operational processes”, or anything in between. It is simply an incredibly powerful product.

In theory.

In practice, what can I say. I find myself constantly struggling with stuff I shouldn’t have to struggle with. Today I spent pretty much the entire day trying to get a simple runbook to work. This afternoon, I came up with the idea that if an Alert in Operations Manager gets assigned to someone, that person should get an email about it, and the Alert Status should get updated to “Assigned” so we can keep track of them. In my head, it was all so simple.

Okay, first stop: get a list of all alerts which are NOT “Closed”, that HAVE an owner (any text), but where CustomProperty1 is NOT set to “UpdatedByOrchestrator” or something. And there it stopped. How do I filter on “anything but empty”? (Regex, it turns out, which I’m not good at.) I tried the “?” and I tried the “*”. And it worked, until one of my alerts actually fitted that filter. That’s when the whole Runbook went all “A .NET Framework error occurred during execution of user-defined routine or aggregate” on me. For all I know, this could be the Integration Pack author’s fault and not the Orchestrator guys’.
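For the record, the “anything but empty” regex turns out to be trivial: .+ matches one or more of any character, so an empty owner field fails the match:

```powershell
# "Has an owner (any text)" as a regex filter: '.+' means one-or-more of any
# character, so only a non-empty owner field matches.
'john.doe' -match '.+'   # True  - alert has an owner
''         -match '.+'   # False - owner field is empty
```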

Also, I find the included Activities often fall just short of giving me what I need, and I have to turn to the trusty blue-and-white (PowerShell, that is). Orchestrator, it seems, hates PowerShell. “The host does not implement this function.” “You can’t do this.” “I’m sorry Dave. I can’t let you do that.” To top it off, Orchestrator runs in an x86 runspace (and not x64, like the rest of the world), which is a problem if you need to use x64-only Modules and SnapIns. You pretty much have to remote back into “yourself” to transition from x86 to x64, which is just madness when you think about it. And don’t get me started on debugging: the so-called debugger (Runbook Tester) gives close to no useful information at all.
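The “remote back into yourself” workaround looks roughly like this. This is a hedged sketch of the two common escape hatches from a 32-bit “Run .Net Script” activity; the module name is just an example of something x64-only:

```powershell
# Sketch: escaping Orchestrator's 32-bit runspace.

# Option 1: launch the 64-bit PowerShell engine via the sysnative alias.
# (The sysnative path is only visible to 32-bit processes.)
$result = & "$env:windir\sysnative\WindowsPowerShell\v1.0\powershell.exe" -Command {
    Import-Module FailoverClusters   # example of an x64-only module
    Get-Cluster | Select-Object -ExpandProperty Name
}

# Option 2: remote into the runbook server itself to get a native x64 session
# (requires PowerShell Remoting to be enabled on the host).
$result = Invoke-Command -ComputerName localhost -ScriptBlock {
    Import-Module FailoverClusters
    Get-Cluster | Select-Object -ExpandProperty Name
}
```

Both variants mean serializing your results back out as text, which brings us right back to the object-vs-text problem below.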

One of my colleagues could actually tell I was working on some Orchestrator stuff today, just by looking at my facial expression:

I constantly find myself starting with a fresh idea for some value I can add, and then 5 hours later I’m stuck, angry and frustrated. This is not what “Datacenter automation” should be. Far from it.

So, dear Microsoft. Here’s what I suggest you do:
-Orchestrator MUST embrace PowerShell. Give us a PROPERLY written and documented Activity for running PowerShell scripts.
-Work with the PowerShell team to give us a plugin, or documentation, or whatever, on how to do proper debugging.
-The Integration Pack documentation on TechNet is lacking examples. We need examples!
-Switch to 64-bit computing, like the rest of the world has done.
-Make Orchestrator object-based, like PowerShell is, not text-based like it is today. We need Types and Arrays and some control!
-The Windows Event Log is there for a reason. When a Runbook fails to run because of service account permissions or whatever, please tell us why.

That said, it’s pretty clear what the priorities were when Opalis was transformed into a part of the System Center 2012 suite: get rid of the Java-based web console, and make sure the product is Web Services-enabled so that other systems (like Service Manager) can hook onto it. I’m not saying that was a wrong decision. I’m saying that Orchestrator as it sits today is a product with extreme room for improvement. Go do!

In the meantime, this is the best resource I’ve found on the internet for “debugging” PowerShell scripts in Orchestrator. Read the post and you’ll get my frustration. Debugging pretty much boils down to piping your scripts to a text file and trying to run those files by hand.
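In that spirit, the poor man’s debugging loop inside an activity amounts to something like this. A sketch, assuming a log folder the service account can write to; the `Get-Service` call stands in for whatever the activity actually does:

```powershell
# Dump the script text plus its output/errors to files you can
# inspect (and re-run by hand under the service account) afterwards.
$log = 'C:\OrchestratorDebug'
New-Item -ItemType Directory -Path $log -Force | Out-Null

$script = {
    # ...the actual work the activity should do...
    Get-Service | Where-Object { $_.Status -eq 'Running' }
}

# Save the script so you can run the exact same code outside Orchestrator.
$script.ToString() | Out-File (Join-Path $log 'last-script.ps1')

try {
    & $script 2>&1 | Out-File (Join-Path $log 'last-output.txt')
}
catch {
    $_ | Out-File (Join-Path $log 'last-error.txt')
}
```

It’s crude, but it at least tells you what the x86 runspace actually saw, which is more than the Runbook Tester will.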

WS2012: Live

Stuff is so fast-paced at the moment I pretty much just sleep, work, repeat. In all the chaos and change I wanted to make a note of today just to be able to save the date for reference.

We are live on Windows Server 2012. In production. Deployed. And although I’m eagerly awaiting at least a beta version of System Center 2012 SP1 (along with every other Windows Server admin I know), this is a pretty giant leap in itself. In the meantime, we’re running some simple WMI monitors from PRTG to make sure the systems are running as they should. I have not deployed a converged fabric. I have not deployed Cluster-Aware Updating (because I don’t expect to have to reboot my servers more than once or twice per year). Because of proper design, our Exchange 2010 servers run better with 12 GB of RAM on Hyper-V than they did with 40 GB of RAM physically. Live migration on a 10Gb link lets me empty a host completely in under a minute. PowerShell lets me deploy disks and VMs consistently and without error. Stuff works.
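The “deploy disks and VMs consistently” part is plain Windows Server 2012 Hyper-V module territory. A minimal sketch; the VM name, paths, switch name and sizes are made-up examples, not our actual values:

```powershell
# Minimal, repeatable VM provisioning with the built-in Hyper-V module.
Import-Module Hyper-V

$name = 'APP01'                    # example VM name
$vhd  = "D:\VHDs\$name.vhdx"       # example storage path

New-VHD -Path $vhd -SizeBytes 60GB -Dynamic
New-VM -Name $name -MemoryStartupBytes 4GB -VHDPath $vhd -SwitchName 'Prod'
Set-VM -Name $name -ProcessorCount 2
Start-VM -Name $name
```

Put a loop and a CSV of names around that and every VM comes out identical, which is exactly the consistency I’m talking about.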

Is Windows Server 2012 perfect? No operating system is. Still, it is ready for the big leagues. By December, one of our two main VMware clusters will be replaced by Hyper-V, saving us tremendous amounts of money while retaining or increasing our management efficiency. And that’s what it all comes down to: money.

To Jeffrey Snover and the rest of the team: Thank you. This is golden stuff. Now, give us the management layer.