In the first post of the “Dealing with multiple Management Servers” series I demonstrated how to install an additional Management Server. This post takes the next step and guides you through the process of load balancing your Management Servers.
Dealing with multiple Management Servers series:
- Part 1/3: Install an additional Management Server
- Part 2/3: Load Balance Management Servers (this post)
- Part 3/3: Decommission old Management Servers
Step 1 – Load Balancing Overview
The initial SCSM Management Server executes all the workflows. Additional Management Servers run no workflows and are used for console connections. Workflows cannot be load balanced, so load balancing in Service Manager always refers to console connections. When many analysts work with Service Manager at the same time, it makes sense to separate the workflow load from the console load. Adding at least two more (load balanced) Management Servers for handling console connections leaves the initial Management Server with all of its resources available for running workflows.
(Source: Service Manager 2012 Deployment Guide)
In smaller scenarios you can also load balance the initial Management Server with a second Management Server. This way you need fewer Servers, but some of the console connections will be handled by the initial Management Server that also runs the workflows.
(Source: Service Manager 2012 Deployment Guide)
The load balancing setup procedure is the same for both scenarios. I selected the second scenario for this example.
Step 2 – Configure Microsoft Windows NLB
In this example I will use the Microsoft Windows Network Load Balancing (NLB) feature, but of course you can use other products such as hardware load balancers (F5 etc.). NLB works great for load balancing Management Servers: it ships with the operating system and is therefore completely free, easy to implement and stable. For this scenario I selected a common two-NIC approach with NLB unicast mode. Since most servers are virtualized today, adding a second NIC to a Service Manager Management Server is no big deal. Now let’s check the configuration steps.
First I rename the two Network Connections.
Public NIC: this is the regular NIC where Connections will be made by console users and where the Management Server will communicate with other components like the Service Manager Database. The IP configuration is pretty standard.
Private NIC: this is the cluster-internal NIC for the heartbeats. This NIC needs some extra configuration.
- Unbind unneeded services and protocols
- Slim TCP/IPv4 configuration (no Gateway, no DNS, no WINS), same IP subnet/VLAN as Public NIC
- No DNS registration
- Disable NetBIOS over TCP/IP and LMHOSTS lookup
When all the IP configuration is up and running, it’s time to configure NLB. First I have to add the feature on both Management Servers by using Server Manager.
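If you prefer scripting, the same feature can be added from an elevated PowerShell prompt on both Management Servers. This is a sketch for Windows Server 2008 R2, where the ServerManager module provides the `NLB` feature and the `RSAT-NLB` management tools:

```powershell
# Run elevated on both Management Servers.
# On Windows Server 2008 R2 the ServerManager module must be imported first;
# on Windows Server 2012+ you can use Install-WindowsFeature instead.
Import-Module ServerManager
Add-WindowsFeature NLB, RSAT-NLB
```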
Now it’s time to start the NLB Manager and create the NLB cluster.
Select the NIC that will be part of the NLB cluster. This is the Public NIC. Then define the Host Identifier and select the dedicated IP address of the selected adapter (normally the adapter only has a single address).
Now define the virtual IP address of the NLB cluster. If needed you can add multiple addresses, but for this scenario we only need a single address.
Enter a FQDN for the NLB Cluster and select the operations mode – in this example Unicast mode. For more information about the available modes check this link.
Now configure the load balancing rules. Delete the default rule and add a new one to load balance all incoming Service Manager console connections (TCP 5724) to all members of the cluster.
Complete the wizard and wait until configuration is complete.
If you receive an error saying that the NIC is somehow misconfigured: when your systems are virtualized make sure your hypervisor allows MAC address spoofing.
Now it’s time to add the second Management Server to the NLB cluster (Add Host to Cluster).
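The NLB Manager steps above can also be scripted with the NetworkLoadBalancingClusters PowerShell module. The interface name, cluster FQDN, IP addresses and node name below are placeholders for this example; adjust them to your environment:

```powershell
# Sketch of the cluster creation using the NetworkLoadBalancingClusters module.
Import-Module NetworkLoadBalancingClusters

# Create the cluster on the first Management Server's public NIC (unicast mode)
New-NlbCluster -InterfaceName "Public NIC" -ClusterName "scsm.domain.local" `
    -ClusterPrimaryIP 10.0.0.50 -SubnetMask 255.255.255.0 -OperationMode Unicast

# Remove the default catch-all rule (any port inside its 0-65535 range identifies it)
Remove-NlbClusterPortRule -Port 80 -Force

# Add a rule that load balances only console traffic (TCP 5724)
Add-NlbClusterPortRule -StartPort 5724 -EndPort 5724 -Protocol TCP

# Add the second Management Server to the cluster
Add-NlbClusterNode -NewNodeName "SCSM02" -NewNodeInterface "Public NIC"
```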
The NLB cluster is now ready. Now some more configuration steps are needed to make sure the Service Manager Management Server cluster operates correctly.
Step 3 – Additional configurations
First of all we have to register the Cluster Name in DNS (New Host/A-Record).
After adding the new record make sure you can resolve the cluster name correctly.
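As an illustration, both the record creation and the resolution check can be done from the command line. The DNS server name, zone, host name and IP below are example values only:

```shell
:: Create the A record on the DNS server (requires DNS admin rights)
dnscmd dc01 /RecordAdd domain.local scsm A 10.0.0.50

:: Verify resolution from a Management Server
nslookup scsm.domain.local
```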
To make sure Kerberos authentication works correctly when connecting to the cluster FQDN, an SPN must be registered in AD DS. First check the already existing SPNs for the two Management Servers.
- setspn.exe -L domain\ServiceAccount
Now add two new SPNs for the cluster name (NetBIOS name and FQDN).
- setspn.exe -A MSOMSdkSvc/NLBClusterNetBIOSName -U domain\ServiceAccount
- setspn.exe -A MSOMSdkSvc/NLBClusterFQDN -U domain\ServiceAccount
Depending on your exact NLB scenario it may be necessary to reconfigure the routing behavior. Because Windows Server 2008 R2 is based on the strong host model, it’s possible that routing does not work as expected. In this case execute the following command on all load balanced Management Servers.
- netsh interface ipv4 set interface "Public NIC" forwarding=enabled
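To verify that the change took effect, you can display the interface configuration; the Forwarding line should now read "enabled". "Public NIC" is the connection name used in this example:

```shell
netsh interface ipv4 show interface "Public NIC"
```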
Step 4 – Test the solution
Start the console and try to connect to the cluster FQDN.
Great, we now have a load balanced Management Server infrastructure. If needed you can add more Management Servers later to scale out.
Happy load balancing!