In the first post of the “Dealing with multiple Management Servers” series I demonstrated how to install an additional Management Server. This post takes the next step and guides you through the process of load balancing your Management Servers.
Dealing with multiple Management Servers series:
- Part 1/3: Install an additional Management Server
- Part 2/3: Load Balance Management Servers (this post)
- Part 3/3: Decommission old Management Servers
Step 1 – Load Balancing Overview
The initial SCSM Management Server executes all the workflows. Any additional Management Server runs no workflows and is used for console connections. Workflows cannot be load balanced, so load balancing in Service Manager is always about console connections. When many analysts work with Service Manager at the same time, it makes sense to separate the workflow load from the console load by adding at least two more (load balanced) Management Servers to handle the console connections. This way the initial Management Server keeps all of its resources for running workflows.
(Source: Service Manager 2012 Deployment Guide)
In smaller scenarios you can also load balance the initial Management Server with a second Management Server. This way you need fewer servers, but some of the console connections will be handled by the initial Management Server, which also runs the workflows.
(Source: Service Manager 2012 Deployment Guide)
The load balancing setup procedure is the same for both scenarios. I selected the second scenario for this example.
Step 2 – Configure Microsoft Windows NLB
In this example I will use the Microsoft Windows Network Load Balancing (NLB) feature, but of course you can use other products like hardware load balancers (F5 etc.). NLB works great for load balancing Management Servers: it is delivered with the operating system and therefore completely free, easy to implement and stable. For this scenario I selected a common two-NIC approach with NLB unicast mode. Since most servers are virtualized today, adding a second NIC to a Service Manager Management Server is no big deal. Now let’s check the configuration steps.
First I rename the two Network Connections.
Public NIC: this is the regular NIC where connections will be made by console users and where the Management Server communicates with other components like the Service Manager database. The IP configuration is pretty standard.
Private NIC: this is the cluster-internal NIC for the heartbeats. This NIC needs some extra configuration (see the scripted sketch after this list):
- Unbind unneeded services and protocols
- Slim TCP/IPv4 configuration (no Gateway, no DNS, no WINS), same IP subnet/VLAN as Public NIC
- No DNS registration
- Disable NetBIOS over TCP/IP and LMHOSTS lookup
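As a scripted alternative to clicking through the adapter settings, here is a minimal sketch of the list above, assuming a hypothetical adapter name "Private NIC" and address 192.168.10.11; adjust both to your environment.

# Static IP without gateway, DNS or WINS on the heartbeat adapter (hypothetical address)
netsh interface ipv4 set address name="Private NIC" static 192.168.10.11 255.255.255.0
# No DNS servers and no dynamic DNS registration for this adapter
netsh interface ipv4 set dnsservers name="Private NIC" source=static address=none register=none
# Disable NetBIOS over TCP/IP (2 = disable) via WMI
$nic = Get-WmiObject Win32_NetworkAdapterConfiguration | Where-Object { $_.IPAddress -contains "192.168.10.11" }
$nic.SetTcpipNetbios(2) | Out-Null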
When all the IP configuration is up and running it’s time to configure NLB. First I have to add the feature on both Management Servers by using Server Manager.
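On Windows Server 2008 R2 you can also add the feature from an elevated PowerShell prompt instead of the Server Manager GUI:

# Install the NLB feature plus its management tools on each Management Server
Import-Module ServerManager
Add-WindowsFeature NLB, RSAT-NLB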
Now it’s time to start the NLB Manager and create the NLB cluster.
Select the NIC that will be part of the NLB cluster. This is the Public NIC. Then define the Host Identifier and select the dedicated IP address of the selected adapter (normally the adapter only has a single address).
Now define the virtual IP address of the NLB cluster. If needed you can add multiple addresses, but for this scenario we only need a single address.
Enter a FQDN for the NLB Cluster and select the operations mode – in this example Unicast mode. For more information about the available modes check this link.
Now configure the load balancing rules. Delete the default rule and add a new one to load balance all incoming Service Manager console connections (TCP 5724) to all members of the cluster.
Complete the wizard and wait until configuration is complete.
If you receive an error saying that the NIC is somehow misconfigured and your systems are virtualized, make sure your hypervisor allows MAC address spoofing: in unicast mode NLB replaces the adapter’s MAC address with the cluster MAC address.
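If you prefer scripting over the wizard, the cluster creation can also be sketched with the NLB PowerShell module. The names below are hypothetical (cluster FQDN scsm.domain.local, virtual IP 192.168.1.50); treat this as a sketch, not a drop-in script.

Import-Module NetworkLoadBalancingClusters
# Create the cluster on the Public NIC in unicast mode
New-NlbCluster -InterfaceName "Public NIC" -ClusterName "scsm.domain.local" `
    -ClusterPrimaryIP 192.168.1.50 -SubnetMask 255.255.255.0 -OperationMode Unicast
# Replace the default all-ports rule with one for the console/SDK port only
Get-NlbClusterPortRule | Remove-NlbClusterPortRule -Force
Add-NlbClusterPortRule -StartPort 5724 -EndPort 5724 -Protocol TCP -Affinity Single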
Now it’s time to add the second Management Server to the NLB cluster (Add Host to Cluster).
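The scripted equivalent, again with a hypothetical server name (SCSM02):

# Join the second Management Server's Public NIC to the existing cluster
Get-NlbCluster | Add-NlbClusterNode -NewNodeName "SCSM02" -NewNodeInterface "Public NIC"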
The NLB cluster is now ready. A few more configuration steps are needed to make sure the Service Manager Management Server cluster operates correctly.
Step 3 – Additional configurations
First of all we have to register the Cluster Name in DNS (New Host/A-Record).
After adding the new record make sure you can resolve the cluster name correctly.
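These two steps can also be scripted; the DNS server dc01.domain.local, zone domain.local, host name scsm and virtual IP 192.168.1.50 below are hypothetical:

# Register the cluster name as a host (A) record on the DNS server
dnscmd dc01.domain.local /RecordAdd domain.local scsm A 192.168.1.50
# Verify the record resolves to the virtual IP
nslookup scsm.domain.local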
To make sure Kerberos authentication works correctly when connecting to the cluster FQDN, an SPN must be registered in ADDS. First check the SPNs already registered on the Service Manager service account (you should see entries for the two Management Servers):
- setspn.exe -L domain\ServiceAccount
Now add two new SPNs for the cluster name (NetBIOS name and FQDN):
- setspn.exe -A MSOMSdkSvc/ClusterNameNetBIOS -U domain\ServiceAccount
- setspn.exe -A MSOMSdkSvc/ClusterNameFQDN -U domain\ServiceAccount
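A worked example of the commands above, with hypothetical names (cluster scsm in domain.local, SDK service account domain\svc-scsmsdk):

# List the SPNs already registered on the SDK service account
setspn.exe -L domain\svc-scsmsdk
# Register the cluster NetBIOS name and FQDN for the SDK service
setspn.exe -A MSOMSdkSvc/scsm -U domain\svc-scsmsdk
setspn.exe -A MSOMSdkSvc/scsm.domain.local -U domain\svc-scsmsdk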
Depending on your exact NLB scenario it may be necessary to reconfigure the routing behavior. Because Windows Server 2008 R2 uses the strong host model, routing may not work as expected. In this case execute the following command on all load balanced Management Servers:
- netsh interface ipv4 set interface "Public NIC" forwarding=enabled
Step 4 – Test the solution
Start the console and try to connect to the cluster FQDN.
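If the console cannot connect, a quick check that the SDK port is reachable through the virtual IP helps to isolate the problem (cluster FQDN hypothetical again):

# Open a raw TCP connection to the SDK service port via the cluster name
$tcp = New-Object System.Net.Sockets.TcpClient
$tcp.Connect("scsm.domain.local", 5724)
$tcp.Connected   # True means TCP 5724 is reachable through the NLB virtual IP
$tcp.Close()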
Great, we now have a load balanced Management Server infrastructure. If needed you can add more Management Servers later to scale out.
Happy load balancing!
Marcel
Great walkthrough!
Is the Primary/Workflow server a part of the cluster in this scenario? If so, is there a way to configure the cluster to favor other management servers besides the primary? I don’t like the idea of console connections bogging down the primary. Great walkthrough though!
Hey Robert
In this example I used the Initial MS and a second MS to load balance. However, in a perfect world you would separate the Initial MS (Workflow Server) from the console connections. So you could have 3 servers: the Initial MS (no console connections) and 2 load balanced MS for the console connections.
Cheers
Marcel
Ok. In that case, when a user clicked on “registered servers” in the “connect to a management server” dialog box, they would see the cluster name as well as the primary, correct? I wish there was a way to make the primary server not show up in that dialog box. Thanks!
Hey
You can point users to the correct cluster name by delivering this information in the SCSM console package that you deploy. Then users will not get the selection …
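For reference, the console stores its default server in a per-user registry value; a sketch of pre-setting it (registry path as used by the SCSM 2012 console, cluster FQDN hypothetical):

# Pre-set the default SDK server for the console user
$key = "HKCU:\Software\Microsoft\System Center\2010\Service Manager\Console\User Settings"
New-Item -Path $key -Force | Out-Null
Set-ItemProperty -Path $key -Name "SDKServiceMachine" -Value "scsm.domain.local"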
Another way would be deleting the SCP in ADDS. But I’m not sure if this harms the functionality of the product 🙂
regards
Marcel
Marcel,
Do you have any insight on setting up load balancing with a hardware load balancer? When doing so, I can’t seem to get the console to connect to the management servers via the HLB host name. Any thoughts?
Hey James
Hmmmm, I implemented an SCSM infrastructure two days ago with everything load balanced by a hardware load balancer, no issues! Did you register the SPN in ADDS?
Cheers
Marcel
I did but I may have registered them incorrectly. My AD team said that I can only register an SPN with the service account (SM data access and configuration account). So the entry is as follows:
MSOMSdkSvc/SCSMSERVER.fqdn.name
Where the SCSM server is the DNS entry host name assigned to the VIP of the HLB. Do I need to make some additional entries?
I now have the SPNs registered as shown below but I’m still unable to connect from the console. I’m able to connect directly to the mgmt servers without a problem. Thoughts?
MSOMSdkSvc/HLBSERVER
MSOMSdkSvc/HLBSERVER.fqdn.name
MSOMSdkSvc/ManagementSERVER01
MSOMSdkSvc/ManagementSERVER01.fqdn.name
MSOMSdkSvc/ManagementSERVER02
MSOMSdkSvc/ManagementSERVER02.fqdn.name
Hey
But can you resolve the name “HLBServer” to the correct load balanced IP by using the FQDN? And is the correct port, TCP 5724, load balanced?
Regards
Marcel
Yes. I can resolve the name to the load balanced IP and TCP port 5724 is load balanced.
Hi Marcel,
Great article! I have a question though. Say my workflow MS is on a different continent, but I have a secondary MS for the locals to log in to. Since the connection between the 2 MS is done over the WAN, will there be a performance impact when using the console? Like the console responding very slowly?
Hey
This is hard to say and depends on lots of things. But when talking about separating the Workflow Server and a regular MS, I don’t see why users should have bad performance. The problem normally appears when separating the MS and the DB. I have never done a “real” investigation on this myself. Another way could be running all components together combined with RemoteApps.
Cheers
Marcel
Marcel is correct that there are multiple things involved in the performance you may experience in the scenario you described. Console clients in the same region as the secondary server may be able to make a better initial connection to the console, but in my mind the biggest factor in console performance will be the connection to the SQL database on the back end. Is the SQL instance on a different continent as well?
The connection from MS -> SQL is far more important than Console -> MS. I had considered adding an additional MS in our second datacenter but was advised by Microsoft to avoid it, as that would put the MS remote from the SQL Server, and any Console -> MS performance gains would be negated by the MS being distant from the SQL Server (across a lower-speed WAN link, slower than the LAN speed the other MS are connected at).
Hello,
After adding the second server to the cluster, the RDP connection to the first one was lost. Also, I can’t connect to the server using the SCSM console, but I can connect using the cluster host name. Both servers have 2 NICs, and I did everything as described.