HAProxy Load Balancing IIS with Sticky Session and SSL

HAProxy is a very good candidate for load balancing in a web cluster with high availability, even for Windows IIS servers! In its newer versions (1.5.x), HAProxy supports native SSL, which makes it suitable even for enterprise-level web applications with high traffic. It also supports sticky sessions, which are useful when no session management is implemented. I know that the best option is to use centralized session management out of the box, but considering that this central session manager will be a single point of failure (at least in IIS) and needs care, sticky sessions can be a good choice for some small to medium environments running applications with short-lived sessions.

Here, I will show how to configure HAProxy 1.5.x to support backend IIS servers with SSL (https) and sticky sessions.

– If you have an IIS certificate, export it and use ‘openssl’ on Linux to convert it to the format HAProxy expects (a PEM file containing the certificate and private key), and put it in a protected directory.
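
As a rough sketch, assuming the certificate was exported from IIS as a PFX/PKCS#12 file (the file names below are placeholders), the conversion could look like this:

    # convert the exported PFX (certificate + private key) into a single PEM file
    openssl pkcs12 -in company.com.pfx -out /etc/ssl/private/company.com.pem -nodes
    # restrict access to the key material
    chmod 600 /etc/ssl/private/company.com.pem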

– For SSL termination (HAProxy presents the certificate to users and takes over the HTTPS protocol between the user and the load balancer), the configuration is as follows:

  • frontend https-in
    bind *:443 ssl crt /etc/ssl/private/company.com.pem
    reqadd X-Forwarded-Proto:\ https
    default_backend application-backend

– To deploy sticky sessions, specify ‘roundrobin’ as the balancing policy and configure the backend cluster part as follows. The key line is ‘cookie SERVERID insert indirect’:

  • backend application-backend
    balance roundrobin
    option httpclose
    option forwardfor
    cookie SERVERID insert indirect nocache
    server WEB-001 192.168.x.1:80 cookie A check
    server WEB-002 192.168.x.2:80 cookie B check
    server WEB-003 192.168.x.3:80 cookie C check
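
To quickly check the stickiness from a client (the hostname below is a placeholder), a request through the load balancer should come back with the SERVERID cookie that HAProxy inserts:

    # -s silences the progress bar, -k skips certificate validation, -I fetches headers only
    curl -skI https://lb.company.com/ | grep -i set-cookie
    # expected: something like  Set-Cookie: SERVERID=A; path=/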

For more information about the different policies and different session behaviours, read here.

Mixing 802.1Q and 802.3ad in Linux

When it comes to networking, the Linux kernel is really superior to Windows. Some will ask why? Apart from the performance point of view, there are some great features in Linux that cannot be deployed in Windows easily. To give an example, let’s think about two important features: support for VLANs and trunking (802.1Q) and NIC teaming or link aggregation (802.3ad).

As far as I know, the Windows kernel doesn’t support 802.1Q itself and it all depends on the NIC driver; for 802.3ad, Windows support starts with Windows Server 2012, which means it’s too young! and who knows how it works! Both, however, are long-standing features in the Linux kernel.

And these features are really useful; for example, when a single computer needs to be part of different VLANs, it needs to be connected to a trunk port on the switch and therefore should understand VLAN tags and decapsulate tagged frames. This single computer can even act as a router between different VLAN segments. Connecting to different VLANs means more traffic, so it’s not a bad idea to double (as an example) its bandwidth by aggregating (bonding) two NICs to improve performance. I’m providing two links that show how to implement 802.1Q and 802.3ad in a single Linux machine with two or more NICs:

And to get an idea of how to combine these two features, see:
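
In the meantime, here is a minimal sketch (not the linked how-to) of how the two can be combined with iproute2 on a reasonably recent kernel; the interface names, VLAN ID and IP address are just assumptions:

    # bond two NICs together in LACP (802.3ad) mode
    ip link add bond0 type bond mode 802.3ad
    ip link set eth0 down && ip link set eth0 master bond0
    ip link set eth1 down && ip link set eth1 master bond0
    ip link set bond0 up

    # put an 802.1Q tagged sub-interface (VLAN 10) on top of the bond
    ip link add link bond0 name bond0.10 type vlan id 10
    ip addr add 192.168.10.2/24 dev bond0.10
    ip link set bond0.10 up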

NAT in Fenced vApps (vCloud Director)

An interesting feature in vCloud Director networking is the capability of creating a fenced vApp. Basically, it’s like having an extra vShield router and firewall on the edge of the vApp (extra, that is, in case you already have one for the Organization network, which means it is routed).

One of the coolest applications for fenced vApps is when you want to have identical machines (same IP and MAC) in your vDC; that is, when you want to do a fast clone without customizing the guest OS by changing IPs, names, … In this case the vApps are completely isolated while they can still have a connection to External networks or perhaps the internet! See here for a how-to about creating a fenced vApp.

After you have created a fenced vApp, you will notice that the IP addresses in the vApp are in the same subnet as the Organization Network (see the picture above), although a NAT gateway is operating between the vApp and the Organization network. So when you want to do a DNAT (Destination NAT), there are two places you should configure. In the picture above, suppose you want to give access to a VM with IP 192.168.0.45 in the fenced vApp from the External Network. Assume that Edge 1 got IP 192.168.0.3 (specified while fencing). First, you need to create appropriate rules in the Edge Gateway of the Organization Network, Edge 2 (if there is any), to NAT and open ports for the IP address of Edge 1 (192.168.0.3).

[image: fenced1]

In the next step, you need to do NAT and open ports from Edge 1 to the specific VM, but this configuration is not in the Edge Gateways of the vDC (unlike Edge 2); it can be found in the Networking tab of the vApp itself.
Click on the vApp and go to the Networking tab,

[image: fenced2]

right-click on the selected network and choose ‘Configure Services’. There, you can define the appropriate NAT and firewall rules.

[image: fenced3]
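
Put together, the two DNAT hops for this example look roughly like this (the port is a placeholder; only the 192.168.0.x addresses come from the scenario above):

    External Network -> Edge 2 (Organization Network Edge Gateway)
        DNAT  <public IP>:443  ->  192.168.0.3:443    (Edge 1, the vApp fence)

    Organization Network -> Edge 1 (configured in the vApp's Networking tab)
        DNAT  192.168.0.3:443  ->  192.168.0.45:443   (the VM inside the fenced vApp)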

 

Upgrade Distributed vSwitch from 5.1 to 5.5

When you upgrade your VMware environment to version 5.5, remember to upgrade your distributed vSwitch as well; it will not be done automatically. In this way, you can utilize the new features in dvSwitch 5.5, including:

The upgrade process is fairly easy, and the good thing is that, according to the VMware documentation, it is non-disruptive, which means there is no outage and no host or VM will go down or experience issues. Find your distributed vSwitch either in the vSphere Client or the Web Client, right-click it and do the upgrade.

vCloud Public Console Proxy IP Address

You may have noticed that vCloud Director uses two important IP addresses to provide public access to tenants/users. One is the well-known front-end VCD IP address, which gives access to the web portal for managing the organization vDC (also known as HTTP access). The second one provides remote access to the virtual console of a VM, which in fact resides on the ESXi server cluster (known as VMRC access). This latter one is sort of more back-end, because it’s coming from the ESXi servers, which should never be exposed to the public! So vCloud Director actually tunnels Remote Console communications between ESXi servers and users through a proxy agent on port 443. Apparently, the proxy service runs on the vCloud Director machine. That’s why an extra IP is needed on vCloud Director. This IP address is also specified in the initial setup, but it can be changed later (of course, everything can be changed!).

So, when you want to open up vCloud Director for public users, you should pay enough attention to the VMRC IP address and port. If you have to do NAT through your firewall, you should specify a different public IP for VMRC and introduce the public IP/URL to vCloud Director in the administration web panel. See the picture below:

Also, port 443 should be opened for this public IP on the firewall.
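
If the firewall in front of vCloud Director happens to be a Linux box, the NAT part could look roughly like the sketch below; all addresses are made-up placeholders (one public/internal pair for the HTTP portal and one for the console proxy):

    # hypothetical addresses: 203.0.113.10 -> HTTP portal, 203.0.113.11 -> console proxy
    iptables -t nat -A PREROUTING -d 203.0.113.10 -p tcp --dport 443 -j DNAT --to-destination 10.0.0.10:443
    iptables -t nat -A PREROUTING -d 203.0.113.11 -p tcp --dport 443 -j DNAT --to-destination 10.0.0.11:443
    # and let the forwarded traffic through
    iptables -A FORWARD -p tcp -d 10.0.0.10 --dport 443 -j ACCEPT
    iptables -A FORWARD -p tcp -d 10.0.0.11 --dport 443 -j ACCEPT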

If you need more information about making the whole vCloud Director publicly accessible, I found this excellent blog post about the topic; it is also very useful for the general architecture of a vCD deployment:

Router!

I don’t know how it started, but I think at the moment ‘Router’ is the most misunderstood term in networking! People, even some technicians, use it in the wrong places. Yesterday I had a discussion with a technician who insisted on getting a gateway/router IP address to do some local communication. When I asked him why he needed it, I heard irrelevant explanations! In this case, it turned out that he needed a DHCP server! But in general many think that a Router/Gateway is a mandatory device in networking, while they rarely think of the switch! Maybe we should blame wireless AP producers!

Using NLB in VMware Environment

It’s very interesting that sometimes things don’t work the way you expect. Well, it happens a lot in computer networking! Using Microsoft Network Load Balancing in a VMware environment is one of those cases. Specifically, when you intend to deploy Microsoft NLB in Unicast mode, you should be cautious. The reason NLB does not work properly is well explained in the following link:

Microsoft NLB not working properly in Unicast mode

In brief, the NLB mechanism is based on hiding from the switch the common, shared MAC address it assigns to all involved hosts (by a special kind of encapsulation, I suppose), but ESX/ESXi hosts expose this common MAC address under certain conditions, which lets the switch learn its location and send all the traffic to that specific port (ESX/ESXi host), which defeats the purpose of a load balancer! There is a workaround though, which is suggested in the link above.
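
For reference, the mitigation usually quoted for this scenario boils down to stopping the ESX/ESXi host from announcing that shared MAC address, i.e. setting ‘Notify Switches’ to No on the port group used by the NLB VMs. A rough esxcli sketch (the port group name is a placeholder; verify the exact options against your ESXi version and the link above):

    # stop the host from sending switch notifications (RARP) for this port group
    esxcli network vswitch standard portgroup policy failover set --portgroup-name "NLB-Portgroup" --notify-switches=false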