HAProxy Load Balancing IIS with Sticky Session and SSL

HAProxy is a very good candidate for load balancing in a high-availability web cluster, even for Windows IIS servers! In its newer versions (1.5.x), HAProxy supports native SSL, which makes it suitable even for enterprise-level web applications with high traffic. It also supports sticky sessions, which are useful when no session management is implemented. I know that the best option is to use centralized session management out of the box, but considering that this central session manager will be a single point of failure (at least in IIS) and needs care, sticky sessions can be a good choice for some small to medium environments running applications with short-lived sessions.

Here, I will show how to configure HAProxy 1.5.x to support backend IIS servers with SSL (https) and sticky sessions.

– If you have an IIS certificate, export it, use ‘openssl’ on Linux to convert it to the appropriate format, and put it in a protected directory.
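
A minimal sketch of that conversion, assuming the certificate was exported from IIS as a PFX (PKCS#12) file; the filenames are just examples:

  • openssl pkcs12 -in exported-cert.pfx -out /etc/ssl/private/company.com.pem -nodes
    chmod 600 /etc/ssl/private/company.com.pem

The resulting PEM file contains both the certificate and the unencrypted private key, which is what HAProxy’s ‘crt’ option expects; that’s also why the file should live in a protected directory.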

– For SSL termination (HAProxy presents the certificate to users and takes over the https protocol between the user and the load balancer), the configuration is as follows:

  • frontend https-in
    bind *:443 ssl crt /etc/ssl/private/company.com.pem
    reqadd X-Forwarded-Proto:\ https
    default_backend application-backend

– To deploy sticky sessions, specify ‘roundrobin’ as the balancing policy and configure the backend cluster section as follows; the key line is ‘cookie SERVERID insert indirect’:

  • backend application-backend
    balance roundrobin
    option httpclose
    option forwardfor
    cookie SERVERID insert indirect nocache
    server WEB-001 192.168.x.1:80 cookie A check
    server WEB-002 192.168.x.2:80 cookie B check
    server WEB-003 192.168.x.3:80 cookie C check
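
To quickly verify the stickiness after reloading HAProxy, you can inspect the response headers; a quick check (the hostname is a placeholder):

  • curl -k -I https://lb.company.com/

The response should contain a header like ‘Set-Cookie: SERVERID=A; path=/’, and subsequent requests that carry this cookie will keep landing on the same backend server.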

For more information about the different balancing policies and session behaviours, read here.


Mixing 802.1Q and 802.3ad in Linux

When it comes to networking, the Linux kernel is really superior to Windows. Some will ask why. Apart from the performance point of view, there are some great features in Linux that cannot easily be deployed in Windows. To give an example, let’s consider 2 important features: support for VLANs and trunking (802.1q), and NIC teaming or link aggregation (802.3ad, LACP).

As far as I know, the Windows kernel doesn’t support 802.1q at all; it depends entirely on the NIC driver. Windows support for 802.3ad only starts with Windows Server 2012, which means it’s too young, and who knows how well it works! Both, however, are long-standing features in the Linux kernel.

And these features are really useful; for example, when a single computer needs to be part of different VLANs, it has to be connected to a trunk port on the switch and therefore should understand VLAN tags and decapsulate packets. This single computer can even act as a router between different VLAN segments. Connecting to different VLANs means more traffic, so it’s not a bad idea to double (as an example) its bandwidth by aggregating (bonding) two NICs to improve performance. I’m providing 2 links that show how to implement 802.1q and 802.3ad in a single Linux machine with 2 or more NICs:

And to have an idea about combining these 2 features, see:
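
Until then, here is a minimal iproute2 sketch that combines the two features, assuming two NICs (eth0, eth1), an LACP-capable switch, and VLAN ID 10; all names and addresses are examples:

  • modprobe bonding
    ip link add bond0 type bond mode 802.3ad
    ip link set eth0 down && ip link set eth0 master bond0
    ip link set eth1 down && ip link set eth1 master bond0
    ip link set bond0 up
    ip link add link bond0 name bond0.10 type vlan id 10
    ip addr add 192.168.10.5/24 dev bond0.10
    ip link set bond0.10 up

Here the 802.3ad (LACP) bond doubles the available bandwidth, while the 802.1q sub-interface bond0.10 does the VLAN tagging on top of it.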

NAT in Fenced vApps (vCloud Director)

An interesting feature of vCloud Director networking is the capability of creating a fenced vApp. Basically, it’s like having an extra vShield router and firewall on the edge of the vApp; extra, that is, if you already have one for the Organization network, which means it is routed.

One of the coolest applications of fenced vApps is when you want to have identical machines (same IPs and MACs) in your vDC; that is, when you want to do a fast clone without customizing the guest OS by changing IPs, names, … In this case the vApps are completely isolated, while they can still have connections to External networks or perhaps the internet! See here for a how-to about creating a fenced vApp.

After you have created a fenced vApp, you will notice that the IP addresses in the vApp are in the same subnet as the Organization Network (see the picture below), although a NAT gateway is operating between the vApp and the Organization network. So when you want to do a DNAT (Destination NAT), there are 2 places you should configure. In the picture below, suppose you want to give access to a VM with IP 192.168.0.45 in the fenced vApp from the External Network, and assume that Edge 1 got the IP 192.168.0.3 (specified while fencing). First, you need to create appropriate rules in the Edge Gateway of the Organization Network, Edge 2 (if there is any), to NAT and open ports for the IP address of Edge 1 (192.168.0.3).

fenced1
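
For example, the rules on Edge 2 could be sketched like this (the public IP and port are hypothetical placeholders):

  • Edge 2 (Organization Edge Gateway) DNAT:
    original: 203.0.113.10:443 (public IP on the External network)
    translated: 192.168.0.3:443 (Edge 1 of the fenced vApp)
    firewall: allow TCP 443 from the External network to 192.168.0.3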

As the next step, you need to do NAT and open ports from Edge 1 to the specific VM. This configuration is not in the Edge Gateways of the vDC (unlike Edge 2); it can be found in the Networking tab of the vApp itself.
Click on the vApp, go to the Networking tab,

fenced2

right click on the selected network and choose ‘Configure Services’. There, you can define the appropriate NAT and firewall rules.

fenced3
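
Continuing the example, the corresponding rules on Edge 1 could be sketched like this (the port is again a placeholder):

  • Edge 1 (fenced vApp network, ‘Configure Services’) DNAT:
    original: 192.168.0.3:443 (Edge 1, as seen from the Organization network)
    translated: 192.168.0.45:443 (the VM inside the fenced vApp)
    firewall: allow TCP 443 to 192.168.0.45

With both rule sets in place, traffic hitting the public IP is translated twice: first to Edge 1 and then to the VM itself.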

 

Upgrade Distributed vSwitch from 5.1 to 5.5

When you upgrade your VMware environment to version 5.5, remember to upgrade your distributed vSwitch as well; it will not be done automatically. This way, you can utilize the new features in dvSwitch 5.5, including enhanced LACP support and traffic filtering and marking.

The upgrade process is fairly easy, and the good thing is that, according to VMware documentation, it is non-disruptive: there is no outage, and no host or VM will go down or experience issues. Find your distributed vSwitch in either the vSphere Client or the Web Client, right click, and do the upgrade.

vCloud Public Console Proxy IP Address

You may have noticed that vCloud Director uses 2 important IP addresses to provide public access to tenants/users. One is the well-known front-end VCD IP address, which gives access to the web portal for managing the organization vDC (also known as HTTP access). The second one provides remote access to the virtual consoles of VMs, which actually reside on the ESXi server cluster (known as VMRC access). This latter one is more of a back-end address, because the traffic comes from the ESXi servers, which should never be exposed to the public! So vCloud Director actually tunnels Remote Console communications between ESXi servers and users through a proxy agent on port 443. The proxy service runs on the vCloud Director machine; that’s why an extra IP is needed on vCloud Director. This IP address is specified in the initial setup, but it can be changed later (of course, everything can be changed!).

So, when you want to open up vCloud Director to public users, you should pay enough attention to the VMRC IP address and port. If you have to do NAT through your firewall, you should specify a different IP for VMRC and introduce the public IP/URL to vCloud Director in the administration web panel. See the picture below:

Also, port 443 should be opened for this public IP on the firewall.

If you need more information about publicizing the whole vCloud Director, I found an excellent blog post about this topic; it’s also very useful for understanding the general architecture of a vCD deployment:

Router!

I don’t know how it started, but I think ‘Router’ is currently the most misunderstood term in networking! People, even some technicians, use it in the wrong places. Yesterday I had a discussion with a technician who insisted on getting a gateway/router IP address to do some local communication. When I asked him why he needed it, I heard irrelevant explanations! In this case, it turned out that he needed a DHCP server! In general, many people think that a router/gateway is a mandatory device in networking, while they rarely think of the switch! Maybe we should blame the wireless AP producers!

Using NLB in VMware Environment

It’s very interesting that sometimes things don’t work the way you expect. Well, that happens a lot in computer networking! Using Microsoft Network Load Balancing in a VMware environment is one of those cases. Specifically, when you intend to deploy Microsoft NLB in Unicast mode, you should be cautious. The reason NLB doesn’t work properly is well explained in the following link:

Microsoft NLB not working properly in Unicast mode

In brief, the NLB mechanism is based on hiding from the switch the common, shared MAC address it assigns to all involved hosts (by a special kind of encapsulation, I suppose), but ESX/ESXi hosts expose this common MAC address under certain conditions. That enables the switch to learn its location and send all the traffic to that specific port (ESX/ESXi host), which defeats the purpose of a load balancer! There is a workaround, though, which is suggested in the link above.

IP Layer Monitoring in VMware vSphere – 2

Two posts earlier, I talked about NetFlow in VMware 5.x and how to enable it in a vSphere dvSwitch. I also showed how you can send IP traffic flow information to a NetFlow collector. Nowadays, there are lots of commercial NetFlow collectors available; however, in this post I will introduce a simple, open-source NetFlow collector which you can use in your VMware environment to analyze IP traffic. This pretty piece of software is ‘nfdump’.

Nfdump has 2 major elements: ‘nfcapd’, a daemon which gathers and stores the relevant packets, and ‘nfdump’, which collects the stored data from all the daemons and interprets it. nfcapd and nfdump can run on different machines, and there can be multiple daemons; in the case of VMware vSphere, the number of daemons depends solely on the number of dvSwitches. If there is only one distributed switch, all the IP traffic flow information from all port groups in that dvSwitch will be forwarded to one nfcapd. For test purposes, I deployed both nfdump and nfcapd on a single Linux machine, but in cases where traffic is high, it may be a good idea to deploy them on two different machines. Of course, nfdump should have access to the storage in that case.

After installation, you first need to run the daemon and specify a port and a directory in which to store the IP traffic information; nfcapd stores this information in binary format. The command is simple, something like this:

  • nfcapd -w -D -l /var/netflow/dvswitch -p 23456

The daemon will then run and listen on the specified port, 23456. If you have configured the dvSwitch correctly (by specifying the IP address of the Linux machine and 23456 as the port) and activated monitoring on some port groups in vCenter, this daemon will start generating files in that directory.
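
nfcapd rotates its files every 5 minutes by default, so after a short while the directory will look something like this (the timestamps are examples):

  • ls /var/netflow/dvswitch
    nfcapd.201404151150  nfcapd.201404151155  nfcapd.current
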
Whenever you want to view the captured IP traffic flows, run nfdump. Since there are lots of files in that directory, you can interpret the whole directory using the -R option with this command:

  • nfdump -R /var/netflow/dvswitch/

Filtering in nfdump is also possible, pretty much like in tcpdump, so you can view just the traffic of interest. You can find more information on the nfdump website.
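
For instance, to see only the HTTPS flows towards one particular server (the IP address is a placeholder), a filter can be appended to the same command:

  • nfdump -R /var/netflow/dvswitch/ 'dst ip 192.168.0.45 and dst port 443'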

To view the NetFlow-captured traffic visually, you can combine nfsen with nfdump. nfsen uses the information dumped by the daemon and, utilizing rrdtool, visualizes the traffic flow. Installation is not difficult, and you can find more information on their website. I’m really satisfied with this beautiful combination of nfdump and nfsen, and if you intend to use NetFlow for monitoring, I recommend trying them. Good luck!

IP Layer Monitoring in VMware vSphere – 1

Following my last post about administration and monitoring tasks in VMware 5, I will talk about another promising feature of VMware vSphere 5.x: support for NetFlow. NetFlow is a network protocol developed by Cisco for collecting IP traffic information, and it has become an industry standard for traffic monitoring.
As I wrote earlier, cloud/network engineers would like to have the same capabilities in virtualization as they have in physical networks, and NetFlow support is nowadays a standard feature of networking devices such as switches. In the same way that switches support NetFlow, VMware implemented NetFlow so that it can be enabled on vSwitches; it is especially useful on Distributed Switches. It’s good to mention that, from version 5.1, VMware also supports IPFIX, the newer version of NetFlow. You can find more general information about NetFlow itself on the internet.
Configuring NetFlow in VMware vSphere is a 2 step process:

  1. Configure NetFlow properties on the dvSwitch.

    Hints:
    – Port is the UDP port which the NetFlow collector will listen on; in the nfdump example from the next post, it is 23456.
    – Of course, the IP session between the dvSwitch and the NetFlow collector should be established properly; I mean the dvSwitch should be able to reach the NetFlow collector.

  2. Enable NetFlow on the specific dvPort group.

netflow

That’s it. In the next post, I will show how you can use a free, simple NetFlow Analyzer (nfdump, nfsen) to gather and display information about IP traffic flows in your dvSwitch.

Deploying IDS in VMware vSphere

As network or cloud administrators in a VMware environment, we would like to have the same capabilities we’ve got in a physical network. One of the most important tasks is network traffic monitoring and inspection. Let’s say you want to install a network Intrusion Detection System (like SNORT) to monitor the traffic of a specific Virtual Data Center in a vCloud environment; that translates to monitoring a specific VLAN or port group in VMware vSphere. Fortunately, VMware 5.x provides these features, but implementing them is beyond VMware vCloud Director operations; it’s part of the infrastructure administration tasks introduced in vSphere 5.x.
Since vCloud Director normally defines a port group in a Distributed Virtual Switch for each virtual data center, let’s talk about port groups in a VDS. You may have noticed that when you create a port group in a distributed switch, you can define some security policies, one of which is enabling ‘Promiscuous Mode’. This is exactly equivalent to enabling promiscuous mode on a physical switch. So, as shown in the following picture, a port group can be edited to enable this mode (in the vSphere Web Client).

promisc
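
If you prefer scripting over the Web Client, the same setting can be toggled with PowerCLI; a minimal sketch, assuming a recent PowerCLI with the VDS cmdlets and a hypothetical port group name:

  • Get-VDPortgroup -Name 'IDS-Monitor-PG' |
    Get-VDSecurityPolicy |
    Set-VDSecurityPolicy -AllowPromiscuous $true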

The only concern is that promiscuous mode has to be defined on a port group or on the whole distributed switch, not on a particular port. Doing this will cause all the traffic to be forwarded to all of the VMs in that port group! That is obviously a security risk, because we would like to forward the traffic to only one specific VM (port), namely our IDS.

A workaround here is to define a new port group with the same VLAN ID and the exact same configuration as the port group/VLAN we would like to monitor, then enable promiscuous mode on this newly defined port group and place the IDS VM in it. Because the VLAN ID is the same, only the IDS VM will see all the traffic. That’s an easy trick! However, I don’t know how this trick works with vCloud port groups that use VCDNI-backed network pools instead of VLAN-backed ones, because as I understand it, VCDNI is a kind of encapsulation introduced by vCloud Director, and I’m not sure whether a port group created inside vCenter can decapsulate those packets. I didn’t find enough information, so I will test this out and report back in this blog.

Another approach is to use the Port Mirroring feature of a VDS. With this method, it’s possible to specify the source ports which need to be monitored and the destination port(s) where the IDS is located.

This solution is explained in detail in the following link:

vSphere 5.1 – VDS Feature Enhancements – Port Mirroring