Mixing 802.1Q and 802.3ad in Linux

When it comes to networking, the Linux kernel has long been ahead of Windows. Why? Apart from raw performance, Linux has some great features that cannot easily be deployed on Windows. To give an example, think of two important ones: support for VLAN tagging and trunking (802.1Q) and NIC teaming, also known as link aggregation (802.3ad).

As far as I know, the Windows kernel does not support 802.1Q at all; it depends entirely on the NIC driver. Support for 802.3ad link aggregation only arrived with Windows Server 2012, which means it is still very young and relatively unproven, while both have been mature features in the Linux kernel for years.

And these features are really useful. For example, when a single computer needs to be a member of several VLANs, it has to be connected to a trunk port on the switch and must therefore understand VLAN tags and decapsulate tagged frames. That single computer can even act as a router between the different VLAN segments. Being connected to several VLANs also means more traffic, so it is not a bad idea to double its bandwidth (for example) by aggregating (bonding) two NICs to improve performance. I'm providing two links that show how to implement 802.1Q and 802.3ad on a single Linux machine with two or more NICs:

And to get an idea of how these two features can be combined, see:
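In the meantime, here is a rough sketch of how the combination can look, using the pyroute2 Python library. It must run as root, and the interface names, VLAN IDs and addresses are only placeholders for your own setup:

    # Sketch: create an 802.3ad (LACP) bond from two NICs and add
    # 802.1Q VLAN sub-interfaces on top of it. Requires root and pyroute2.
    from pyroute2 import IPRoute

    ipr = IPRoute()

    # Create the bond; bond_mode 4 corresponds to 802.3ad/LACP
    ipr.link("add", ifname="bond0", kind="bond", bond_mode=4)
    bond = ipr.link_lookup(ifname="bond0")[0]

    # Enslave the two physical NICs (they must be down while being enslaved)
    for nic in ("eth0", "eth1"):
        idx = ipr.link_lookup(ifname=nic)[0]
        ipr.link("set", index=idx, state="down")
        ipr.link("set", index=idx, master=bond)

    ipr.link("set", index=bond, state="up")

    # Add one 802.1Q sub-interface per VLAN carried on the trunk port
    for vlan_id, addr in ((10, "192.168.10.2"), (20, "192.168.20.2")):
        ipr.link("add", ifname=f"bond0.{vlan_id}", kind="vlan",
                 link=bond, vlan_id=vlan_id)
        vif = ipr.link_lookup(ifname=f"bond0.{vlan_id}")[0]
        ipr.addr("add", index=vif, address=addr, prefixlen=24)
        ipr.link("set", index=vif, state="up")

The switch side has to match, of course: the two physical ports go into an LACP channel and that channel is configured as a trunk carrying the same VLANs.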


VMware vSphere Big Data Extensions Installation – 2

To install VMware vSphere Big Data Extensions 1.1, first make sure you satisfy the requirements mentioned in the VMware documentation, then go ahead with the installation by deploying the Big Data Extensions OVA as documented. But pay attention to the following:

  • It is better to create a dedicated Resource Pool for your Big Data cluster, specify the total amount of resources you want to assign, and apply any limits you need.
  • Create a port group dedicated to Big Data Extensions as the communication link between the management server and the worker VMs.
  • When deploying the Big Data Extensions management server (OVA), the 'Setup networks' step asks you to assign a destination port group. Note that the management server will use this network to communicate with the vCenter server. So, if you use VLAN tags, the port group should be in the same VLAN (same VLAN ID) as the vCenter network. If vCenter cannot reach the Big Data management server and vice versa, the integration will not be set up properly (a quick reachability check is sketched after this list).


  • In the 'Customize template' step, there are two important settings: the SSO service and the management server IP address. From the right pane, open 'VC SSO Lookup Service URL' and 'Management Server Network Settings' and enter the appropriate values. For the SSO Lookup Service URL, point to the vCenter server using the default format (if you have not changed the defaults), i.e. port 7444 and the /lookupservice/sdk path. Use the FQDN of vCenter, not its IP address; otherwise the certificate will not be accepted and you will later see errors when connecting the Big Data Extensions plugin to the Serengeti server.

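Before going further, it can save time to verify from the Big Data Extensions management server that vCenter is reachable on the ports the integration relies on: 443 for the vSphere SDK and 7444 for the SSO lookup service (assuming the default ports). A minimal Python check, with a placeholder FQDN, might look like this:

    # Quick reachability check from the BDE management server towards vCenter.
    # Ports: 443 = vSphere SDK, 7444 = SSO lookup service (defaults).
    import socket

    VCENTER_FQDN = "vcenter.example.local"  # placeholder; use the vCenter FQDN, not the IP

    for port in (443, 7444):
        try:
            socket.create_connection((VCENTER_FQDN, port), timeout=5).close()
            print(f"{VCENTER_FQDN}:{port} reachable")
        except OSError as exc:
            print(f"{VCENTER_FQDN}:{port} NOT reachable: {exc}")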

Deploying IDS in VMware vSphere

As network or cloud administrators in a VMware environment, we would like to have the same capabilities we have in a physical network. One of the most important tasks is monitoring and inspecting network traffic. Let's say you want to install a network Intrusion Detection System (such as Snort) to monitor the traffic of a specific virtual data center in a vCloud environment; that translates to monitoring a specific VLAN or port group in VMware vSphere. Fortunately, vSphere 5.x provides these features, but implementing them is apparently beyond the scope of VMware vCloud Director operations and is part of the infrastructure administration tasks introduced in vSphere 5.x.
Since vCloud Director normally defines a port group on a Distributed Virtual Switch for each virtual data center, let's talk about port groups on a VDS. You may have noticed that when you create a port group on a distributed switch, you can define security policies, and one of them is 'Promiscuous Mode'. This is exactly equivalent to enabling promiscuous mode on a physical switch. So, as shown in the following picture, a port group can be edited to enable this mode (in the vSphere Web Client).

[Screenshot: enabling promiscuous mode on a distributed port group]

The only concern is that promiscuous mode is defined on a port group (or the whole distributed switch), not on a particular port. Enabling it causes all the traffic to be forwarded to every VM in that port group, which is clearly a security risk, because we only want to forward the traffic to one specific VM (port): our IDS. A workaround is to define a new port group with the same VLAN ID and the exact same configuration as the port group/VLAN we want to monitor, enable promiscuous mode only on this newly defined port group, and place the IDS VM in it. Because the VLAN ID is the same, only the IDS VM sees all the traffic. That's an easy trick! BUT I don't know how it behaves with vCloud port groups backed by VCDNI network pools instead of VLAN-backed network pools, because as I understand it, VCDNI is a kind of encapsulation introduced by vCloud Director and I'm not sure whether a port group created directly in vCenter can decapsulate those packets. I didn't find enough information, so I will test this out and report it in this blog.
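If you prefer to script this workaround rather than click through the Web Client, the same thing can be done through the vSphere API. The following is only a sketch using pyVmomi; the vCenter address, credentials, switch name, port group name and VLAN ID are all placeholders for your own values:

    # Sketch: create a "monitor" port group on a VDS with the same VLAN ID as the
    # network to inspect, with promiscuous mode enabled, using pyVmomi.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()  # lab only; use proper certificates in production
    si = SmartConnect(host="vcenter.example.local", user="administrator@vsphere.local",
                      pwd="secret", sslContext=ctx)
    content = si.RetrieveContent()

    # Find the distributed switch by name (placeholder name)
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.dvs.VmwareDistributedVirtualSwitch], True)
    dvs = next(d for d in view.view if d.name == "DSwitch01")

    # Port group settings: same VLAN ID as the monitored network, promiscuous mode on
    port_cfg = vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy()
    port_cfg.vlan = vim.dvs.VmwareDistributedVirtualSwitch.VlanIdSpec(
        vlanId=100, inherited=False)
    port_cfg.securityPolicy = vim.dvs.VmwareDistributedVirtualSwitch.SecurityPolicy(
        inherited=False,
        allowPromiscuous=vim.BoolPolicy(value=True, inherited=False))

    spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec(
        name="IDS-Monitor-VLAN100", type="earlyBinding", numPorts=8,
        defaultPortConfig=port_cfg)

    dvs.AddDVPortgroup_Task([spec])  # afterwards, place the IDS VM in this port group
    Disconnect(si)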

Another approach is to use the Port Mirroring feature of a VDS. With this method it is possible to specify the source ports that need to be monitored and the destination port(s) where the IDS is connected.

This solution is explained in detail in the following link:

vSphere 5.1 – VDS Feature Enhancements – Port Mirroring

vCloud Network Isolation (VCNI) Pools

As everyone mentions, vCloud Network Isolation (VCNI) is the most complicated type of network pool in VMware vCloud Director. It is a proprietary technique (apparently by VMware) that uses MAC-in-MAC encapsulation to distinguish between different private networks in a single physical VLAN.

[Figure: vCloud Network Isolation (MAC-in-MAC) network pool]

Above all, VCNI has a big advantage for cloud administrators: it reduces how often they need to involve the physical network administrators, because multiple isolated networks can be created inside a single carrier VLAN, whereas with the other types of network pools a VLAN has to exist or be created in the physical network. Also, since it uses a proprietary technique to create virtual VLANs (I know, it's like a Virtual Virtual LAN!), the number of networks is not limited to 4096. Of course it's not infinite, but it is a very big number: about 4 million. See here for more details.

However, implementing this type of network pool has a catch. Again, because it encapsulates network packets, it adds its own overhead of 24 bytes. So, assuming you create a vCloud Network Isolation network pool (as shown above), you are not done yet: you need to raise the MTU to at least 1524 (1600 is recommended to be safe) at three levels:

  1. vCloud Director – It is a mystery to me why VMware does not set 1524 by default when it knows VCNI needs it. Right-click the network pool, click 'Properties', go to 'Network Pool MTU' and change it to 1600.
  2. vCenter – Go to Home, Networking, choose the distributed switch connecting the hosts; right-click, select Edit Settings, go to Advanced, and change the value of Maximum MTU to 1600 (a scripted alternative is sketched after this list).
  3. Physical switch – Depends on your equipment, but it must be done there as well.
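If you would rather script the vCenter step (item 2), the same MTU change can be made through the vSphere API. Here is a minimal pyVmomi sketch; the vCenter address, credentials and switch name are placeholders:

    # Sketch: raise the maximum MTU on a distributed switch to 1600 using pyVmomi.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()  # lab only
    si = SmartConnect(host="vcenter.example.local", user="administrator@vsphere.local",
                      pwd="secret", sslContext=ctx)
    content = si.RetrieveContent()

    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.dvs.VmwareDistributedVirtualSwitch], True)
    dvs = next(d for d in view.view if d.name == "DSwitch01")  # placeholder switch name

    spec = vim.dvs.VmwareDistributedVirtualSwitch.ConfigSpec()
    spec.configVersion = dvs.config.configVersion  # required for reconfiguration
    spec.maxMtu = 1600                             # 1500 payload + 24-byte VCNI overhead, rounded up

    dvs.ReconfigureDvs_Task(spec)
    Disconnect(si)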

Now that I have covered the steps required to get an operational VCNI, and also mentioned its advantages, keep in mind that this type of network pool has some disadvantages too, which you can find in this great link explaining more details:
vCloud Director Networking – Part 2 in VMware Technologies Blog

P.S. If the MTU is not changed, VCNI will still work, but with poor performance because of fragmentation.