Using Flux in Microk8s

This post is a rather short one, more like a tweet, but I hope it helps people who are in the same situation as me and can’t find much information online.

This time we are talking about MicroK8s and Flux. MicroK8s is a great Kubernetes distribution, CNCF-certified and especially useful for edge and IoT. Flux is an implementation of GitOps by Weaveworks and a CNCF project. It’s also a nice tool that takes Kubernetes configuration management to another level.

They work well together, but a small tweak is required in one of the steps. When you want to authorize Flux to access your GitHub repository over SSH, you need its public key so it can be added as a Deploy Key in GitHub. The command that works for MicroK8s is the following:

microk8s.kubectl -n flux --kubeconfig /var/snap/microk8s/current/credentials/client.config logs deployment/flux | grep identity.pub | cut -d '"' -f2

That’s it; this prints the required public key.
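If you have the Flux command-line tool installed, the same key can be fetched without grepping the logs. This is just a sketch, assuming fluxctl (the Flux v1 CLI) is on your PATH and Flux runs in the flux namespace:

# Point the CLI at the MicroK8s kubeconfig, then ask Flux for its SSH public key
export KUBECONFIG=/var/snap/microk8s/current/credentials/client.config
fluxctl identity --k8s-fwd-ns flux

Either way, paste the printed key as a Deploy Key in your GitHub repository settings, and give it write access if you keep Flux’s default behaviour of pushing a sync tag back to the repository.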


Integrating Kibana in Elastic Cloud with Okta – Step by Step

Implementing SSO becomes very useful as teams grow. Okta is well known as an identity provider, specifically for SSO. Elastic is also well known for its great products, including Elasticsearch and Kibana! Elastic started its hosted service (Elastic Cloud) and added nice features such as Hot/Warm deployments, which made it popular. Both have good documentation, but when it comes to this specific integration things are not clear. I spent some time on it and communicated with support on both sides, and in this post I will show how to integrate Kibana hosted on Elastic Cloud with Okta as the IdP, step by step:

  • The first step is to configure the Okta side to get the SAML assertion XML. Go to the Okta Admin page and add an application. Choose a SAML 2.0 app. Then you have to specify some basic information such as the app name and logo. Next is SAML Settings, which is the important part. In particular, the following parameters should be defined:
    1. Single sign-on URL: for Elastic Cloud the format is:
      https://YOUR_CLUSTER_ADDRESS:9243/api/security/v1/saml
      please note that /api/security/v1/saml is fixed (at least at the time this post was written)
    2. Audience URI (SP Entity ID): This is exactly the URL of your Kibana in Elastic Cloud, but please don’t forget the trailing /:
      https://YOUR_CLUSTER_ADDRESS:9243/
    3. Name ID Format: depends on your Okta usernames. In my case it’s EmailAddress

      [Screenshot: SAML Settings in Okta]
    4. Group Attribute Statements: This is very important for granular access management and role mapping in Kibana.
      For the Name, specify groups; this is important. Then, for better management, you can specify a filter to match groups that contain kibana (as an example). It will filter the groups that you created in the Okta directory and will help in mapping them to Kibana/Elasticsearch X-Pack roles. For example, you can create a group in Okta with a name like kibana_admins and add to it the Okta users that you want to have the superuser privilege in Elasticsearch. We will come back to this mapping later.
      [Screenshot: Group Attribute Statements in Okta]
    5. That’s almost it on the Okta side. You can review and check the guide provided by Okta on how to introduce the assertion and metadata to the service provider (Kibana/Elasticsearch).
  • The next step is configuring Elasticsearch. The main guide for doing this on Elastic Cloud is the following:
    https://www.elastic.co/guide/en/cloud/current/ec-securing-clusters-SAML.html
    You can edit elasticsearch.yml by going to the Cloud Console, choosing your deployment, clicking Edit, and then `User Setting Override`:
    [Screenshot: User Setting Override in the Elastic Cloud Console]

    The following values are important:
    1. attributes.principal: the guide explains what this is, but in the case of Okta its value apparently should be nameid
    2. attributes.groups: It should be in line with item 4 of the previous step (Okta), but I recommend using exactly groups as the value
    3. idp.metadata.path: It is something like the following:
      https://YOURCOMPANY.okta.com/app/OKTA_APP_ID/sso/saml/metadata
      but you can get it by visiting the Sign On configuration in Okta. In the following picture, see the link pointing to the metadata, in blue at the bottom.
      [Screenshot: Identity Provider metadata link in the Okta Sign On configuration]
    4. idp.entity_id: It is mentioned if you check Setup Instructions in the Sign On configuration (picture above), and it looks like the following:
      http://www.okta.com/OKTA_APP_ID
    5. sp.entity_id: As the guide says, it should be:
      "KIBANA_ENDPOINT_URL/", but keep in mind that it should align with item 2 of the previous step in Okta and, again, don’t forget the trailing / 🙂
    6. The rest is straightforward and you won’t miss it. (A consolidated sketch of these settings is shown after this list.)
  • Tips:
    • This is very important and took me and Elastic support a lot of time to troubleshoot: if you have Hot and Warm nodes, you must apply the configuration to both node types; there are separate elasticsearch.yml files in the Cloud Console.
    • The value for attributes.principal should be exactly nameid; nameid:persistent won’t work.
    • You must use the SAML realm name cloud-saml (mentioned in the guide)
  • The next step is role mapping. You can read about it here:
    https://www.elastic.co/guide/en/elastic-stack-overview/7.3/saml-role-mapping.html
    So far we have set up Okta to send some group metadata along with the auth response, and using these APIs we have to map the groups in Okta to roles in Elasticsearch. For example, I have two groups in Okta named kibana_operators and kibana_admins. Using the following mappings, I map them to the Monitor and superuser roles in Elasticsearch:
######## Role Mapping 1 - Operators #######
PUT /_security/role_mapping/saml-kibana-operators
{
  "roles": [ "Monitor" ],
  "enabled": true,
  "rules": { "all" : [
      { "field": { "realm.name": "cloud-saml" } },
      { "field": { "groups": "kibana_operators" } }
  ]},
  "metadata": { "version": 1 }
}

######## Role Mapping 2 - Admins #######
PUT /_security/role_mapping/saml-kibana-admins
{
   "enabled": true,
    "roles": [ "superuser" ],
    "rules": { "all" : [
        { "field": { "realm.name": "cloud-saml" } },
        { "field": { "groups": "kibana_admins" } }
    ]},
    "metadata": { "version": 1 }
}
  • Finally, you have to configure Kibana to use SAML as the authentication mechanism. This step is straightforward and is described in step 6 of the same guide. Just edit kibana.yml using User Setting Overrides in the Elastic Cloud Console and specify the values accordingly (see the sketch below).
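To tie the items above together, here is a consolidated sketch of what the two overrides can end up looking like. It assumes the 7.x realm syntax used by the linked guide; YOUR_CLUSTER_ADDRESS, YOURCOMPANY and OKTA_APP_ID are the same placeholders as above, and the exact keys (especially on the Kibana side) have changed across stack versions, so double-check them against the guide:

######## elasticsearch.yml (User Setting Override) #######
xpack.security.authc.realms.saml.cloud-saml:
  order: 2
  attributes.principal: "nameid"
  attributes.groups: "groups"
  idp.metadata.path: "https://YOURCOMPANY.okta.com/app/OKTA_APP_ID/sso/saml/metadata"
  idp.entity_id: "http://www.okta.com/OKTA_APP_ID"
  sp.entity_id: "https://YOUR_CLUSTER_ADDRESS:9243/"
  sp.acs: "https://YOUR_CLUSTER_ADDRESS:9243/api/security/v1/saml"
  sp.logout: "https://YOUR_CLUSTER_ADDRESS:9243/logout"

######## kibana.yml (User Setting Override) #######
xpack.security.authProviders: [saml]
server.xsrf.whitelist: [/api/security/v1/saml]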

And that’s it! Now you should be able to create users in Okta and add them to the appropriate groups to give them access to Kibana!

Key authentication with SSH Secure Shell

The non-commercial version of SSH Secure Shell (which can be obtained here) from SSH Communications Security is a decent SSH client that I have used for many years in my experiments and academic work. It lacks PKI and PKCS functionality, but it is still fine for experiments! However, when it comes to public key authentication, it needs some tweaks to work. Here are the steps required to enable key authentication against a Linux host, given that the host’s settings allow public/private key authentication:

  1. Connect to the host using SSH Secure Shell (by password)
  2. In the Secure Shell client, go to Edit -> Settings -> User Authentication -> Keys and click on ‘Generate New’.
  3. When generation is done, it will ask you to upload the public key to the host. Let it upload to ‘.ssh’ as the destination folder.
  4. The client assumes that the host runs the matching SSH server for this client (the company ships an SSH server too), but since standard Linux servers use OpenSSH as their SSH server, uploading the public key to the host is not enough and the modifications below are needed.
  5. On the Linux host, you will see that a public key (KeyAuthTest.pub in this case) has been uploaded to the ‘.ssh’ directory. To make it work with OpenSSH, there are two ways:
    • Edit ‘KeyAuthTest.pub’ manually and give it the right format. Remove these lines (or something like them) at the beginning:
      ---- BEGIN SSH2 PUBLIC KEY ----
      Comment: “[3072-bit rsa, yyyy@xxxx, Thu Oct 04 2012 21:33:49]”
      and this one at the end:
      ---- END SSH2 PUBLIC KEY ----
      Also, remove all the line breaks (and any carriage returns) so the whole key sits on a single line, then add ‘ssh-rsa’ at the beginning of the file. The file should end up looking something like:
      ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC+…
      Finally, in the shell, append this file to the ‘authorized_keys’ file:
      cat ~/.ssh/KeyAuthTest.pub >> ~/.ssh/authorized_keys
    • The second approach is to convert the key to the proper OpenSSH format automatically and append it to the file:
      ssh-keygen -i -f ~/.ssh/KeyAuthTest.pub  >>  ~/.ssh/authorized_keys

Now you will be able to connect to the host using this public key.
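If the connection still falls back to password authentication, the usual culprit on the OpenSSH side is file permissions. A quick check, as a sketch assuming standard OpenSSH defaults and a Debian/Ubuntu-style log location:

# OpenSSH refuses key authentication if .ssh or authorized_keys is group/world writable
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys
# Watch the server log while retrying to see why a key is rejected
# (the path differs per distribution, e.g. /var/log/secure on RHEL/CentOS)
sudo tail -f /var/log/auth.log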

VPN access for vCloud Director Customers

Providing VPN access to vCD customers is a great idea, because usually customers are behind a vCloud-created firewall and most likely you created a routed organization VDC network to connect them to the external network. So how do they get access to their VDC? One approach could be to define different sets of firewall and NAT rules for the required access ports (SSH, RDP, …), but as the number of VMs grows this becomes less flexible, although still doable. A customer could even get access to only one VM and go through this single VM to access the others; however, sometimes it’s not a simple remote access session and the remote user wants to do a more advanced task.

By the way, I don’t want to go into the details of the benefits of having a VPN for remote clients, but it seems like a very helpful facility for cloud customers. We could leave it to users to install their own VPN server to tunnel through to the organization VDC network, but VMware provides an excellent capability to set up VPN gateways in vCloud Director or a vSphere cluster. For a site-to-site IPsec VPN, VMware vCD is pretty much straightforward, so if you have a VPN gateway in place you can easily establish a tunnel between your local network and your organization network in the cloud. I found this guide about setting up an IPsec tunnel in vCloud Director with useful examples, one of them with a Cisco WAN router. Here is another guide for a Cisco PIX and vCD; although the vCD version there is old (1.5), it is very similar in terms of VPN tunnelling.

However, if you don’t have a VPN endpoint in place and still want to establish a secure VPN connection to your vCD organization network as a remote user, VMware provides a brilliant SSL VPN utility. It’s not as straightforward as the IPsec VPN and it’s not present in the vCD web portal, but it is worth deploying (especially for customers). VMware SSL VPN is configured in vCloud Networking and Security (the new name for vShield Manager).

I’m not writing a how-to for this here; a complete step-by-step guide, very well explained by Ranga Maddipudi, can be found here. I just wanted to give some idea, and as you can see, deploying an SSL VPN gateway is fairly easy and an installable file (an .exe file for Windows) is provided. To get this file, the user should browse, on the client side, to the external address of the gateway: https://external-ip-address-of-gateway:443
After getting the file, the user can easily install the VPN client, and that’s it.


After running the application and entering the right credentials, the VPN connection will be established and, given that the server-side configuration is well defined, the remote user will get access to the organization VDC network in the cloud. In fact, what excites me from an engineering point of view is that VMware did a great job of easing the whole procedure of setting up a connection on both the server and the client side; in particular, generating a custom-built VPN client that uses SSL for authentication and encryption is a brilliant idea.


Deploying IDS in VMware vSphere

As a network or cloud administrator in a VMware environment, we would like to have the same capabilities we have in a physical network. One of the most important tasks is network traffic monitoring and inspection. Let’s say you want to install a network Intrusion Detection System (such as Snort) to monitor the traffic of a specific virtual data center in a vCloud environment; that translates to monitoring a specific VLAN or port group in VMware vSphere. Fortunately, vSphere 5.x provides these features, but implementing them is beyond VMware vCloud Director operations and is part of the infrastructure administration tasks introduced in vSphere 5.x.
Since there is normally a port group in a Distributed Virtual Switch (VDS) defined by vCloud Director for each virtual data center, let’s talk about port groups in a VDS. You may have noticed that when you create a port group in a distributed switch, you can define some security policies, and one of them is enabling ‘Promiscuous Mode’. This is exactly equivalent to enabling promiscuous mode on a physical switch. So, as shown in the following picture, a port group can be edited to enable this mode (in the vSphere Web Client).

[Screenshot: enabling Promiscuous Mode in the port group security policy]

The only concern is that promiscuous mode has to be defined on a port group or on the whole distributed switch, not on a particular port. Enabling it will cause all the traffic to be forwarded to all of the VMs in that port group, which is obviously a security risk, because we would like to forward the traffic only to one specific VM (port), namely our IDS. A workaround is to define a new port group with the same VLAN ID and the exact same configuration as the port group/VLAN we want to monitor, enable promiscuous mode on this newly defined port group, and place the IDS VM in it. Because the VLAN ID is the same, only the IDS VM will see all the traffic. That’s an easy trick! But I don’t know how this works with vCloud port groups backed by VCDNI network pools instead of VLAN-backed ones, because as I understand it, VCDNI is a kind of encapsulation introduced by vCloud Director and I’m not sure whether a port group created inside vCenter can decapsulate those packets. I didn’t find enough information, so I will test this out and report it in this blog.
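Once the IDS VM sits in the promiscuous port group, it’s easy to confirm that it really sees its neighbours’ traffic before wiring it into the IDS itself. A quick sketch, assuming eth1 is the vNIC attached to the monitored port group (adjust the interface name and Snort config path to your setup):

# Capture traffic that is not addressed to the IDS VM itself; if the trick works,
# you should see frames exchanged between the other VMs in the same VLAN.
sudo tcpdump -i eth1 -nn not host $(hostname -I | awk '{print $1}')
# Once the capture looks right, point Snort at the same interface:
sudo snort -i eth1 -A console -c /etc/snort/snort.conf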

Another approach is to use the Port Mirroring feature of a VDS. With this method it’s possible to specify the source ports that need to be monitored and the destination port(s) where the IDS is located.

This solution is explained in detail in the following link:

vSphere 5.1 – VDS Feature Enhancements – Port Mirroring

Securing Access to VMware vCenter

Since VMware vCenter uses ports 80 and 443 to provide access to the management console (for both the vSphere Client and the Web Console), it’s important to secure these ports. In practice, that means limiting access to specific IP addresses in your internal network. If there is no firewall between your internal network and the cloud infrastructure, at least configure the firewall on the Windows machine (if vCenter is installed on Windows) to restrict access.
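For the Windows Firewall case, a minimal sketch with netsh could look like the following; 10.0.0.0/24 is a placeholder for your management subnet, and any pre-existing broader allow rules for these ports would still need to be removed or disabled:

:: Allow the vCenter management ports only from the admin subnet
netsh advfirewall firewall add rule name="vCenter HTTP (admin subnet)" dir=in action=allow protocol=TCP localport=80 remoteip=10.0.0.0/24
netsh advfirewall firewall add rule name="vCenter HTTPS (admin subnet)" dir=in action=allow protocol=TCP localport=443 remoteip=10.0.0.0/24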

Also, for a complete list of tasks to harden vCenter security, you can consult the Security Advisories and Guides documentation from VMware.