Karpenter in EKS and CloudTrail Events

We recently started migrating from CAS (Cluster Autoscaler) and EC2 ASGs (Auto Scaling Groups) to Karpenter in some of our EKS clusters. So far so good, and I’m happy with the results, especially its excellent use of Spot Instances 🙂 and the reduction in our EC2 costs, but I noticed something interesting about CloudTrail logs.

I noticed that our CloudTrail costs in the accounts with Karpenter had increased slightly. Looking closer, I saw a lot of UpdateInstanceInformation events, and the identity source for these events was the Karpenter node role making calls to AWS SSM. It makes sense, because the nodes Karpenter launches come with the SSM Agent, and the SSM Agent calls this Systems Manager API every 5 minutes to provide heartbeat information. So, if you have configured CloudTrail to log all management events, you will see this event much more often once you have an EKS cluster running Karpenter.
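If you want to check how noisy these calls are in a given account, a quick look with the AWS CLI is enough. This is just a sketch, assuming default credentials and region:

aws cloudtrail lookup-events \
  --lookup-attributes AttributeKey=EventName,AttributeValue=UpdateInstanceInformation \
  --max-results 20 \
  --query 'Events[].{Time:EventTime,User:Username}'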

Use vault-cli in a pipeline

The other day I wanted to apply some HashiCorp Vault configuration in a pipeline to automate some operations, but Vault was a private deployment inside a Kubernetes cluster. I was a bit lazy (not recommended) and also curious about this new (not anymore) trend which is ChatGPT! 🙂 So, here is my conversation, and it worked. It’s not complicated, but it was helpful.

“I want to access an internal kubernetes service endpoint in a script. In specific, we have hashicorp vault inside kubernetes. I want to have a script that port forward to this service and then I use my local endpoint to run a couple of commands using ‘vault cli’ in the same script. can you please give me an example script?”

and the answer:

Certainly! Here’s an example script that demonstrates how you can port forward to a Kubernetes service endpoint and use the HashiCorp Vault CLI to run commands:

#!/bin/bash

# Port forward to the Vault UI service
kubectl port-forward -n hashicorp service/vault-ui 8200:8200 &
sleep 5 # Wait for port forwarding to be established

# Set the Vault endpoint to the local forwarded address
export VAULT_ADDR="http://127.0.0.1:8200"

# Authenticate to Vault (assuming token-based authentication)
export VAULT_TOKEN="your_vault_token"

# Run Vault CLI commands
vault status
vault secrets list

# Stop the port forwarding
kill %1
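It worked, but for a real pipeline I would harden it a bit. Here is a slightly more defensive sketch (assuming the same vault-ui service and hashicorp namespace) that waits for the port forward to be ready and cleans it up on exit instead of relying on a fixed sleep:

#!/bin/bash
set -euo pipefail

# Start the port forward in the background and remember its PID
kubectl port-forward -n hashicorp service/vault-ui 8200:8200 &
PF_PID=$!

# Stop the port forward when the script exits, even on failure
trap 'kill "$PF_PID" 2>/dev/null || true' EXIT

export VAULT_ADDR="http://127.0.0.1:8200"

# Wait until the local endpoint actually answers
until curl -s "$VAULT_ADDR/v1/sys/health" > /dev/null; do
  sleep 1
done

# Expect the token to come from the pipeline's secret store, not hard-coded
export VAULT_TOKEN="${VAULT_TOKEN:?VAULT_TOKEN must be set by the pipeline}"

# Run Vault CLI commands
vault status
vault secrets list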

Using Flux in Microk8s

This post is a rather short one, like a real tweet, but I hope it helps people who might be in the same situation as me and can’t find much information online.

This time we are talking about Microk8s and Flux. Microk8s is a great implementation of Kubernetes, especially useful for edge and IoT, and it is a CNCF-certified Kubernetes distribution. Flux is a GitOps implementation by Weaveworks and a CNCF project. It’s also a nice tool that takes Kubernetes configuration management to another level.

They work well together, but there is a small tweak required in one of the steps. When you want to authorize Flux to access your GitHub repository over SSH, you need the public key generated by Flux to be added as a Deploy Key in GitHub. The command that works for Microk8s is the following:

microk8s.kubectl -n flux --kubeconfig /var/snap/microk8s/current/credentials/client.config logs deployment/flux | grep identity.pub | cut -d '"' -f2

That’s it; running this command will print the required public key.
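For comparison, on a setup where fluxctl is installed and can reach the cluster directly, the same public key can usually be retrieved with the following (an equivalent as far as I know, assuming Flux runs in the flux namespace):

fluxctl identity --k8s-fwd-ns flux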