Use vault-cli in a pipeline

The other day I wanted to apply some HashiCorp Vault configuration from a pipeline to automate a few operations, but Vault was a private deployment inside a Kubernetes cluster. I was a bit lazy (not recommended) and also curious about this new (not anymore) trend which is ChatGPT! 🙂 So, here is my conversation, and it worked. It’s not complicated, but it was helpful.

“I want to access an internal kubernetes service endpoint in a script. In specific, we have hashicorp vault inside kubernetes. I want to have a script that port forward to this service and then I use my local endpoint to run a couple of commands using ‘vault cli’ in the same script. can you please give me an example script?”

and the answer:

Certainly! Here’s an example script that demonstrates how you can port forward to a Kubernetes service endpoint and use the HashiCorp Vault CLI to run commands:

#!/bin/bash

# Port forward to the Vault UI service
kubectl port-forward -n hashicorp service/vault-ui 8200:8200 &
PF_PID=$! # Remember the port-forward process ID
sleep 5 # Wait for port forwarding to be established

# Set the Vault endpoint to the local forwarded address
export VAULT_ADDR="http://127.0.0.1:8200"

# Authenticate to Vault (assuming token-based authentication)
export VAULT_TOKEN="your_vault_token"

# Run Vault CLI commands
vault status
vault secrets list

# Stop the port forwarding
kill "$PF_PID"
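One fragile part is the fixed sleep: if the port forward takes longer to come up, the vault commands fail. A small polling loop against Vault’s health endpoint is more robust; a minimal sketch that could replace the sleep line:

# Poll the forwarded port until Vault responds (up to ~30s) instead of a fixed sleep
for i in $(seq 1 30); do
  curl -s -o /dev/null http://127.0.0.1:8200/v1/sys/health && break
  sleep 1
done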

Lambda function for RDS Slow Query

Lambda functions are just another great tool from AWS for solving problems in a modern way! With a Lambda function, you can run a microservice without needing a server, or having to think about how to configure and maintain one!

There are lots of use cases for Lambda functions; here I used one to implement a service that sends alerts when a slow query is running in RDS. Slow queries matter to developers, as they help with debugging and improving the application’s performance. You can find the code here, but there are some other things to consider:

  • As you may know, there are several ways to trigger a Lambda function. In this case, using CloudWatch Events to schedule it periodically makes sense (see the CLI sketch after this list).
  • The Lambda function needs permissions to fetch RDS logs and to send alerts via SNS. To find out how to define the required policies, please see this AWS documentation. You are also prompted to set this up when creating the Lambda function.
  • There is a parameter named ‘distinguisher’, which is the keyword that marks the occurrence of a slow query in the log. For a ‘PostgreSQL’ RDS instance this can be ‘duration’, the keyword PostgreSQL prints in each slow query log entry.
  • The parameter group in RDS should be configured to log slow queries. To learn how to do this, please see the AWS documentation or this guide: Enabling slow query log on Amazon RDS
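For the scheduling part, here is a minimal sketch with the AWS CLI; the rule name, function name, REGION and ACCOUNT_ID below are placeholders, not from my actual setup:

# Create a CloudWatch Events rule that fires every 5 minutes
aws events put-rule --name rds-slow-query-schedule --schedule-expression "rate(5 minutes)"

# Allow CloudWatch Events to invoke the Lambda function
aws lambda add-permission --function-name rds-slow-query-alert \
  --statement-id cloudwatch-events-invoke --action lambda:InvokeFunction \
  --principal events.amazonaws.com \
  --source-arn arn:aws:events:REGION:ACCOUNT_ID:rule/rds-slow-query-schedule

# Point the rule at the function
aws events put-targets --rule rds-slow-query-schedule \
  --targets "Id"="1","Arn"="arn:aws:lambda:REGION:ACCOUNT_ID:function:rds-slow-query-alert"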

ElasticSearch snapshot on S3

If you use Elasticsearch for log analysis, you probably need a backup and retention strategy. It’s very handy to store backups in an S3 bucket and configure a lifecycle policy on that bucket. I know there is a tool (Curator) that can do this, but I preferred another approach and used the Elasticsearch REST APIs. Here is a step-by-step guide on how to achieve this:

1) Install the AWS plugin:

https://www.elastic.co/guide/en/elasticsearch/plugins/current/cloud-aws.html
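For Elasticsearch 2.x (the versions the cloud-aws plugin targets), the install is a one-liner with the bundled plugin script, run from the Elasticsearch home directory, followed by a service restart; paths and service names may differ on your install:

# Install the plugin, then restart Elasticsearch so it is picked up
sudo bin/plugin install cloud-aws
sudo service elasticsearch restart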

2) Create a repository in your Elasticsearch cluster:

curl -XPUT 'localhost:9200/_snapshot/backup_s3_repository?pretty' -d'
{
  "type": "s3",
  "settings": {
    "bucket": "BUCKETNAME",
    "region": "REGION",
    "base_path": "DIRECTORY_NAME_WITHIN_BUCKET"
  }
}'

Notes

  • The AWS plugin must be installed on all nodes, and the Elasticsearch service restarted on each, so that the plugin is recognized; otherwise you will get this error:

“Unknown [repository] type [s3]”
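Once the plugin is installed everywhere, you can verify that the repository was registered by reading it back:

curl -XGET 'localhost:9200/_snapshot/backup_s3_repository?pretty'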

3) Create a snapshot:

curl -k -XPUT 'https://localhost:9200/_snapshot/backup_s3_repository/snapshot_name?wait_for_completion=true&pretty'

4) Create a cron job that takes snapshots (step 3). You can drop `wait_for_completion=true` in the cron job.
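Keep in mind that snapshot names must be unique, so the cron job should generate a fresh name on each run, for example from the current date. A sketch of a crontab entry (note that ‘%’ must be escaped in a crontab):

# Take a snapshot every night at 02:00, named after the current date
0 2 * * * curl -k -XPUT "https://localhost:9200/_snapshot/backup_s3_repository/snapshot_$(date +\%Y\%m\%d)"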

5) Configure a lifecycle policy for that S3 bucket.
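This can be done from the S3 console or with the AWS CLI; here is a sketch that expires objects under the repository path after 30 days (the retention period is an assumption, pick whatever suits your strategy):

# Expire snapshot data under the repository prefix after 30 days
aws s3api put-bucket-lifecycle-configuration --bucket BUCKETNAME \
  --lifecycle-configuration '{
    "Rules": [{
      "ID": "expire-old-snapshots",
      "Status": "Enabled",
      "Filter": {"Prefix": "DIRECTORY_NAME_WITHIN_BUCKET/"},
      "Expiration": {"Days": 30}
    }]
  }'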