Secure Logstash Configuration with Hashicorp Vault on K8s

This post is a follow-up to my earlier article on HashiCorp Vault on K8s, expanding it with inline templates and the handling of binary file content. Please check out the full sample in my GitHub repo.

Problem Statement

I chose Logstash to implement a Kafka consumer (input plugin) and a syslog producer (output plugin). Since I am implementing syslog over TLS, both plugins need SSL certificates and Java keystores. In addition to externalizing passwords for config files, this setup has to handle binary content with Vault.

The official documentation shows several use cases for how control is passed to the Vault init containers or sidecars. This snippet is very similar to the one in the first blog post.

apiVersion: apps/v1
kind: Deployment
metadata:
   name: logstash-syslog
   namespace: monitoring
spec:
   template:
      metadata:
         annotations:
            vault.hashicorp.com/agent-inject: "true"
            vault.hashicorp.com/agent-init-first: "true"
            vault.hashicorp.com/preserve-secret-case: "true"
            vault.hashicorp.com/agent-pre-populate-only: "true"
            vault.hashicorp.com/agent-configmap: "logstash-config"
            vault.hashicorp.com/tls-secret: "vault-agent-injector-tls"
            vault.hashicorp.com/ca-cert: "/vault/tls/ca.crt"
            vault.hashicorp.com/role: "logstash-syslog"

This manifest triggers the patch that lets HashiCorp Vault take control with an init container. The most important line is the reference to the ConfigMap: it contains at least the Vault agent configuration file config-init.hcl and the files referenced by it:

"auto_auth" = {
   "method" = {
      "config" = {
         "role" = "logstash-syslog"
      }
   "mount_path" = "auth/<your-env>"
   "type" = "kubernetes"
   }
"sink" = {
   "config" = {
      "path" = "/home/vault/.token"
   }
   "type" = "file"
   }
}
"exit_after_auth" = true
"pid_file" = "/home/vault/.pid"

"template" = {
   "source" = "/vault/configs/logstash.conf"
   "destination" = "/vault/secrets/logstash.conf"
}

"template" = {
   "contents" = <<EOF
{{ with secret "kv-v2/secret/esc-logs" -}}{{ base64Decode .Data.data.client_crt }}{{- end }}
EOF
   "destination" = "/vault/secrets/client.crt"
}

"template" = {
   "contents" = <<EOF
{{ with secret "kv-v2/secret/esc-logs" -}}{{ base64Decode .Data.data.client_key }}{{- end }}
EOF
   "destination" = "/vault/secrets/client.key"
}

"template" = {
   "contents" = <<EOF
{{ with secret "kv-v2/secret/esc-logs" -}}{{ base64Decode .Data.data.<trusted_keystore>_jks }}{{- end }}
EOF
   "destination" = "/vault/secrets/<trusted_keystore>.jks"
}

"template" = {
   "contents" = <<EOF
{{ with secret "kv-v2/secret/esc-logs" -}}{{ base64Decode .Data.data.<keystore>_jks }}{{- end }}
EOF
   "destination" = "/vault/secrets/<keystore>.jks"
}

"vault" = {
   "address" = "<your vault>"
   "ca_cert" = "/vault/tls/ca.crt"
   "client_cert" = "/vault/tls/tls.crt"
   "client_key" = "/vault/tls/tls.key"
}
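The ConfigMap referenced by the vault.hashicorp.com/agent-configmap annotation can be created straight from these files. A minimal sketch, assuming config-init.hcl and logstash.conf sit in the current directory under exactly these names:

# Build the ConfigMap the injector annotation points to
# (file names are assumptions; adjust to your repo layout)
kubectl create configmap logstash-config \
   --namespace monitoring \
   --from-file=config-init.hcl \
   --from-file=logstash.conf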

Let’s first follow the first template stanza, the one with “source” and “destination”. Here the main Logstash configuration file gets preprocessed by Vault: the Consul Template expressions are evaluated and replaced with values retrieved from Vault.

Logstash Configuration

If you want to launch Logstash without parameters, you should edit pipelines.yml. I simply removed the comments from one of the sample pipelines and set some reasonable parameters. Without this step you are forced to call “bin/logstash -f /vault/secrets/logstash.conf”.

# Example of two pipelines:
#
# - pipeline.id: test
#   pipeline.workers: 1
#   pipeline.batch.size: 1
#   config.string: "input { generator{} } filter {sleep { time => 1 } } output { stdout { codec => dots } }"
- pipeline.id: esc-logs
  queue.type: persisted
  pipeline.workers: 3
  path.config: "/vault/secrets/logstash.conf"

This is the referenced file logstash.conf:

input {
   kafka {
      bootstrap_servers => "<your kafka servers>"
      topics => ["<your topic>"]
      group_id => "<your group>"
      security_protocol => "SSL"
      ssl_truststore_location => "/vault/secrets/<trusted_keystore>.jks"
      ssl_truststore_password => '{{- with secret "kv-v2/secret/esc-logs" -}}{{ .Data.data.keystore_password }}{{- end }}'
      ssl_keystore_location => "/vault/secrets/<keystore>.jks"
      ssl_keystore_password => '{{- with secret "kv-v2/secret/esc-logs" -}}{{ .Data.data.keystore_password }}{{- end }}'
      ssl_key_password => '{{- with secret "kv-v2/secret/esc-logs" -}}{{ .Data.data.keystore_password }}{{- end }}'
      codec => "json"
   }
}

output {
   syslog {
      id => "syslog sender"
      sourcehost => "<your syslog source host>"
      protocol => "ssl-tcp"
      host => "<your syslog host>"
      port => "6514"
      codec => "json"
      appname => "esc-logs"
      ssl_key => "/vault/secrets/client.key"
      ssl_cert => "/vault/secrets/client.crt"
   }
}

This configuration activates the Kafka plugin as input and syslog as output; no filtering is applied, to keep it really simple. The passwords for the keystores and keys are scripted with Consul Template expressions, which the vault-agent-init container replaces before the Logstash container is started. The configuration also points to binary files that need to be fetched from Vault and made available in the container; Vault does this without further configuration, since /vault/secrets is the default mount path for rendered secrets. In my sample I mounted the whole ConfigMap to provide the contents of the Logstash config directory.
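Once the pod is up, it is easy to confirm that the init container rendered everything as expected. A small sketch, assuming the deployment from the manifest above:

# List the files rendered by the vault-agent-init container
kubectl exec -n monitoring deploy/logstash-syslog -- ls -l /vault/secrets

# Count leftover '{{' template markers in the rendered config; this should print 0
kubectl exec -n monitoring deploy/logstash-syslog -- grep -c '{{' /vault/secrets/logstash.conf || true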

Binary Files in Vault

Vault supports storing file-based secrets as base64-encoded strings:

base64 <keystore>.jks | vault kv put secret/<your-path> <your-key>=-
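To make sure nothing gets mangled on the way in, you can read the value back and compare checksums. A minimal sketch with the same placeholder path and key:

# Read the secret back, decode it and compare it with the original file
vault kv get -field=<your-key> secret/<your-path> | base64 --decode > roundtrip.jks
sha256sum <keystore>.jks roundtrip.jks   # both checksums should match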

Retrieving the files is quite a simple task with HCL, provided you find some good samples:

...
"template" = {
   "contents" = <<EOF
{{ with secret "kv-v2/secret/esc-logs" -}}{{ base64Decode .Data.data.<keystore>_jks }}{{- end }}
EOF
   "destination" = "/vault/secrets/<keystore>.jks"
}
...

Multiple template entries are allowed and they are quite flexible. Here I only used “source” for input file paths and “contents” to include template expressions inline. The trick is to use a heredoc (<<EOF), the familiar Linux mechanism for inline file input. The statement above fetches the (string) contents of the attribute “<keystore>_jks” at the given path, base64-decodes it and stores the result at the destination.

Very important: the template expressions themselves are removed and replaced by the data from Vault. If you start a line with a space, that space becomes the first byte of the generated file; that is why indentation does not work here. And another pitfall I could hardly avoid: never use dashes in your Vault attribute names, always switch to underscores.
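If in doubt whether a stray space or newline slipped into a rendered binary, inspecting the first bytes inside the pod is the quickest check. A sketch assuming the paths used above (a JKS keystore starts with the magic bytes fe ed fe ed; a leading 20 would be the tell-tale space):

# Dump the first four bytes of the rendered keystore as hex
kubectl exec -n monitoring deploy/logstash-syslog -- od -A x -t x1 -N 4 /vault/secrets/<keystore>.jks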
