OpenTelemetry

OpenTelemetry integration in Kyverno.

OpenTelemetry Setup

Setting up OpenTelemetry requires configuring a few YAML files. The required configurations are listed below.

Install Cert-Manager

Install Cert-Manager by following the documentation (the Jaeger Operator installed later in this guide depends on it).
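For example, the static release manifest can be applied directly (a minimal sketch; replace the version as needed, and see the Cert-Manager documentation for alternatives such as Helm):

```bash
# Apply the Cert-Manager release manifest (the version here is an example; pick a current one)
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.8.0/cert-manager.yaml
```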

Config file for OpenTelemetry Collector

Create a configmap.yaml file in the kyverno Namespace with the following content:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: collector-config
  namespace: kyverno
data:
  collector.yaml: |
    receivers:
      otlp:
        protocols:
          grpc:
            endpoint: ":8000"
    processors:
      batch:
        send_batch_size: 10000
        timeout: 5s
    extensions:
      health_check: {}
    exporters:
      jaeger:
        endpoint: "jaeger-collector.observability.svc.cluster.local:14250"
        tls:
          insecure: true
      prometheus:
        endpoint: ":9090"
      logging:
        loglevel: debug
    service:
      extensions: [health_check]
      pipelines:
        traces:
          receivers: [otlp]
          processors: []
          exporters: [jaeger, logging]
        metrics:
          receivers: [otlp]
          processors: [batch]
          exporters: [prometheus, logging]
```
  • Here the Prometheus exporter endpoint is set to 9090, which means Prometheus can scrape the collector on that port to collect metrics.
  • Similarly, the Jaeger exporter endpoint references a Jaeger collector at its default gRPC port, 14250.

The Collector Deployment

Create a deployment.yaml file in the kyverno Namespace with the following content:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: opentelemetrycollector
  namespace: kyverno
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: opentelemetrycollector
  template:
    metadata:
      labels:
        app.kubernetes.io/name: opentelemetrycollector
    spec:
      containers:
      - name: otelcol
        args:
        - --config=/conf/collector.yaml
        image: otel/opentelemetry-collector:0.50.0
        volumeMounts:
        - name: collector-config
          mountPath: /conf
      volumes:
      - configMap:
          name: collector-config
          items:
          - key: collector.yaml
            path: collector.yaml
        name: collector-config
```

This references the collector configuration defined in the configmap.yaml above. Here we are using a Deployment with just a single replica. Ideally, a DaemonSet is preferred. Check the OpenTelemetry documentation for more deployment strategies.

The Collector Service

Finally, create a service.yaml file in the kyverno Namespace with the following content:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: opentelemetrycollector
  namespace: kyverno
spec:
  ports:
  - name: otlp-grpc
    port: 8000
    protocol: TCP
    targetPort: 8000
  - name: metrics
    port: 9090
    protocol: TCP
    targetPort: 9090
  selector:
    app.kubernetes.io/name: opentelemetrycollector
  type: ClusterIP
```

This defines a Service for the discovery of the collector Deployment, reachable in-cluster at opentelemetrycollector.kyverno.svc.cluster.local.
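As an optional sanity check, once the collector Pod is running its Prometheus exporter should answer on port 9090 of this Service. A sketch using a throwaway curl Pod (the Pod name is arbitrary):

```bash
# Run a one-off Pod and fetch the collector's metrics endpoint through the Service
kubectl -n kyverno run otel-check --rm -it --restart=Never \
  --image=curlimages/curl --command -- \
  curl -s http://opentelemetrycollector.kyverno.svc.cluster.local:9090/metrics
```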

Setting up Kyverno and passing required flags

See the installation instructions for Kyverno. Whichever method you use, the following flags must be passed.

  • Pass the flag metricsPort to define the OpenTelemetry Collector endpoint for collecting metrics.
  • Pass the flag otelConfig=grpc to export the metrics and traces to an OpenTelemetry collector on the metrics port endpoint (a sketch follows this list).
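For example, with a Helm-based install the flags can be passed through the chart's extraArgs value (a sketch, assuming the kyverno Helm repository is already added; 8000 matches the collector's OTLP gRPC port defined above):

```bash
# Sketch: select the OpenTelemetry (grpc) exporter and point the metrics port
# at the collector's OTLP gRPC receiver; adjust for your installation method
helm upgrade --install kyverno kyverno/kyverno -n kyverno \
  --set "extraArgs={--otelConfig=grpc,--metricsPort=8000}"
```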

Setting up a secure connection between Kyverno and the collector

Kyverno also supports setting up a secure connection with the OpenTelemetry exporter using TLS on the server side (the collector). This requires creating a certificate-key pair for the OpenTelemetry collector from a private CA and then saving the CA certificate as a Secret in your Kyverno Namespace with a key named ca.pem.
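A minimal sketch of generating such a pair with OpenSSL and storing it in the Secrets referenced by the Deployment below (the subjects and lifetimes are assumptions; production setups will want proper SANs and a managed CA):

```bash
# Hypothetical self-signed CA and a server certificate for the collector
openssl req -x509 -newkey rsa:4096 -nodes -days 365 \
  -keyout ca-key.pem -out ca.pem -subj "/CN=otel-ca"
openssl req -newkey rsa:4096 -nodes \
  -keyout server-key.pem -out server.csr \
  -subj "/CN=opentelemetrycollector.kyverno.svc.cluster.local"
openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem \
  -CAcreateserial -out server.pem -days 365

# Store the server pair and the CA certificate as Secrets in the kyverno Namespace
kubectl -n kyverno create secret generic otel-collector-secrets \
  --from-file=server.pem --from-file=server-key.pem
kubectl -n kyverno create secret generic root-ca --from-file=ca.pem
```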

Assuming you already have the server.pem and server-key.pem files along with the ca.pem file (you can generate these using a tool such as OpenSSL or cfssl), your OpenTelemetry configmap.yaml and deployment.yaml files will change accordingly:

configmap.yaml

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: collector-config
  namespace: kyverno
data:
  collector.yaml: |
    receivers:
      otlp:
        protocols:
          grpc:
            endpoint: ":8000"
            tls:
              cert_file: /etc/ssl/certs/server/server.pem
              key_file: /etc/ssl/certs/server/server-key.pem
              ca_file: /etc/ssl/certs/ca/ca.pem
    processors:
      batch:
        send_batch_size: 10000
        timeout: 5s
    extensions:
      health_check: {}
    exporters:
      jaeger:
        endpoint: "jaeger-collector.observability.svc.cluster.local:14250"
        tls:
          insecure: true
      prometheus:
        endpoint: ":9090"
      logging:
        loglevel: debug
    service:
      extensions: [health_check]
      pipelines:
        traces:
          receivers: [otlp]
          processors: []
          exporters: [jaeger, logging]
        metrics:
          receivers: [otlp]
          processors: [batch]
          exporters: [prometheus, logging]
```

deployment.yaml

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: opentelemetrycollector
  namespace: kyverno
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: opentelemetrycollector
  template:
    metadata:
      labels:
        app.kubernetes.io/name: opentelemetrycollector
    spec:
      containers:
      - name: otelcol
        args:
        - --config=/conf/collector.yaml
        image: otel/opentelemetry-collector:0.50.0
        volumeMounts:
        - name: collector-config
          mountPath: /conf
        - name: otel-collector-secrets
          mountPath: /etc/ssl/certs/server
        - name: root-ca
          mountPath: /etc/ssl/certs/ca
      volumes:
      - configMap:
          name: collector-config
          items:
          - key: collector.yaml
            path: collector.yaml
        name: collector-config
      - secret:
          secretName: otel-collector-secrets
        name: otel-collector-secrets
      - secret:
          secretName: root-ca
        name: root-ca
```

This ensures that the OpenTelemetry collector accepts only encrypted data on the receiver endpoint.

Pass the flag transportCreds set to the name of the Secret containing the ca.pem file (an empty string means an insecure connection will be used).
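For example, extending the earlier Helm sketch with the root-ca Secret created above (the names are assumptions tied to the manifests in this guide):

```bash
# Sketch: tell Kyverno which Secret holds the CA certificate for TLS
helm upgrade --install kyverno kyverno/kyverno -n kyverno \
  --set "extraArgs={--otelConfig=grpc,--metricsPort=8000,--transportCreds=root-ca}"
```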

Setting up Prometheus

  • For the metrics backend, you can install Prometheus on your cluster. For a general example, we have a ready-made configuration for you. Install Prometheus by running:

```bash
kubectl apply -k github.com/kyverno/grafana-dashboard/examples/prometheus
```

  • Port-forward the Prometheus Service to view the metrics on localhost:

```bash
kubectl port-forward svc/prometheus-server 9090:9090 -n kyverno
```

Setting up Jaeger

The traces are pushed to the Jaeger backend on port 14250. To install Jaeger:

First, install the Jaeger Operator, replacing the version as needed.

```bash
kubectl create namespace observability
kubectl create -n observability -f https://github.com/jaegertracing/jaeger-operator/releases/download/v1.33.0/jaeger-operator.yaml
kubectl wait --for=condition=Available deployment --timeout=2m -n observability --all
```

Create a Jaeger resource configuration as shown below.

jaeger.yaml

```yaml
apiVersion: jaegertracing.io/v1
kind: Jaeger
metadata:
  name: jaeger
  namespace: observability
```

Install the Jaeger backend:

```bash
kubectl create -f jaeger.yaml
```

Port-forward the Jaeger Service on 16686 to view the traces at http://localhost:16686.

```bash
kubectl port-forward svc/jaeger-query 16686:16686 -n observability
```
