This guide describes how to:

- Install the OpenTelemetry Operator using the kube-stack Helm Chart.
- Collect logs, metrics, and traces from the cluster and send them to Elasticsearch.
- Enable zero-code (automatic) instrumentation of applications running in the cluster.

Before you begin, you need:

- Elastic Stack (self-managed or Elastic Cloud) version 8.16.0 or higher, or an Elasticsearch serverless project.
- A Kubernetes version supported by the OpenTelemetry Operator (refer to the operator's compatibility matrix for more details).
- If you opt for automatic certificate generation and renewal on the OpenTelemetry Operator, you need to install cert-manager in the Kubernetes cluster. By default, the operator installation uses a self-signed certificate and doesn't require cert-manager.
The minimum supported version of the Elastic Stack for OpenTelemetry-based monitoring on Kubernetes is 8.16.0. Different Elastic Stack releases support specific versions of the kube-stack Helm Chart. The following is the current list of supported versions:
| Stack Version | Helm Chart Version | Values file |
|---|---|---|
| Serverless | 0.3.0 | values.yaml |
| 8.16.0 | 0.3.0 | values.yaml |
When installing the release, ensure you use the right `--version` and `-f <values-file>` parameters. Values files are available in the resources directory.
The OpenTelemetry Operator is a Kubernetes Operator implementation designed to manage OpenTelemetry resources in a Kubernetes environment. It defines and oversees the following Custom Resource Definitions (CRDs):

- OpenTelemetryCollector: defines the collectors that gather and export telemetry data.
- Instrumentation: defines the configuration for zero-code (automatic) instrumentation of applications.
All signals, including logs, metrics, and traces, are processed by the collectors and sent directly to Elasticsearch using the Elasticsearch exporter. A collector's processor pipeline replaces the traditional APM server functionality for handling application traces.
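The full configuration lives in the chart's values file, but as a rough, minimal sketch of the idea (the environment variable names and the bare OTLP-only pipeline here are illustrative assumptions, not the chart's actual configuration), a collector exporting directly to Elasticsearch looks like this:

```yaml
# Illustrative sketch only: the pipelines defined in the provided values.yaml
# include many more receivers and processors than shown here.
receivers:
  otlp:
    protocols:
      grpc:
      http:
exporters:
  elasticsearch:
    endpoints: ["${env:ELASTIC_ENDPOINT}"]   # assumed to be injected from the elastic-secret-otel secret
    api_key: ${env:ELASTIC_API_KEY}
service:
  pipelines:
    logs:
      receivers: [otlp]
      exporters: [elasticsearch]
    metrics:
      receivers: [otlp]
      exporters: [elasticsearch]
    traces:
      receivers: [otlp]
      exporters: [elasticsearch]
```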
The kube-stack Helm Chart is used to manage the installation of the operator (including its CRDs) and to configure a suite of collectors, which instrument various Kubernetes components to enable comprehensive observability and monitoring.
The chart is installed with a provided default `values.yaml` file that can be customized when needed.
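If you want to review every option the chart accepts before customizing, one approach (a sketch, assuming the open-telemetry Helm repository has already been added as shown later in this guide; `my-values.yaml` is an arbitrary local file name) is to dump the chart's built-in defaults, which the provided values files override:

```
# Write the chart's default values to a local file for inspection.
helm show values open-telemetry/opentelemetry-kube-stack --version 0.3.0 > my-values.yaml
```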
The OpenTelemetry components deployed within the DaemonSet collectors are responsible for observing specific signals from each node. To ensure complete data collection, these components must be deployed on every node in the cluster. Failing to do so will result in partial and potentially incomplete data.
The DaemonSet collectors handle node-level data, including logs and metrics from each node and its containers, as well as application telemetry received from the pods running on that node.
The OpenTelemetry components deployed within the Deployment collector focus on gathering data at the cluster level rather than from individual nodes. Unlike DaemonSet collectors, which need to be deployed on every node, the Deployment collector operates as a standalone instance. It handles cluster-level data, such as metrics and events for Kubernetes objects.
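After installation, a quick way to see both collector modes side by side is to list the DaemonSets and Deployments in the operator namespace (a sketch; the exact resource names depend on the Helm release name used during installation):

```
# One DaemonSet collector pod per node, plus a single Deployment collector instance.
kubectl get daemonsets,deployments -n opentelemetry-operator-system
```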
The Helm Chart is configured to enable zero-code instrumentation using the Operator's Instrumentation resource for the following programming languages:

- Go
- Java
- Node.js
- Python
- .NET
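For reference, the Instrumentation resource created by the chart (referenced later in this guide as `opentelemetry-operator-system/elastic-instrumentation`) is a regular Kubernetes object. The following is a heavily trimmed sketch of what such a resource can look like; the collector endpoint is a placeholder, and the real resource carries the full per-language agent configuration:

```yaml
apiVersion: opentelemetry.io/v1alpha1
kind: Instrumentation
metadata:
  name: elastic-instrumentation
  namespace: opentelemetry-operator-system
spec:
  exporter:
    # Placeholder: in practice this points at a local collector service.
    endpoint: http://<COLLECTOR-SERVICE>:4318
  propagators:
    - tracecontext
    - baggage
```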
The guided onboarding simplifies deploying your Kubernetes components by setting up an API Key and the needed Integrations in the background. Follow these steps to use the guided onboarding:

1. In Kibana, open the Kubernetes OpenTelemetry onboarding flow.
2. Copy and run the installation command shown on screen, which installs the operator with the provided `values.yaml`.

Notes on installing the OpenTelemetry Operator:

- Ensure the `elastic_endpoint` shown in the installation command is valid for your environment. If not, replace it with the correct Elasticsearch endpoint.
- The `elastic_api_key` shown in the installation command corresponds to an API key created by Kibana when the onboarding process is initiated.

> [!NOTE]
> The default installation deploys an OpenTelemetry Operator with a self-signed TLS certificate. To automatically generate and renew publicly trusted certificates, refer to cert-manager integrated installation for instructions on customizing the `values.yaml` file before running the `helm install` command.
Before installing the operator, take the following actions:

- Create an API Key, and make note of its value (see the sketch after this list). (TBD: details of API key permissions.)
- Install the following integrations in Kibana:
  - System
  - Kubernetes
  - Kubernetes OpenTelemetry Assets
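As a sketch of the API Key step, a key can be created from Kibana (Stack Management → API keys) or through the Elasticsearch security API. The key name below is arbitrary and, since the exact permissions are still marked as TBD, no role descriptors are specified, so the key inherits the privileges of the authenticated user:

```
curl -X POST "https://YOUR_ELASTICSEARCH_ENDPOINT/_security/api_key" \
  -H "Content-Type: application/json" \
  -u "YOUR_USERNAME:YOUR_PASSWORD" \
  -d '{"name": "otel-operator-key"}'
# Make note of the "encoded" value in the response; it is typically what you
# provide as the API key.
```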
Notes:

- When using the guided onboarding, the API Key and the integrations are set up automatically by Kibana.

Create the `opentelemetry-operator-system` Kubernetes namespace:

```
$ kubectl create namespace opentelemetry-operator-system
```

Create a secret in the namespace with your Elasticsearch endpoint and API Key:

```
kubectl create -n opentelemetry-operator-system secret generic elastic-secret-otel \
  --from-literal=elastic_endpoint='YOUR_ELASTICSEARCH_ENDPOINT' \
  --from-literal=elastic_api_key='YOUR_ELASTICSEARCH_API_KEY'
```
Don't forget to replace:

- `YOUR_ELASTICSEARCH_ENDPOINT`: your Elasticsearch endpoint (with `https://` prefix, for example: `https://1234567.us-west2.gcp.elastic-cloud.com:443`).
- `YOUR_ELASTICSEARCH_API_KEY`: your Elasticsearch API Key.

Execute the following commands to deploy the Helm Chart:

```
$ helm repo add open-telemetry https://open-telemetry.github.io/opentelemetry-helm-charts
$ helm repo update
$ helm upgrade --install --namespace opentelemetry-operator-system opentelemetry-kube-stack open-telemetry/opentelemetry-kube-stack --values ./resources/kubernetes/operator/helm/values.yaml --version 0.3.0
```
Regardless of the installation method followed, perform the following checks to verify that everything is running properly:

- Check that data is arriving in the __logs-*__ data view.
- Check that data is arriving in the __metrics-*__ data view.
To enable auto-instrumentation, add the corresponding annotation to the pods of existing deployments (`spec.template.metadata.annotations`), or to the desired namespace (to auto-instrument all pods in the namespace):
```yaml
metadata:
  annotations:
    instrumentation.opentelemetry.io/inject-<LANGUAGE>: "opentelemetry-operator-system/elastic-instrumentation"
```
where `<LANGUAGE>` is one of: `go`, `java`, `nodejs`, `python`, `dotnet`.
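For example, to auto-instrument every pod in a namespace with the Java agent, you can annotate the namespace directly (the `my-app` namespace name is an illustrative placeholder; pods pick up the annotation when they are created or restarted):

```
kubectl annotate namespace my-app \
  instrumentation.opentelemetry.io/inject-java="opentelemetry-operator-system/elastic-instrumentation"
```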
For detailed instructions and examples on how to instrument applications in Kubernetes using the OpenTelemetry Operator, refer to Instrumenting applications.
For troubleshooting details and verification steps, refer to Troubleshooting auto-instrumentation.
In Kubernetes, for the API server to communicate with the webhook component (created by the operator), the webhook requires a TLS certificate that the API server is configured to trust. The provided default configuration sets the Helm Chart to auto-generate the required certificate as a self-signed certificate with an expiration policy of 365 days. These certificates won't be renewed unless the Helm Chart's release is manually updated. For production environments, we highly recommend using a certificate manager like cert-manager.
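If you stay with the self-signed certificate, one way to keep an eye on its expiration date is to inspect the certificate stored in the webhook secret (a sketch; `<WEBHOOK_CERT_SECRET>` is a placeholder for the secret created by your release, whose exact name depends on the release name):

```
# Print the expiration date of the webhook's TLS certificate.
kubectl get secret <WEBHOOK_CERT_SECRET> -n opentelemetry-operator-system \
  -o jsonpath='{.data.tls\.crt}' | base64 -d | openssl x509 -noout -enddate
```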
Integrating the operator with cert-manager enables automatic generation and renewal of publicly trusted TLS certificates. This section assumes that cert-manager and its CRDs are already installed in your Kubernetes environment. If that’s not the case, refer to the cert-manager installation guide before continuing.
Follow one of the following options to install the OpenTelemetry Operator Helm Chart integrated with cert-manager:
Add the flags `--set opentelemetry-operator.admissionWebhooks.certManager.enabled=true --set opentelemetry-operator.admissionWebhooks.autoGenerateCert=null` to the installation command. For example:

```
helm upgrade --install --namespace opentelemetry-operator-system opentelemetry-kube-stack open-telemetry/opentelemetry-kube-stack \
  --values ./resources/kubernetes/operator/helm/values.yaml --version 0.3.3 \
  --set opentelemetry-operator.admissionWebhooks.certManager.enabled=true --set opentelemetry-operator.admissionWebhooks.autoGenerateCert=null
```
Alternatively, keep an updated copy of the `values.yaml` file by following these steps:
Update the `values.yaml` file with the following changes:
Enable cert-manager integration for admission webhooks.
```yaml
opentelemetry-operator:
  admissionWebhooks:
    certManager:
      enabled: true # Change from `false` to `true`
```
Remove the generation of a self-signed certificate.
```yaml
# Remove the following lines:
autoGenerateCert:
  enabled: true
  recreate: true
```
Run the installation (or upgrade) command pointing to the updated file. For example, assuming that the updated file has been saved as `values_cert-manager.yaml`:
```
helm upgrade --install --namespace opentelemetry-operator-system opentelemetry-kube-stack open-telemetry/opentelemetry-kube-stack \
  --values ./resources/kubernetes/operator/helm/values_cert-manager.yaml --version 0.3.0
```