This guide describes how to set up OpenTelemetry-based Kubernetes monitoring and application auto-instrumentation with the Elastic Stack, using the OpenTelemetry Operator and the kube-stack Helm chart. Before you start, make sure you have the following:
- Elastic Stack (self-managed or Elastic Cloud) version 8.16.0 or higher, or an Elasticsearch serverless project.
- A Kubernetes version supported by the OpenTelemetry Operator (refer to the operator’s compatibility matrix for more details).
- If you opt for automatic certificate generation and renewal on the OpenTelemetry Operator, you need to install cert-manager in the Kubernetes cluster. By default, the operator installation uses a self-signed certificate and doesn’t require cert-manager.
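To compare your cluster against the operator’s compatibility matrix, you can print the cluster version with kubectl. This is only a convenience check, not part of the official prerequisites:

```bash
# Print the kubectl client version and the Kubernetes cluster (server) version.
kubectl version
```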
The minimum supported version of the Elastic Stack for OpenTelemetry-based monitoring on Kubernetes is 8.16.0. Different Elastic Stack releases support specific versions of the kube-stack Helm chart.
The following is the current list of supported versions:
| Stack Version | Helm chart Version | Values file |
|---|---|---|
| Serverless | 0.3.3 | values.yaml |
| 8.16.0 | 0.3.3 | values.yaml |
When installing the release, ensure you use the right `--version` and `-f <values-file>` parameters. Values files are available in the resources directory.
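For example, a sketch of pinning both parameters for the 8.16 row of the table above, assuming the open-telemetry Helm repository has already been added (this mirrors the full installation command shown later in this guide):

```bash
# Pin the chart version and point -f at the values file matching your Stack version.
helm upgrade --install opentelemetry-kube-stack open-telemetry/opentelemetry-kube-stack \
  --namespace opentelemetry-operator-system \
  --version 0.3.3 \
  -f 'https://raw.githubusercontent.com/elastic/opentelemetry/refs/heads/8.16/resources/kubernetes/operator/helm/values.yaml'
```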
Getting started with OpenTelemetry for Kubernetes observability requires an understanding of the following components, their functions, and interactions: OpenTelemetry Operator, Collectors, kube-stack Helm Chart, and auto-instrumentation resources.
The guided onboarding simplifies deploying your Kubernetes components by setting up an API Key and the needed Integrations in the background. To use the guided onboarding, start the onboarding flow in Kibana and follow the on-screen steps, which install the OpenTelemetry Operator with the kube-stack Helm chart and the provided `values.yaml`.

Notes on installing the OpenTelemetry Operator:

- Ensure the `elastic_endpoint` shown in the installation command is valid for your environment. If not, replace it with the correct Elasticsearch endpoint.
- The `elastic_api_key` shown in the installation command corresponds to an API key created by Kibana when the onboarding process is initiated.

[!NOTE] The default installation deploys an OpenTelemetry Operator with a self-signed TLS certificate. To automatically generate and renew certificates, refer to cert-manager integrated installation for instructions on customizing the `values.yaml` file before running the `helm install` command.
Before installing the operator, complete the following actions:

- Create an API Key, and make note of its value. (TBD: details of API key permissions.) See the example request after this list.
- Install the following integrations in Kibana:
  - System
  - Kubernetes
  - Kubernetes OpenTelemetry Assets
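You can create the API key from Kibana, or directly with the Elasticsearch security API. The following is a minimal sketch of the API route; the key name is arbitrary, basic authentication with an admin user is an assumption, and no privileges are restricted because the required permissions are still marked as TBD above:

```bash
# Create an API key. Make note of the returned key (typically the "encoded" value)
# for use as YOUR_ELASTICSEARCH_API_KEY in the next steps.
# YOUR_ELASTICSEARCH_ENDPOINT already includes the https:// prefix.
curl -X POST "YOUR_ELASTICSEARCH_ENDPOINT/_security/api_key" \
  -H "Content-Type: application/json" \
  -u "elastic:YOUR_PASSWORD" \
  -d '{ "name": "otel-kube-stack-onboarding" }'
```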
Notes:
Create the `opentelemetry-operator-system` Kubernetes namespace:
$ kubectl create namespace opentelemetry-operator-system
Create a secret in the created namespace with the following command:
kubectl create -n opentelemetry-operator-system secret generic elastic-secret-otel \
--from-literal=elastic_endpoint='YOUR_ELASTICSEARCH_ENDPOINT' \
--from-literal=elastic_api_key='YOUR_ELASTICSEARCH_API_KEY'
Don’t forget to replace:

- `YOUR_ELASTICSEARCH_ENDPOINT`: your Elasticsearch endpoint (with the `https://` prefix). For example: `https://1234567.us-west2.gcp.elastic-cloud.com:443`.
- `YOUR_ELASTICSEARCH_API_KEY`: the Elasticsearch API Key created in the previous step.

If you need to customize the configuration, make a copy of the `values.yaml` file and adapt it to your needs. Refer to the compatibility matrix for a complete list of available manifests in the release branches, such as the 8.16 values file.
Run the following commands to deploy the `opentelemetry-kube-stack` Helm chart, using the appropriate values file:
helm repo add open-telemetry https://open-telemetry.github.io/opentelemetry-helm-charts
helm repo update
helm upgrade --install --namespace opentelemetry-operator-system opentelemetry-kube-stack open-telemetry/opentelemetry-kube-stack \
--values 'https://raw.githubusercontent.com/elastic/opentelemetry/refs/heads/8.16/resources/kubernetes/operator/helm/values.yaml' \
--version 0.3.3
Regardless of the installation method followed, perform the following checks to verify that everything is running properly:
- In Kibana, confirm that documents are being ingested into the __logs-*__ data view.
- In Kibana, confirm that documents are being ingested into the __metrics-*__ data view.
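You can also check from the cluster side that the operator and the EDOT Collectors deployed by the chart are running. This is a quick sketch; the OpenTelemetryCollector check assumes the kube-stack chart created the collectors as custom resources reconciled by the operator:

```bash
# All pods in the operator namespace should reach the Running state.
kubectl get pods -n opentelemetry-operator-system

# The EDOT Collectors are defined as OpenTelemetryCollector resources,
# which the operator turns into a DaemonSet and a Deployment.
kubectl get opentelemetrycollectors -n opentelemetry-operator-system
```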
To enable auto-instrumentation, add the corresponding annotation to the pods of existing deployments (`spec.template.metadata.annotations`), or to the desired namespace (to auto-instrument all pods in the namespace):
metadata:
annotations:
instrumentation.opentelemetry.io/inject-<LANGUAGE>: "opentelemetry-operator-system/elastic-instrumentation"
where `<LANGUAGE>` is one of: `go`, `java`, `nodejs`, `python`, `dotnet`.
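For example, the following is a minimal sketch of a hypothetical Java Deployment with the annotation in place. The deployment name and image are placeholders; the annotation value matches the `elastic-instrumentation` resource installed by the chart:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-java-app                 # hypothetical deployment name
spec:
  selector:
    matchLabels:
      app: my-java-app
  template:
    metadata:
      labels:
        app: my-java-app
      annotations:
        # Ask the operator to inject the Java auto-instrumentation agent.
        instrumentation.opentelemetry.io/inject-java: "opentelemetry-operator-system/elastic-instrumentation"
    spec:
      containers:
        - name: my-java-app
          image: my-registry/my-java-app:latest   # hypothetical image
```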
For detailed instructions and examples on how to instrument applications in Kubernetes using the OpenTelemetry Operator, refer to Instrumenting applications.
For troubleshooting details and verification steps, refer to Troubleshooting auto-instrumentation.
[!NOTE] Before upgrading or updating the release configuration, refer to the compatibility matrix for a list of supported versions and to customizing configuration for a list of supported configurable parameters.
To upgrade an installed release, run a `helm upgrade` command providing the desired chart version and using the correct `values.yaml` for your environment. For example:
helm repo update open-telemetry # update information of available charts locally
helm search repo open-telemetry/opentelemetry-kube-stack --versions # list available versions of the chart
helm upgrade --namespace opentelemetry-operator-system opentelemetry-kube-stack open-telemetry/opentelemetry-kube-stack \
--values 'https://raw.githubusercontent.com/elastic/opentelemetry/refs/heads/8.16/resources/kubernetes/operator/helm/values.yaml' \
--version 0.3.3
If cert-manager integration is disabled, Helm generates a new self-signed TLS certificate with every update, even if there are no actual changes to apply.
To customize the installation parameters, change the configuration values provided in the `values.yaml` file, or override them using `--set parameter=value` during the installation.
To update an installed release, run a `helm upgrade` with the updated `values.yaml` file. Depending on the changes, some Pods may need to be restarted for the updates to take effect. Refer to upgrades for a command example.
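If an updated setting doesn’t seem to be picked up, restarting the workloads in the operator namespace is usually enough. A hedged example; it restarts every Deployment and DaemonSet in the namespace, which is acceptable here because the namespace is dedicated to the operator and collectors:

```bash
# Trigger a rolling restart of all Deployments and DaemonSets
# in the opentelemetry-operator-system namespace.
kubectl rollout restart deployment,daemonset -n opentelemetry-operator-system
```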
The following table lists common parameters that might be relevant for your use case:
| `values.yaml` parameter | Description |
|---|---|
| `clusterName` | Sets the `k8s.cluster.name` field in all collected data. The cluster name is automatically detected for EKS/GKE/AKS clusters, but it might be useful for other environments. When monitoring multiple Kubernetes clusters, ensure that the cluster name is properly set in all your environments. Refer to the resourcedetection processor for more details about cluster name detection. |
| `collectors.cluster.resources` | Configures CPU and memory requests and limits applied to the Deployment EDOT Collector responsible for cluster-level metrics. This setting follows the standard Kubernetes resources syntax for specifying requests and limits. |
| `collectors.daemon.resources` | Configures CPU and memory requests and limits applied to the DaemonSet EDOT Collector responsible for node-level metrics and application traces. This setting follows the standard Kubernetes resources syntax for specifying requests and limits. |
| `certManager.enabled` | Defaults to `false`. Refer to cert-manager integrated installation for more details. |
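For example, the following is a hedged sketch of a customized copy of the values file that sets a cluster name and adjusts the DaemonSet collector resources. The parameter paths follow the table above; the resource figures are illustrative only, not recommendations:

```yaml
clusterName: my-cluster            # sets k8s.cluster.name on all collected data
collectors:
  daemon:
    resources:
      requests:
        cpu: 100m
        memory: 500Mi
      limits:
        cpu: 500m
        memory: 1Gi
```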
For more information on all available parameters and their meaning, refer to:

- `values.yaml`, which includes the default settings for the EDOT installation.
- The `kube-stack` Helm chart values file, with explanations of all parameters.

In Kubernetes, for the API server to communicate with the webhook component (created by the operator), the webhook requires a TLS certificate that the API server is configured to trust. The default provided configuration sets the Helm chart to auto generate the required certificate as a self-signed certificate with an expiration policy of 365 days. These certificates won’t be renewed if the Helm chart’s release is not manually updated. For production environments, we highly recommend using a certificate manager like cert-manager.
Integrating the operator with cert-manager enables automatic generation and renewal of the TLS certificate. This section assumes that cert-manager and its CRDs are already installed in your Kubernetes environment. If that’s not the case, refer to the cert-manager installation guide before continuing.
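If cert-manager is not installed yet, the following is a minimal sketch of installing it with Helm. The jetstack chart repository is the upstream default; depending on your cert-manager version, the CRD flag may be `installCRDs=true` instead of `crds.enabled=true`:

```bash
helm repo add jetstack https://charts.jetstack.io
helm repo update
# Install cert-manager together with its CRDs.
helm install cert-manager jetstack/cert-manager \
  --namespace cert-manager --create-namespace \
  --set crds.enabled=true
```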
Use one of the following options to install the `opentelemetry-kube-stack` Helm chart integrated with `cert-manager`:
Add `--set opentelemetry-operator.admissionWebhooks.certManager.enabled=true --set opentelemetry-operator.admissionWebhooks.autoGenerateCert=null` to the installation command. For example:

helm upgrade --install --namespace opentelemetry-operator-system opentelemetry-kube-stack open-telemetry/opentelemetry-kube-stack \
--values ./resources/kubernetes/operator/helm/values.yaml --version 0.3.3 \
--set opentelemetry-operator.admissionWebhooks.certManager.enabled=true --set opentelemetry-operator.admissionWebhooks.autoGenerateCert=null
Alternatively, keep an updated copy of the `values.yaml` file by following these steps:

Update the `values.yaml` file with the following changes:
Enable cert-manager integration for admission webhooks.
opentelemetry-operator:
admissionWebhooks:
certManager:
enabled: true # Change from `false` to `true`
Remove the generation of a self-signed certificate.
# Remove the following lines:
autoGenerateCert:
enabled: true
recreate: true
Run the installation (or upgrade) command pointing to the updated file. For example, assuming that the updated file has been saved as `values_cert-manager.yaml`:
helm upgrade --install --namespace opentelemetry-operator-system opentelemetry-kube-stack open-telemetry/opentelemetry-kube-stack \
--values ./resources/kubernetes/operator/helm/values_cert-manager.yaml --version 0.3.3
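Once the release is installed with cert-manager integration enabled, you can confirm that the webhook certificate was issued. This is a hedged check; the exact Certificate name depends on the release:

```bash
# cert-manager should report a Certificate with READY=True
# for the operator's admission webhook.
kubectl get certificates -n opentelemetry-operator-system
```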