Clean up ech/ece troubleshooting content #925

Merged 3 commits on Mar 25, 2025
4 changes: 3 additions & 1 deletion redirects.yml
Original file line number Diff line number Diff line change
@@ -85,4 +85,6 @@ redirects:
'reference/ingestion-tools/fleet/migrate-from-beats-to-elastic-agent.md': 'reference/fleet/migrate-from-beats-to-elastic-agent.md'

## troubleshoot
'troubleshoot/deployments/cloud-enterprise/ask-for-help.md': 'troubleshoot/index.md'
'troubleshoot/deployments/serverless-status.md': 'troubleshoot/deployments/serverless.md'
'troubleshoot/deployments/esf/elastic-serverless-forwarder.md': 'troubleshoot/ingest/elastic-serverless-forwarder.md'
@@ -0,0 +1,38 @@
---
navigation_title: "Deployment health warnings"
applies_to:
deployment:
ece: all
mapped_pages:
- https://www.elastic.co/guide/en/cloud-enterprise/current/ece-deployment-no-op.html
---

# Troubleshoot deployment health warnings [ece-deployment-no-op]

The {{ece}} **Deployments** page shows the current status of your active deployments. From time to time you may get one or more health warnings, such as the following:

:::{image} /troubleshoot/images/cloud-ec-ce-deployment-health-warning.png
:alt: A screen capture of the deployment page showing a typical warning: Deployment health warning: Latest change to {{es}} configuration failed.
:::

**Single warning**

To resolve a single health warning, we recommend first running a _no-op_ (no operation) plan. This performs a rolling update on the components in your Elastic Cloud Enterprise deployment without actually applying any configuration changes. This is often all that’s needed to resolve a health warning in the UI.

To run a no-op plan:

1. [Log into the Cloud UI](https://www.elastic.co/guide/en/cloud-enterprise/current/ece-login.html).
2. Select a deployment.

You can narrow the list by name or ID, or refine it further with a combination of the other available filters.

3. From your deployment menu, go to the **Edit** page.
4. Select **Save**.
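The Edit-and-Save steps above can also be scripted against the ECE deployments API. The sketch below only builds the request; the host, API key, and request body are placeholder assumptions — in practice you would first `GET` the deployment's current configuration from the same URL and resubmit it unchanged:

```python
import json
import urllib.request

# Hypothetical values -- substitute your own Cloud UI host and API key.
ECE_HOST = "https://ece.example.com:12443"
API_KEY = "your-api-key"

def build_noop_request(deployment_id: str, current_config: dict) -> urllib.request.Request:
    """Build a PUT that resubmits the deployment's current configuration
    unchanged -- the API analogue of opening Edit and clicking Save."""
    url = f"{ECE_HOST}/api/v1/deployments/{deployment_id}"
    return urllib.request.Request(
        url,
        data=json.dumps(current_config).encode("utf-8"),
        method="PUT",
        headers={
            "Authorization": f"ApiKey {API_KEY}",
            "Content-Type": "application/json",
        },
    )

# Build (but don't send) a no-op resubmission for a sample deployment ID.
req = build_noop_request("abc123", {"name": "my-deployment"})
print(req.get_method(), req.full_url)
```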

**Multiple warnings**

If multiple health warnings appear for one of your deployments, check [](/troubleshoot/deployments/cloud-enterprise/common-issues.md) or [contact us](/troubleshoot/index.md#contact-us).

## Additional resources
* [Elastic Cloud Hosted deployment health warnings](/troubleshoot/monitoring/deployment-health-warnings.md)
* [Troubleshooting overview](/troubleshoot/index.md)
125 changes: 125 additions & 0 deletions troubleshoot/deployments/cloud-enterprise/node-bootlooping.md
@@ -0,0 +1,125 @@
---
navigation_title: "Node bootlooping"
applies_to:
deployment:
ece: all
mapped_pages:
- https://www.elastic.co/guide/en/cloud-enterprise/current/ece-config-change-errors.html
---

# Troubleshoot node bootlooping in {{ece}} [ece-config-change-errors]

When you attempt to apply a configuration change to a deployment, the attempt may fail with an error indicating that the change could not be applied, and deployment resources may be unable to restart. In some cases, bootlooping may result, where the deployment resources cycle through a continual reboot process.

:::{image} /troubleshoot/images/cloud-ec-ce-configuration-change-failure.png
:alt: A screen capture of the deployment page showing an error: Latest change to {{es}} configuration failed.
:::

To confirm whether your {{es}} cluster is bootlooping, check the most recent plan on your [Deployment Activity page](/deploy-manage/deploy/elastic-cloud/keep-track-of-deployment-activity.md) for the error:

```sh
Plan change failed: Some instances were unable to start properly.
```

Here are some frequent causes of a failed configuration change:

* [Secure settings](#ece-config-change-errors-secure-settings)
* [Expired custom plugins or bundles](#ece-config-change-errors-expired-bundle-extension)
* [OOM errors](#ece-config-change-errors-oom-errors)
* [Existing index](#ece-config-change-errors-existing-index)
* [Insufficient storage](#ece-config-change-errors-insufficient-storage)

If you’re unable to remediate the failing plan’s root cause, you can attempt to reset the deployment to the latest successful {{es}} configuration by performing a [no-op plan](/troubleshoot/monitoring/deployment-health-warnings.md).

## Secure settings [ece-config-change-errors-secure-settings]

The most frequent cause of a failed deployment configuration change is due to invalid or mislocated [secure settings](/deploy-manage/security/secure-settings.md).
The keystore allows you to safely store sensitive settings, such as passwords, as a key/value pair. You can then access a secret value from a settings file by referencing its key. Importantly, not all settings can be stored in the keystore, and the keystore does not validate the settings that you add. Adding unsupported settings can cause {{es}} or other components to fail to restart. To check whether a setting is supported in the keystore, look for a "Secure" qualifier in the [lists of reloadable settings](/deploy-manage/security/secure-settings.md).

The following sections detail some secure settings problems that can result in a configuration change error that can prevent a deployment from restarting. You might diagnose these plan failures via the logs or via their [related exit codes](/deploy-manage/maintenance/start-stop-services/start-stop-elasticsearch.md#fatal-errors) `1`, `3`, and `78`.


### Invalid or outdated values [ece-config-change-errors-old-values]

The keystore does not validate any settings that you add, so invalid or outdated values are a common source of errors when you apply a configuration change to a deployment.

To check the current set of stored settings:

1. Open the deployment **Security** page.
2. In the **{{es}} keystore** section, check the **Security keys** list. The list is shown only if you currently have settings configured in the keystore.

One frequent cause of errors is when settings in the keystore are no longer valid, such as when SAML settings are added for a test environment, but the settings are either not carried over or no longer valid in a production environment.


### Snapshot repositories [ece-config-change-errors-snapshot-repos]

Sometimes, settings added to the keystore to connect to a snapshot repository may not be valid. When this happens, you may get an error such as `SettingsException[Neither a secret key nor a shared access token was set.]`.

For example, when adding an [Azure repository storage setting](/deploy-manage/tools/snapshot-and-restore/azure-repository.md#repository-azure-usage) such as `azure.client.default.account` to the keystore, the associated setting `azure.client.default.key` must also be added for the configuration to be valid.
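As a rough illustration of this pairing rule, the sketch below flags an `account` entry whose companion `key` (or `sas_token`) is absent. The setting names come from the Azure repository documentation; the validation helper itself is hypothetical, not an Elastic tool:

```python
# Map each setting to the companion settings that must accompany it.
REQUIRED_COMPANIONS = {
    "azure.client.default.account": (
        "azure.client.default.key",
        "azure.client.default.sas_token",
    ),
}

def missing_companions(keystore_keys: set) -> list:
    """Return settings present in the keystore whose required companion
    is absent -- the pattern behind the SettingsException above."""
    return [
        setting
        for setting, companions in REQUIRED_COMPANIONS.items()
        if setting in keystore_keys
        and not any(c in keystore_keys for c in companions)
    ]

print(missing_companions({"azure.client.default.account"}))
# -> ['azure.client.default.account']
```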


### Third-party authentication [ece-config-change-errors-third-party-auth]

When you configure third-party authentication, it’s important that all required configuration elements that are stored in the keystore are included in the {{es}} user settings file. For example, when you [create a SAML realm](/deploy-manage/users-roles/cluster-or-deployment-auth/saml.md#saml-create-realm), omitting a field such as `idp.entity_id` when that setting is present in the keystore results in a failed configuration change.


### Wrong location [ece-config-change-errors-wrong-location]

In some cases, settings may accidentally be added to the keystore that should have been added to the [{{es}} user settings file](/deploy-manage/deploy/elastic-cloud/edit-stack-settings.md). It’s always a good idea to check the [lists of reloadable settings](/deploy-manage/security/secure-settings.md) to determine if a setting can be stored in the keystore. Settings that can safely be added to the keystore are flagged as `Secure`.


## Expired custom plugins or bundles [ece-config-change-errors-expired-bundle-extension]

During the process of applying a configuration change, {{ecloud}} checks to determine if any [uploaded custom plugins or bundles](/deploy-manage/deploy/elastic-cloud/upload-custom-plugins-bundles.md) are expired.

Problematic plugins produce oscillating {{es}} start-up logs like the following:

```sh
Booting at Sun Sep 4 03:06:43 UTC 2022
Installing user plugins.
Installing elasticsearch-analysis-izumo-master-7.10.2-20210618-28f8a97...
/app/elasticsearch.sh: line 169: [: too many arguments
Booting at Sun Sep 4 03:06:58 UTC 2022
Installing user plugins.
Installing elasticsearch-analysis-izumo-master-7.10.2-20210618-28f8a97...
/app/elasticsearch.sh: line 169: [: too many arguments
```

Problematic bundles produce similar oscillations, but their install log entries look like the following:

```sh
2024-11-17 15:18:02 https://found-user-plugins.s3.amazonaws.com/XXXXX/XXXXX.zip?response-content-disposition=attachment%3Bfilename%XXXXX%2F4007535947.zip&x-elastic-extension-version=1574194077471&X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Date=20241016T133214Z&X-Amz-SignedHeaders=host&X-Amz-Expires=86400&XAmz-Credential=XXXXX%2F20201016%2Fus-east-1%2Fs3%2Faws4_request&X-AmzSignature=XXXXX
```

Note that in this example the signed URL’s expiration (`X-Amz-Date=20241016T133214Z` plus the `X-Amz-Expires=86400` lifetime) falls before the log timestamp `2024-11-17 15:18:02`, so this bundle is considered expired.
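The comparison described above can be sketched as a small check (a hypothetical helper, not part of {{ecloud}}): parse `X-Amz-Date` and `X-Amz-Expires` out of the signed URL and compare the resulting expiry against the boot-log timestamp:

```python
from datetime import datetime, timedelta
from urllib.parse import parse_qs, urlparse

def bundle_url_expired(signed_url: str, log_time: datetime) -> bool:
    """A signed bundle URL has expired when X-Amz-Date plus the
    X-Amz-Expires lifetime (in seconds) falls before the log timestamp."""
    params = parse_qs(urlparse(signed_url).query)
    signed_at = datetime.strptime(params["X-Amz-Date"][0], "%Y%m%dT%H%M%SZ")
    lifetime = timedelta(seconds=int(params["X-Amz-Expires"][0]))
    return signed_at + lifetime < log_time

# Values from the example above: signed 2024-10-16, valid 86400 s (1 day),
# boot log written 2024-11-17 -- long after expiry.
url = ("https://bucket.example.com/bundle.zip"
       "?X-Amz-Date=20241016T133214Z&X-Amz-Expires=86400")
print(bundle_url_expired(url, datetime(2024, 11, 17, 15, 18, 2)))  # -> True
```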

To view any added plugins or bundles:

1. Go to the **Features** page and open the **Extensions** tab.
2. Select any extension and then choose **Update extension** to renew it. No other changes are needed, and any associated configuration change failures should now be able to succeed.


## OOM errors [ece-config-change-errors-oom-errors]

Configuration change errors can occur when there is insufficient RAM configured for a data tier. In this case, the cluster typically also shows OOM (out of memory) errors. To resolve these, you need to increase the amount of heap memory, which is half of the amount of memory allocated to a cluster. You might also detect OOM in plan changes via their [related exit codes](/deploy-manage/maintenance/start-stop-services/start-stop-elasticsearch.md#fatal-errors) `127`, `137`, and `158`.
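The sizing rule above can be sketched in a couple of lines. The exit-code set is taken from this section; the helpers themselves are hypothetical, not Elastic tooling:

```python
# Exit codes this section associates with OOM during plan changes.
OOM_EXIT_CODES = {127, 137, 158}

def heap_gb(node_ram_gb: float) -> float:
    """Heap is roughly half the RAM allocated to a node, so doubling
    a tier's RAM also roughly doubles the available heap."""
    return node_ram_gb / 2

def looks_like_oom(exit_code: int) -> bool:
    """Flag plan-change exit codes worth investigating as OOM."""
    return exit_code in OOM_EXIT_CODES

print(heap_gb(8))           # -> 4.0
print(looks_like_oom(137))  # -> True
```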

You can also read our detailed blog [Managing and troubleshooting {{es}} memory](https://www.elastic.co/blog/managing-and-troubleshooting-elasticsearch-memory).


## Existing index [ece-config-change-errors-existing-index]

In rare cases, when you attempt to upgrade the version of a deployment and the upgrade fails on the first attempt, subsequent attempts to upgrade may fail due to already existing resources. The problem may be due to the system preventing itself from overwriting existing indices, resulting in an error such as this: `Another Kibana instance appears to be migrating the index. Waiting for that migration to complete. If no other Kibana instance is attempting migrations, you can get past this message by deleting index .kibana_2 and restarting Kibana`.

To resolve this:

1. Check that you don’t need the content.
2. Run an {{es}} [Delete index request](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-delete) to remove the existing index.

In this example, the `.kibana_2` index is the rollover of saved objects (such as Kibana visualizations or dashboards) from the original `.kibana_1` index. Since `.kibana_2` was created as part of the failed upgrade process, this index does not yet contain any pertinent data and it can safely be deleted.

3. Retry the deployment configuration change.


## Insufficient storage [ece-config-change-errors-insufficient-storage]

Configuration change errors can occur when there is insufficient disk space for a data tier. To resolve this, increase the size of that tier so that it provides enough storage to accommodate the data in your cluster, taking the [high watermark](elasticsearch://reference/elasticsearch/configuration-reference/cluster-level-shard-allocation-routing-settings.md#disk-based-shard-allocation) into account. For a troubleshooting walkthrough, see [Fix watermark errors](/troubleshoot/elasticsearch/fix-watermark-errors.md).
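As a hypothetical sketch of the threshold check (the 90% default comes from `cluster.routing.allocation.disk.watermark.high`; the helper is illustrative, not an Elastic tool):

```python
# Default high watermark: 90% disk usage.
DEFAULT_HIGH_WATERMARK = 0.90

def exceeds_high_watermark(used_bytes: int, total_bytes: int,
                           high: float = DEFAULT_HIGH_WATERMARK) -> bool:
    """Persistent breaches of the high watermark surface as failed
    configuration changes; resize the tier before retrying the plan."""
    return used_bytes / total_bytes > high

print(exceeds_high_watermark(95, 100))  # -> True
print(exceeds_high_watermark(50, 100))  # -> False
```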
31 changes: 0 additions & 31 deletions troubleshoot/deployments/serverless-status.md

This file was deleted.

30 changes: 25 additions & 5 deletions troubleshoot/deployments/serverless.md
@@ -1,15 +1,35 @@
---
navigation_title: "Serverless"
navigation_title: "Serverless status"
applies_to:
serverless: all
mapped_pages:
- https://www.elastic.co/guide/en/serverless/current/general-serverless-status.html
---

# Troubleshoot {{serverless-full}}
# Check Serverless status and get updates [general-serverless-status]

Use the topics in this section to troubleshoot {{serverless-full}}:
Serverless projects run on cloud platforms, whose availability can change. Whenever availability changes, Elastic publishes a current service status.

To check current and past service availability, go to the Elastic [service status](https://status.elastic.co/?section=serverless) page.


## Subscribe to updates [general-serverless-status-subscribe-to-updates]

You can be notified about changes to the service status automatically.

To receive service status updates:

1. Go to the Elastic [service status](https://status.elastic.co/?section=serverless) page.
2. Select **SUBSCRIBE TO UPDATES**.
3. Choose one of the following notification methods:

* Email
* Slack
* Atom or RSS feeds


After you subscribe, you’ll be notified whenever a service status update is posted.

* [](/troubleshoot/deployments/serverless-status.md)
* [](/troubleshoot/deployments/esf/elastic-serverless-forwarder.md)

## Additional resources
* [Troubleshooting overview](/troubleshoot/index.md)
7 changes: 1 addition & 6 deletions troubleshoot/ingest.md
@@ -7,14 +7,9 @@ applies_to:

# Troubleshoot ingestion tools

:::{admonition} WIP
⚠️ **This page is a work in progress.** ⚠️

The documentation team is working on restructuring this section.
:::

Use the topics in this section to troubleshoot ingestion tools:

* [](/troubleshoot/ingest/logstash.md)
* [](/troubleshoot/ingest/fleet/fleet-elastic-agent.md)
* [](/troubleshoot/ingest/beats-loggingplugin/elastic-logging-plugin-for-docker.md)
* [](/troubleshoot/ingest/elastic-serverless-forwarder.md)
12 changes: 7 additions & 5 deletions troubleshoot/monitoring/deployment-health-warnings.md
@@ -3,10 +3,8 @@ navigation_title: "Deployment health warnings"
applies_to:
deployment:
ess: all
ece: all
mapped_pages:
- https://www.elastic.co/guide/en/cloud/current/ec-deployment-no-op.html
- https://www.elastic.co/guide/en/cloud-enterprise/current/ece-deployment-no-op.html
- https://www.elastic.co/guide/en/cloud-heroku/current/ech-deployment-no-op.html
---

@@ -18,13 +16,13 @@ The {{ecloud}} [Deployments](https://cloud.elastic.co/deployments) page shows th
:alt: A screen capture of the deployment page showing a typical warning: Deployment health warning: Latest change to {{es}} configuration failed.
:::

**Seeing only one warning?**
**Single warning**

To resolve a single health warning, we recommend first re-applying any pending changes: Select **Edit** in the deployment menu to open the Edit page and then click **Save** without making any changes. This will check all components for pending changes and will apply the changes as needed. This may impact the uptime of clusters which are not [highly available](/deploy-manage/production-guidance/availability-and-resilience/resilience-in-ech.md).

Re-saving the deployment configuration without making any changes is often all that’s needed to resolve a transient health warning on the UI. Saving will redirect you to the {{ech}} deployment [Activity page](/deploy-manage/deploy/elastic-cloud/keep-track-of-deployment-activity.md) where you can monitor plan completion. Repeat errors should be investigated; for more information refer to [resolving configuration change errors](/troubleshoot/monitoring/node-bootlooping.md).

**Seeing multiple warnings?**
**Multiple warnings**

If multiple health warnings appear for one of your deployments, or if your deployment is unhealthy, we recommend [Getting help](/troubleshoot/index.md) through the Elastic Support Portal.

@@ -34,4 +32,8 @@ If the warning refers to a system change, check the deployment’s [Activity](/d

:::{important}
If you’re using Elastic Cloud Hosted, then you can use AutoOps to monitor your cluster. AutoOps significantly simplifies cluster management with performance recommendations, resource utilization visibility, and real-time issue detection with resolution paths. For more information, refer to [Monitor with AutoOps](/deploy-manage/monitor/autoops.md).
:::
:::

## Additional resources
* [Elastic Cloud Enterprise deployment health warnings](/troubleshoot/deployments/cloud-enterprise/deployment-health-warnings.md)
* [Troubleshooting overview](/troubleshoot/index.md)
6 changes: 2 additions & 4 deletions troubleshoot/monitoring/node-bootlooping.md
@@ -3,14 +3,12 @@ navigation_title: "Node bootlooping"
applies_to:
deployment:
ess: all
ece: all
mapped_pages:
- https://www.elastic.co/guide/en/cloud/current/ec-config-change-errors.html
- https://www.elastic.co/guide/en/cloud-enterprise/current/ece-config-change-errors.html
- https://www.elastic.co/guide/en/cloud-heroku/current/ech-config-change-errors.html
---

# Troubleshoot node bootlooping [ec-config-change-errors]
# Troubleshoot node bootlooping in {{ech}} [ec-config-change-errors]

When you attempt to apply a configuration change to a deployment, the attempt may fail with an error indicating that the change could not be applied, and deployment resources may be unable to restart. In some cases, bootlooping may result, where the deployment resources cycle through a continual reboot process.

@@ -131,7 +129,7 @@ Configuration change errors can occur when there is insufficient RAM configured

Check the [{{es}} cluster size](/deploy-manage/deploy/elastic-cloud/ec-customize-deployment-components.md#ec-cluster-size) and the [JVM memory pressure indicator](/deploy-manage/monitor/ec-memory-pressure.md) documentation to learn more.

As well, you can read our detailed blog [Managing and troubleshooting {{es}} memory](https://www.elastic.co/blog/managing-and-troubleshooting-elasticsearch-memory).
You can also read our detailed blog [Managing and troubleshooting {{es}} memory](https://www.elastic.co/blog/managing-and-troubleshooting-elasticsearch-memory).


## Existing index [ec-config-change-errors-existing-index]