
Commit d29206f

Updates links pointing to ecs (#805)
## Description

Related to #673. This PR updates the links that point to the `ecs` repo from `asciidocalypse://docs/ecs/docs/reference/` to `ecs://reference/`.
1 parent ae0f58e · commit d29206f

32 files changed · +90 −90 lines changed

deploy-manage/monitor/logging-configuration/kibana-log-settings-examples.md

Lines changed: 1 addition & 1 deletion
@@ -30,7 +30,7 @@ logging:
 
 ## Log in JSON format [log-in-json-ECS-example]
 
-Log the default log format to JSON layout instead of pattern (the default). With `json` layout, log messages will be formatted as JSON strings in [ECS format](asciidocalypse://docs/ecs/docs/reference/index.md) that includes a timestamp, log level, logger, message text and any other metadata that may be associated with the log message itself.
+Log the default log format to JSON layout instead of pattern (the default). With `json` layout, log messages will be formatted as JSON strings in [ECS format](ecs://reference/index.md) that includes a timestamp, log level, logger, message text and any other metadata that may be associated with the log message itself.
 
 ```yaml
 logging:
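For context, the hunk above documents {{kib}}'s `json` layout. A minimal sketch of enabling it in `kibana.yml`, assuming the standard `appenders`/`root` logging keys; the appender name and file path are illustrative:

```yaml
logging:
  appenders:
    json-file:                        # illustrative appender name
      type: file
      fileName: /var/log/kibana.json  # illustrative path
      layout:
        type: json                    # ECS-formatted JSON instead of the default pattern layout
  root:
    appenders: [json-file]
    level: info
```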

deploy-manage/monitor/logging-configuration/kibana-logging.md

Lines changed: 1 addition & 1 deletion
@@ -99,7 +99,7 @@ The pattern layout also offers a `highlight` option that allows you to highlight
 
 ### JSON layout [json-layout]
 
-With `json` layout log messages will be formatted as JSON strings in [ECS format](asciidocalypse://docs/ecs/docs/reference/index.md) that includes a timestamp, log level, logger, message text and any other metadata that may be associated with the log message itself.
+With `json` layout log messages will be formatted as JSON strings in [ECS format](ecs://reference/index.md) that includes a timestamp, log level, logger, message text and any other metadata that may be associated with the log message itself.
 
 
 ## Logger hierarchy [logger-hierarchy]
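To illustrate what the linked ECS format looks like on disk, here is a hypothetical `json`-layout record; the values are invented and the exact field set varies by logger:

```json
{
  "@timestamp": "2023-11-07T09:39:01.012Z",
  "log.level": "info",
  "log": { "logger": "plugins.security" },
  "message": "Server running at http://localhost:5601",
  "ecs": { "version": "8.11.0" }
}
```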

deploy-manage/production-guidance.md

Lines changed: 1 addition & 1 deletion
@@ -13,7 +13,7 @@ This section provides some best practices for managing your data to help you set
 
 * Build a [data architecture](/manage-data/lifecycle/data-tiers.md) that best fits your needs. Your {{ech}} deployment comes with default hot tier {{es}} nodes that store your most frequently accessed data. Based on your own access and retention policies, you can add warm, cold, frozen data tiers, and automated deletion of old data.
 * Make your data [highly available](/deploy-manage/tools.md) for production environments or otherwise critical data stores, and take regular [backup snapshots](tools/snapshot-and-restore.md).
-* Normalize event data to better analyze, visualize, and correlate your events by adopting the [Elastic Common Schema](asciidocalypse://docs/ecs/docs/reference/ecs-getting-started.md) (ECS). Elastic integrations use ECS out-of-the-box. If you are writing your own integrations, ECS is recommended.
+* Normalize event data to better analyze, visualize, and correlate your events by adopting the [Elastic Common Schema](ecs://reference/ecs-getting-started.md) (ECS). Elastic integrations use ECS out-of-the-box. If you are writing your own integrations, ECS is recommended.
 
 
 ## Optimize data storage and retention [ec_optimize_data_storage_and_retention]

explore-analyze/machine-learning/anomaly-detection/ml-configuring-categories.md

Lines changed: 1 addition & 1 deletion
@@ -84,7 +84,7 @@ Another advanced option is the `categorization_filters` property, which can cont
 
 ## Per-partition categorization [ml-per-partition-categorization]
 
-If you enable per-partition categorization, categories are determined independently for each partition. For example, if your data includes messages from multiple types of logs from different applications, you can use a field like the ECS [`event.dataset` field](asciidocalypse://docs/ecs/docs/reference/ecs-event.md) as the `partition_field_name` and categorize the messages for each type of log separately.
+If you enable per-partition categorization, categories are determined independently for each partition. For example, if your data includes messages from multiple types of logs from different applications, you can use a field like the ECS [`event.dataset` field](ecs://reference/ecs-event.md) as the `partition_field_name` and categorize the messages for each type of log separately.
 
 If your job has multiple detectors, every detector that uses the `mlcategory` keyword must also define a `partition_field_name`. You must use the same `partition_field_name` value in all of these detectors. Otherwise, when you create or update a job and enable per-partition categorization, it fails.
 
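For context, a hedged sketch of a job shaped like the hunk describes: per-partition categorization enabled, with `event.dataset` as the partition field in a detector that counts by `mlcategory`. The job ID and bucket span are illustrative:

```console
PUT _ml/anomaly_detectors/log-categorization-by-dataset
{
  "analysis_config": {
    "bucket_span": "15m",
    "categorization_field_name": "message",
    "per_partition_categorization": { "enabled": true },
    "detectors": [
      {
        "function": "count",
        "by_field_name": "mlcategory",
        "partition_field_name": "event.dataset"
      }
    ]
  },
  "data_description": { "time_field": "@timestamp" }
}
```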

explore-analyze/transforms/transform-checkpoints.md

Lines changed: 1 addition & 1 deletion
@@ -39,7 +39,7 @@ If the cluster experiences unsuitable performance degradation due to the {{trans
 
 ## Using the ingest timestamp for syncing the {{transform}} [sync-field-ingest-timestamp]
 
-In most cases, it is strongly recommended to use the ingest timestamp of the source indices for syncing the {{transform}}. This is the most optimal way for {{transforms}} to be able to identify new changes. If your data source follows the [ECS standard](asciidocalypse://docs/ecs/docs/reference/index.md), you might already have an [`event.ingested`](asciidocalypse://docs/ecs/docs/reference/ecs-event.md#field-event-ingested) field. In this case, use `event.ingested` as the `sync`.`time`.`field` property of your {{transform}}.
+In most cases, it is strongly recommended to use the ingest timestamp of the source indices for syncing the {{transform}}. This is the most optimal way for {{transforms}} to be able to identify new changes. If your data source follows the [ECS standard](ecs://reference/index.md), you might already have an [`event.ingested`](ecs://reference/ecs-event.md#field-event-ingested) field. In this case, use `event.ingested` as the `sync`.`time`.`field` property of your {{transform}}.
 
 If you don’t have a `event.ingested` field or it isn’t populated, you can set it by using an ingest pipeline. Create an ingest pipeline either using the [ingest pipeline API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ingest-put-pipeline) (like the example below) or via {{kib}} under **Stack Management > Ingest Pipelines**. Use a [`set` processor](elasticsearch://reference/ingestion-tools/enrich-processor/set-processor.md) to set the field and associate it with the value of the ingest timestamp.
 
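A minimal sketch of the `set` processor approach the hunk describes, using the documented `_ingest.timestamp` metadata field; the pipeline name is illustrative:

```console
PUT _ingest/pipeline/set-ingest-timestamp
{
  "description": "Sets event.ingested to the time the document was ingested",
  "processors": [
    {
      "set": {
        "field": "event.ingested",
        "value": "{{{_ingest.timestamp}}}"
      }
    }
  ]
}
```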

manage-data/ingest/ingesting-data-from-applications/ingest-logs-from-python-application-using-filebeat.md

Lines changed: 1 addition & 1 deletion
@@ -115,7 +115,7 @@ In this step, you’ll create a Python script that generates logs in JSON format
 
 Having your logs written in a JSON format with ECS fields allows for easy parsing and analysis, and for standardization with other applications. A standard, easily parsible format becomes increasingly important as the volume and type of data captured in your logs expands over time.
 
-Together with the standard fields included for each log entry is an extra *http.request.body.content* field. This extra field is there just to give you some additional, interesting data to work with, and also to demonstrate how you can add optional fields to your log data. Check the [ECS Field Reference](asciidocalypse://docs/ecs/docs/reference/ecs-field-reference.md) for the full list of available fields.
+Together with the standard fields included for each log entry is an extra *http.request.body.content* field. This extra field is there just to give you some additional, interesting data to work with, and also to demonstrate how you can add optional fields to your log data. Check the [ECS Field Reference](ecs://reference/ecs-field-reference.md) for the full list of available fields.
 
 2. Let’s give the Python script a test run. Open a terminal instance in the location where you saved *elvis.py* and run the following:
 
manage-data/ingest/transform-enrich/ingest-pipelines-serverless.md

Lines changed: 1 addition & 1 deletion
@@ -33,7 +33,7 @@ In **{{project-settings}} → {{manage-app}} → {{ingest-pipelines-app}}**, you
 
 To create a pipeline, click **Create pipeline → New pipeline**. For an example tutorial, see [Example: Parse logs](example-parse-logs.md).
 
-The **New pipeline from CSV** option lets you use a file with comma-separated values (CSV) to create an ingest pipeline that maps custom data to the Elastic Common Schema (ECS). Mapping your custom data to ECS makes the data easier to search and lets you reuse visualizations from other data sets. To get started, check [Map custom data to ECS](asciidocalypse://docs/ecs/docs/reference/ecs-converting.md).
+The **New pipeline from CSV** option lets you use a file with comma-separated values (CSV) to create an ingest pipeline that maps custom data to the Elastic Common Schema (ECS). Mapping your custom data to ECS makes the data easier to search and lets you reuse visualizations from other data sets. To get started, check [Map custom data to ECS](ecs://reference/ecs-converting.md).
 
 
 ## Test pipelines [ingest-pipelines-test-pipelines]

manage-data/ingest/transform-enrich/ingest-pipelines.md

Lines changed: 1 addition & 1 deletion
@@ -45,7 +45,7 @@ In {{kib}}, open the main menu and click **Stack Management > Ingest Pipelines**
 To create a pipeline, click **Create pipeline > New pipeline**. For an example tutorial, see [Example: Parse logs](example-parse-logs.md).
 
 ::::{tip}
-The **New pipeline from CSV** option lets you use a CSV to create an ingest pipeline that maps custom data to the [Elastic Common Schema (ECS)](https://www.elastic.co/guide/en/ecs/current). Mapping your custom data to ECS makes the data easier to search and lets you reuse visualizations from other datasets. To get started, check [Map custom data to ECS](asciidocalypse://docs/ecs/docs/reference/ecs-converting.md).
+The **New pipeline from CSV** option lets you use a CSV to create an ingest pipeline that maps custom data to the [Elastic Common Schema (ECS)](https://www.elastic.co/guide/en/ecs/current). Mapping your custom data to ECS makes the data easier to search and lets you reuse visualizations from other datasets. To get started, check [Map custom data to ECS](ecs://reference/ecs-converting.md).
 ::::
 
 
raw-migrated-files/docs-content/serverless/observability-plaintext-application-logs.md

Lines changed: 4 additions & 4 deletions
@@ -257,7 +257,7 @@ Also, refer to [{{filebeat}} and systemd](asciidocalypse://docs/beats/docs/refer
 
 #### Step 5: Parse logs with an ingest pipeline [observability-plaintext-application-logs-step-5-parse-logs-with-an-ingest-pipeline]
 
-Use an ingest pipeline to parse the contents of your logs into structured, [Elastic Common Schema (ECS)](asciidocalypse://docs/ecs/docs/reference/index.md)-compatible fields.
+Use an ingest pipeline to parse the contents of your logs into structured, [Elastic Common Schema (ECS)](ecs://reference/index.md)-compatible fields.
 
 Create an ingest pipeline with a [dissect processor](elasticsearch://reference/ingestion-tools/enrich-processor/dissect-processor.md) to extract structured ECS fields from your log messages. In your project, go to **Developer Tools** and use a command similar to the following example:
 
@@ -279,7 +279,7 @@ PUT _ingest/pipeline/filebeat* <1>
 1. `_ingest/pipeline/filebeat*`: The name of the pipeline. Update the pipeline name to match the name of your data stream. For more information, refer to [Data stream naming scheme](/reference/ingestion-tools/fleet/data-streams.md#data-streams-naming-scheme).
 2. `processors.dissect`: Adds a [dissect processor](elasticsearch://reference/ingestion-tools/enrich-processor/dissect-processor.md) to extract structured fields from your log message.
 3. `field`: The field you’re extracting data from, `message` in this case.
-4. `pattern`: The pattern of the elements in your log data. The pattern varies depending on your log format. `%{@timestamp}`, `%{log.level}`, `%{host.ip}`, and `%{{message}}` are common [ECS](asciidocalypse://docs/ecs/docs/reference/index.md) fields. This pattern would match a log file in this format: `2023-11-07T09:39:01.012Z ERROR 192.168.1.110 Server hardware failure detected.`
+4. `pattern`: The pattern of the elements in your log data. The pattern varies depending on your log format. `%{@timestamp}`, `%{log.level}`, `%{host.ip}`, and `%{{message}}` are common [ECS](ecs://reference/index.md) fields. This pattern would match a log file in this format: `2023-11-07T09:39:01.012Z ERROR 192.168.1.110 Server hardware failure detected.`
 
 
 Refer to [Extract structured fields](../../../solutions/observability/logs/parse-route-logs.md#observability-parse-log-data-extract-structured-fields) for more on using ingest pipelines to parse your log data.
@@ -338,7 +338,7 @@ You can add additional settings to the integration under **Custom log file** by
 
 #### Step 2: Add an ingest pipeline to your integration [observability-plaintext-application-logs-step-2-add-an-ingest-pipeline-to-your-integration]
 
-To aggregate or search for information in plaintext logs, use an ingest pipeline with your integration to parse the contents of your logs into structured, [Elastic Common Schema (ECS)](asciidocalypse://docs/ecs/docs/reference/index.md)-compatible fields.
+To aggregate or search for information in plaintext logs, use an ingest pipeline with your integration to parse the contents of your logs into structured, [Elastic Common Schema (ECS)](ecs://reference/index.md)-compatible fields.
 
 1. From the custom logs integration, select **Integration policies** tab.
 2. Select the integration policy you created in the previous section.
@@ -364,7 +364,7 @@ To aggregate or search for information in plaintext logs, use an ingest pipeline
 
 1. `processors.dissect`: Adds a [dissect processor](elasticsearch://reference/ingestion-tools/enrich-processor/dissect-processor.md) to extract structured fields from your log message.
 2. `field`: The field you’re extracting data from, `message` in this case.
-3. `pattern`: The pattern of the elements in your log data. The pattern varies depending on your log format. `%{@timestamp}`, `%{log.level}`, `%{host.ip}`, and `%{{message}}` are common [ECS](asciidocalypse://docs/ecs/docs/reference/index.md) fields. This pattern would match a log file in this format: `2023-11-07T09:39:01.012Z ERROR 192.168.1.110 Server hardware failure detected.`
+3. `pattern`: The pattern of the elements in your log data. The pattern varies depending on your log format. `%{@timestamp}`, `%{log.level}`, `%{host.ip}`, and `%{{message}}` are common [ECS](ecs://reference/index.md) fields. This pattern would match a log file in this format: `2023-11-07T09:39:01.012Z ERROR 192.168.1.110 Server hardware failure detected.`
 
 6. Click **Create pipeline**.
 7. Save and deploy your integration.
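For context, a hedged sketch of the dissect pipeline those annotations describe, matching the sample line `2023-11-07T09:39:01.012Z ERROR 192.168.1.110 Server hardware failure detected.`; the pipeline name is illustrative, and the pattern assumes the docs-source `%{{message}}` renders as plain dissect syntax `%{message}`:

```console
PUT _ingest/pipeline/filebeat-example
{
  "description": "Extracts the timestamp, log level, host IP, and message from plaintext logs",
  "processors": [
    {
      "dissect": {
        "field": "message",
        "pattern": "%{@timestamp} %{log.level} %{host.ip} %{message}"
      }
    }
  ]
}
```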

reference/ecs.md

Lines changed: 2 additions & 2 deletions
@@ -4,6 +4,6 @@ navigation_title: ECS
 # Elastic Common Schema
 
 Elastic Common Schema (ECS) defines a common set of fields for ingesting data into Elasticsearch.
-For field details and usage information, refer to [](asciidocalypse://docs/ecs/docs/reference/index.md).
+For field details and usage information, refer to [](ecs://reference/index.md).
 
-ECS loggers are plugins for your favorite logging libraries, which help you to format your logs into ECS-compatible JSON. Check out [](asciidocalypse://docs/ecs/docs/reference/intro.md).
+ECS loggers are plugins for your favorite logging libraries, which help you to format your logs into ECS-compatible JSON. Check out [](ecs://reference/index.md).

reference/ingestion-tools/fleet/kafka-output-settings.md

Lines changed: 1 addition & 1 deletion
@@ -51,7 +51,7 @@ Use this option to set the Kafka topic for each {{agent}} event.
 
 | | |
 | --- | --- |
-| $$$kafka-output-topics-default$$$<br>**Default topic**<br> | Set a default topic to use for events sent by {{agent}} to the Kafka output.<br><br>You can set a static topic, for example `elastic-agent`, or you can choose to set a topic dynamically based on an [Elastic Common Scheme (ECS)][Elastic Common Schema (ECS)](asciidocalypse://docs/ecs/docs/reference/index.md)) field. Available fields include:<br><br>* `data_stream_type`<br>* `data_stream.dataset`<br>* `data_stream.namespace`<br>* `@timestamp`<br>* `event-dataset`<br><br>You can also set a custom field. This is useful if you’re using the [`add_fields` processor](/reference/ingestion-tools/fleet/add_fields-processor.md) as part of your {{agent}} input. Otherwise, setting a custom field is not recommended.<br> |
+| $$$kafka-output-topics-default$$$<br>**Default topic**<br> | Set a default topic to use for events sent by {{agent}} to the Kafka output.<br><br>You can set a static topic, for example `elastic-agent`, or you can choose to set a topic dynamically based on an [Elastic Common Scheme (ECS)][Elastic Common Schema (ECS)](ecs://reference/index.md)) field. Available fields include:<br><br>* `data_stream_type`<br>* `data_stream.dataset`<br>* `data_stream.namespace`<br>* `@timestamp`<br>* `event-dataset`<br><br>You can also set a custom field. This is useful if you’re using the [`add_fields` processor](/reference/ingestion-tools/fleet/add_fields-processor.md) as part of your {{agent}} input. Otherwise, setting a custom field is not recommended.<br> |
 
 
 ### Header settings [_header_settings]
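For context, a loose sketch of what a dynamic topic might look like in a standalone {{agent}} Kafka output; the brokers are illustrative, and the Beats-style event format-string syntax for the field reference is an assumption here:

```yaml
outputs:
  default:
    type: kafka
    hosts: ["kafka1:9092", "kafka2:9092"]  # illustrative brokers
    # Assumption: route each event to a topic named after its dataset,
    # for example "elastic_agent.filebeat".
    topic: "%{[data_stream.dataset]}"
```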

reference/observability/fields-and-object-schemas.md

Lines changed: 1 addition & 1 deletion
@@ -9,7 +9,7 @@ This section lists Elastic Common Schema (ECS) fields the Logs and Infrastructur
 
 ECS is an open source specification that defines a standard set of fields to use when storing event data in {{es}}, such as logs and metrics.
 
-Beat modules (for example, [{{filebeat}} modules](asciidocalypse://docs/beats/docs/reference/filebeat/filebeat-modules.md)) are ECS-compliant, so manual field mapping is not required, and all data is populated automatically in the Logs and Infrastructure apps. If you cannot use {{beats}}, map your data to [ECS fields](asciidocalypse://docs/ecs/docs/reference/ecs-converting.md)). You can also try using the experimental [ECS Mapper](https://github.com/elastic/ecs-mapper) tool.
+Beat modules (for example, [{{filebeat}} modules](asciidocalypse://docs/beats/docs/reference/filebeat/filebeat-modules.md)) are ECS-compliant, so manual field mapping is not required, and all data is populated automatically in the Logs and Infrastructure apps. If you cannot use {{beats}}, map your data to [ECS fields](ecs://reference/ecs-converting.md)). You can also try using the experimental [ECS Mapper](https://github.com/elastic/ecs-mapper) tool.
 
 This reference covers:
 
reference/observability/fields-and-object-schemas/logs-app-fields.md

Lines changed: 1 addition & 1 deletion
@@ -5,7 +5,7 @@ mapped_pages:
 
 # Logs Explorer fields [logs-app-fields]
 
-This section lists the required fields the **Logs Explorer** uses to display data. Please note that some of the fields listed are not [ECS fields](asciidocalypse://docs/ecs/docs/reference/index.md#_what_is_ecs).
+This section lists the required fields the **Logs Explorer** uses to display data. Please note that some of the fields listed are not [ECS fields](ecs://reference/index.md#_what_is_ecs).
 
 `@timestamp`
 : Date/time when the event originated.

reference/observability/fields-and-object-schemas/metrics-app-fields.md

Lines changed: 1 addition & 1 deletion
@@ -5,7 +5,7 @@ mapped_pages:
 
 # Infrastructure app fields [metrics-app-fields]
 
-This section lists the required fields the {{infrastructure-app}} uses to display data. Please note that some of the fields listed are not [ECS fields](asciidocalypse://docs/ecs/docs/reference/index.md#_what_is_ecs).
+This section lists the required fields the {{infrastructure-app}} uses to display data. Please note that some of the fields listed are not [ECS fields](ecs://reference/index.md#_what_is_ecs).
 
 
 ## Additional field details [_additional_field_details]

reference/observability/serverless/infrastructure-app-fields.md

Lines changed: 1 addition & 1 deletion
@@ -5,7 +5,7 @@ mapped_pages:
 
 # Infrastructure app fields [observability-infrastructure-monitoring-required-fields]
 
-This section lists the fields the Infrastructure UI uses to display data. Please note that some of the fields listed here are not [ECS fields](asciidocalypse://docs/ecs/docs/reference/index.md#_what_is_ecs).
+This section lists the fields the Infrastructure UI uses to display data. Please note that some of the fields listed here are not [ECS fields](ecs://reference/index.md#_what_is_ecs).
 
 
 ## Additional field details [observability-infrastructure-monitoring-required-fields-additional-field-details]
