% Internal links rely on the following IDs being on this page (e.g. as a heading ID, paragraph ID, etc):
$$$enable-knowledge-base$$$

$$$knowledge-base-add-knowledge-index$$$
AI Assistant’s Knowledge Base feature enables AI Assistant to recall specific documents and other specified information. This information, which can include everything from the location of your datacenters to the latest threat research, provides additional context that can improve the quality of AI Assistant’s responses to your queries. This topic describes how to enable and add information to Knowledge Base.
::::{note}
{{stack}} users: when you upgrade from {{elastic-sec}} version 8.15 to a newer version, information previously stored by AI Assistant will be lost.
::::
::::{admonition} Requirements
* To use Knowledge Base, the `Elastic AI Assistant: All` privilege.
* To edit global Knowledge Base entries (information that will affect the AI Assistant experience for other users in the {{kib}} space), the `Allow Changes to Global Entries` privilege.
* You must [enable machine learning](/solutions/security/advanced-entity-analytics/machine-learning-job-rule-requirements.md) with a minimum ML node size of 4 GB.
::::
5. In the **Markdown text** field, enter the information you want AI Assistant to remember.
6. If it should be **Required knowledge**, select the option. Otherwise, leave it blank.

Alternatively, you can simply send a message to AI Assistant that instructs it to "Remember" the information. For example, "Remember that I changed my password today, October 24, 2024", or "Remember we always use the Threat Hunting Timeline template when investigating potential threats". Entries created in this way are private to you. By default they are not required knowledge, but you can make them required by instructing AI Assistant to "Always remember", for example "Always remember to address me as madam", or "Always remember that our primary data center is located in Austin, Texas".
Refer to the following video for an example of adding an index to Knowledge Base (click to play video).

[](https://videos.elastic.co/watch/Q5CjXMN4R2GYLGLUy5P177?)
First, you’ll need to set up a web crawler to add the desired data to an index:
1. From the **Search** section of {{kib}}, find **Web crawlers** in the navigation menu or use the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md).
2. Click **New web crawler**.
* Under **Index name**, name the index where the data from your new web crawler will be stored, for example `threat_intelligence_feed_1`. Click **Create index**.
* Under **Domain URL**, enter the URL where the web crawler should collect data. Click **Validate Domain** to test it, then **Add domain**.
3. The previous step opens a page with the details of your new index. Go to its **Mappings** tab, then click **Add field**.
::::{note}
Remember, each index added to Knowledge Base must have at least one semantic text field.
::::
* Under **Field type**, select `Semantic text`. Under **Select an inference endpoint**, select `elastic-security-ai-assistant-elser2`. Click **Add field**, then **Save mapping**. (An equivalent Dev Tools request is shown after these steps.)
4. Go to the **Scheduling** tab. Enable the **Enable recurring crawls with the following schedule** setting, and define your desired schedule.
5. Go to the **Manage Domains** tab. Select the domain associated with your new web crawler, then go to its **Crawl rules** tab and click **Add crawl rule**. For more information, refer to [Web crawler content extraction rules](https://www.elastic.co/guide/en/enterprise-search/current/crawler-extraction-rules.html).
1. Click **Add crawl rule** again. Under **Policy**, select `Disallow`. Under **Rule**, select `Regex`. Under **Path pattern**, enter `.*`. Click **Save**.
2. Under **Policy**, select `Allow`. Under **Rule**, select `Contains`. Under **Path pattern**, enter your path pattern, for example `threat-intelligence`. Click **Save**. Make sure this rule appears below the rule created in the previous step in the list.
3. Click **Crawl**, then **Crawl all domains on this index**. A success message appears. The crawl process will take longer for larger data sources. Once it finishes, your new web crawler’s index will contain documents provided by the crawler.
6. Finally, follow the instructions to [add an index to Knowledge Base](/solutions/security/ai/ai-assistant-knowledge-base.md#knowledge-base-add-knowledge-index). Add the index that contains the data from your new web crawler (`threat_intelligence_feed_1` in this example).
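For reference, the semantic text mapping from step 3 can also be expressed as a Dev Tools request. This is a minimal sketch rather than an exact reproduction of what the UI configures: `body_semantic` is a hypothetical field name, while the inference endpoint is the one named in the step above. The final request is simply a quick way to confirm, after the crawl in step 5 finishes, that the index actually contains documents.

```console
# Roughly what the "Add field" step creates: a semantic text field that uses
# the Security AI Assistant ELSER inference endpoint. "body_semantic" is an
# example name; when you add the field through the UI, Kibana handles the
# remaining setup for you.
PUT threat_intelligence_feed_1/_mapping
{
  "properties": {
    "body_semantic": {
      "type": "semantic_text",
      "inference_id": "elastic-security-ai-assistant-elser2"
    }
  }
}

# After a crawl completes, confirm the index contains documents.
GET threat_intelligence_feed_1/_count
```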
Your new threat intelligence data is now included in Knowledge Base and can inform AI Assistant’s responses.
Refer to the following video for an example of creating a web crawler to ingest threat intelligence data and adding it to Knowledge Base.
% Internal links rely on the following IDs being on this page (e.g. as a heading ID, paragraph ID, etc):
$$$configure-ai-assistant$$$

$$$ai-assistant-anonymization$$$
The Elastic AI Assistant utilizes generative AI to bolster your cybersecurity operations team. It allows users to interact with {{elastic-sec}} using natural language for tasks such as alert investigation, incident response, query generation or conversion, and much more.
::::{admonition} Requirements
* {{stack}} users: {{stack}} version 8.8.1 or later. Also note the Generative AI connector was renamed to OpenAI connector in 8.11.0.
* {{stack}} users: an [Enterprise subscription](https://www.elastic.co/pricing).
* {{serverless-short}} users: a [Security Analytics Complete subscription](/deploy-manage/deploy/elastic-cloud/project-settings.md).
* To use AI Assistant, the **Elastic AI Assistant : All** and **Actions and Connectors : Read** [privileges](/deploy-manage/users-roles/cluster-or-deployment-auth/kibana-privileges.md).
* To set up AI Assistant, the **Actions and Connectors : All** [privilege](/deploy-manage/users-roles/cluster-or-deployment-auth/kibana-privileges.md).
* A [generative AI connector](/solutions/security/ai/set-up-connectors-for-large-language-models-llm.md), which AI Assistant uses to generate responses. (An example of creating a connector through the Kibana API appears after this list.)
::::
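If you prefer to script connector setup rather than use the UI, the sketch below creates an OpenAI connector through the Kibana Connectors API from Dev Tools. It assumes the OpenAI connector type (`.gen-ai`) and the standard OpenAI chat completions endpoint; swap in your own provider URL and API key, and refer to the connector setup guide linked above for other providers and options.

```console
# A sketch only: create an OpenAI connector for AI Assistant via the Kibana API
# (the Connectors UI does the same thing). Replace the URL and API key with
# values for your own environment.
POST kbn:/api/actions/connector
{
  "name": "OpenAI for AI Assistant",
  "connector_type_id": ".gen-ai",
  "config": {
    "apiProvider": "OpenAI",
    "apiUrl": "https://api.openai.com/v1/chat/completions"
  },
  "secrets": {
    "apiKey": "<your API key>"
  }
}
```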
To modify Anonymization settings, you need the **Elastic AI Assistant: All** privilege.
The **Anonymization** tab of the Security AI settings menu allows you to define default data anonymization behavior for events you send to AI Assistant. Fields with **Allowed** toggled on are included in events provided to AI Assistant. **Allowed** fields with **Anonymized** set to **Yes** are included, but with their values obfuscated.
::::{note}
You can access anonymization settings directly from the **Attack Discovery** page by clicking the settings button next to the model selection dropdown menu.
::::
:alt: The Integrations page with the Create new integration button highlighted
:::
3. Click **Create integration**.
4. Select an [LLM connector](/solutions/security/ai/set-up-connectors-for-large-language-models-llm.md).
5. Define how your new integration will appear on the Integrations page by providing a **Title**, **Description**, and **Logo**. Click **Next**.
6. Define your integration’s package name, which will prefix the imported event fields.
7. Define your **Data stream title**, **Data stream description**, and **Data stream name**. These fields appear on the integration’s configuration page to help identify the data stream it writes to.
8. Select your [**Data collection method**](asciidocalypse://docs/beats/docs/reference/filebeat/configuration-filebeat-options.md). This determines how your new integration will ingest the data (for example, from an S3 bucket, an HTTP endpoint, or a file stream).
::::{admonition} Importing CEL data
:class: note

If you select **API (CEL input)**, you’ll have the additional option to upload the API’s OAS file here. After you do, the LLM will use it to determine which API endpoints (GET only), query parameters, and data structures to use in the new custom integration. You will then select which API endpoints to consume and your authentication method before uploading your sample data.
::::
9. Upload a sample of your data. Make sure to include all the types of events that you want the new integration to handle.
::::{admonition} Best practices for sample data
* For JSON and NDJSON samples, each object in your sample should represent an event, and you should avoid deeply nested object structures.
* The more variety in your sample, the more accurate the pipeline will be. Include a wide range of unique log entries instead of just repeating the same type of entry. Automatic Import will select up to 100 different events from your sample to use as the basis for the new integration.
* Ideally, each field name should describe what the field does.
::::
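For illustration only, a tiny NDJSON sample that follows these guidelines might look like the following: one flat object per line, each line representing a single event, with descriptive (hypothetical) field names and some variety between entries.

```ndjson
{"timestamp": "2024-10-24T10:15:00Z", "event_action": "login_failure", "source_ip": "203.0.113.10", "user_name": "jsmith", "outcome": "failure"}
{"timestamp": "2024-10-24T10:17:42Z", "event_action": "login_success", "source_ip": "203.0.113.10", "user_name": "jsmith", "outcome": "success"}
{"timestamp": "2024-10-24T11:02:19Z", "event_action": "firewall_drop", "source_ip": "198.51.100.7", "destination_port": 3389, "outcome": "blocked"}
```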
10. Click **Analyze logs**, then wait for processing to complete. This may take several minutes.
11. After processing is complete, the pipeline’s field mappings appear, including ECS and custom fields.