
Commit 34f41b6

Update content to Antelope and misc changes
1 parent ba4f72f commit 34f41b6

4 files changed: +33, -68 lines

doc/source/configuration/wazuh.rst

Lines changed: 18 additions & 18 deletions
@@ -34,14 +34,14 @@ Provisioning an infra VM for Wazuh Manager.
 Kayobe supports :kayobe-doc:`provisioning infra VMs <deployment.html#infrastructure-vms>`.
 The following configuration may be used as a guide. Config for infra VMs is documented :kayobe-doc:`here <configuration/reference/infra-vms>`.

-Add a Wazuh Manager host to the ``wazuh-manager`` group in ``etc/kayobe/inventory/hosts``.
+Add a Wazuh Manager host to the ``wazuh-manager`` group in ``$KAYOBE_CONFIG_PATH/inventory/hosts``.

 .. code-block:: ini

    [wazuh-manager]
    os-wazuh

-Add the ``wazuh-manager`` group to the ``infra-vms`` group in ``etc/kayobe/inventory/groups``.
+Add the ``wazuh-manager`` group to the ``infra-vms`` group in ``$KAYOBE_CONFIG_PATH/inventory/groups``.

 .. code-block:: ini

@@ -50,7 +50,7 @@ Add the ``wazuh-manager`` group to the ``infra-vms`` group in ``etc/kayobe/inven
    [infra-vms:children]
    wazuh-manager

-Define VM sizing in ``etc/kayobe/inventory/group_vars/wazuh-manager/infra-vms``:
+Define VM sizing in ``$KAYOBE_CONFIG_PATH/inventory/group_vars/wazuh-manager/infra-vms``:

 .. code-block:: yaml

@@ -64,7 +64,7 @@ Define VM sizing in ``etc/kayobe/inventory/group_vars/wazuh-manager/infra-vms``:
    # Capacity of the infra VM data volume.
    infra_vm_data_capacity: "200G"

-Optional: define LVM volumes in ``etc/kayobe/inventory/group_vars/wazuh-manager/lvm``.
+Optional: define LVM volumes in ``$KAYOBE_CONFIG_PATH/inventory/group_vars/wazuh-manager/lvm``.
 ``/var/ossec`` often requires greater storage space, and ``/var/lib/wazuh-indexer``
 may be beneficial too.
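For illustration, such an LVM definition might look like the sketch below, assuming Kayobe's standard ``infra_vm_lvm_groups`` format; the volume group, device and sizes here are hypothetical and not taken from this commit.

.. code-block:: yaml

   ---
   # Hypothetical example - adjust devices, names and sizes to your environment.
   infra_vm_lvm_groups:
     - vgname: vg-wazuh
       disks:
         - /dev/vdb
       create: true
       lvnames:
         - lvname: lv-ossec
           size: 80%VG
           create: true
           filesystem: ext4
           mount: true
           mntp: /var/ossec
         - lvname: lv-indexer
           size: 20%VG
           create: true
           filesystem: ext4
           mount: true
           mntp: /var/lib/wazuh-indexer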

@@ -86,7 +86,7 @@ may be beneficial too.
    create: true

-Define network interfaces ``etc/kayobe/inventory/group_vars/wazuh-manager/network-interfaces``:
+Define network interfaces ``$KAYOBE_CONFIG_PATH/inventory/group_vars/wazuh-manager/network-interfaces``:

 (The following is an example - the names will depend on your particular network configuration.)
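As a rough illustration, the group vars might contain entries like the following, using Kayobe's ``<network>_interface`` convention. The network and interface names here are hypothetical and must match those defined in ``networks.yml``.

.. code-block:: yaml

   ---
   # Hypothetical example - use the network names defined in networks.yml.
   admin_oc_interface: eth0
   wazuh_mgmt_interface: eth1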

@@ -98,7 +98,7 @@ Define network interfaces ``etc/kayobe/inventory/group_vars/wazuh-manager/networ

 The Wazuh manager may need to be exposed externally, in which case it may require another interface.
-This can be done as follows in ``etc/kayobe/inventory/group_vars/wazuh-manager/network-interfaces``,
+This can be done as follows in ``$KAYOBE_CONFIG_PATH/inventory/group_vars/wazuh-manager/network-interfaces``,
 with the network defined in ``networks.yml`` as usual.

 .. code-block:: yaml
@@ -190,7 +190,7 @@ Deploying Wazuh Manager services
 Setup
 -----

-To install a specific version modify the wazuh-ansible entry in ``etc/kayobe/ansible/requirements.yml``:
+To install a specific version modify the wazuh-ansible entry in ``$KAYOBE_CONFIG_PATH/ansible/requirements.yml``:

 .. code-block:: yaml

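For reference, a pinned entry might look something like the sketch below. The version shown, and whether the entry lives under ``roles`` or ``collections`` in your requirements file, are assumptions rather than values from this commit.

.. code-block:: yaml

   ---
   # Hypothetical pin - replace the version with the release you need.
   roles:
     - src: https://github.com/wazuh/wazuh-ansible
       name: wazuh-ansible
       scm: git
       version: v4.7.0
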
@@ -211,7 +211,7 @@ Edit the playbook and variables to your needs:
 Wazuh manager configuration
 ---------------------------

-Wazuh manager playbook is located in ``etc/kayobe/ansible/wazuh-manager.yml``.
+Wazuh manager playbook is located in ``$KAYOBE_CONFIG_PATH/ansible/wazuh-manager.yml``.
 Running this playbook will:

 * generate certificates for wazuh-manager
@@ -221,7 +221,7 @@ Running this playbook will:
 * setup and deploy wazuh-dashboard on wazuh-manager vm
 * copy certificates over to wazuh-manager vm

-Wazuh manager variables file is located in ``etc/kayobe/inventory/group_vars/wazuh-manager/wazuh-manager``.
+Wazuh manager variables file is located in ``$KAYOBE_CONFIG_PATH/inventory/group_vars/wazuh-manager/wazuh-manager``.

 You may need to modify some of the variables, including:

@@ -232,27 +232,27 @@ You may need to modify some of the variables, including:

 If you are using multiple environments, and you need to customise Wazuh in
 each environment, create override files in an appropriate directory,
-for example ``etc/kayobe/environments/production/inventory/group_vars/``.
+for example ``$KAYOBE_CONFIG_PATH/environments/production/inventory/group_vars/``.

 Files in which values can be overridden (in the context of Wazuh):

-- etc/kayobe/inventory/group_vars/wazuh/wazuh-manager/wazuh-manager
-- etc/kayobe/wazuh-manager.yml
-- etc/kayobe/inventory/group_vars/wazuh/wazuh-agent/wazuh-agent
+- $KAYOBE_CONFIG_PATH/inventory/group_vars/wazuh/wazuh-manager/wazuh-manager
+- $KAYOBE_CONFIG_PATH/wazuh-manager.yml
+- $KAYOBE_CONFIG_PATH/inventory/group_vars/wazuh/wazuh-agent/wazuh-agent

 You'll need to run the ``wazuh-manager.yml`` playbook again to apply the customisation.

 Secrets
 -------

 Wazuh requires that secrets or passwords are set for itself and the services with which it communicates.
-Wazuh secrets playbook is located in ``etc/kayobe/ansible/wazuh-secrets.yml``.
+Wazuh secrets playbook is located in ``$KAYOBE_CONFIG_PATH/ansible/wazuh-secrets.yml``.
 Running this playbook will generate and put pertinent security items into a secrets
 vault file, which will be placed in ``$KAYOBE_CONFIG_PATH/wazuh-secrets.yml``.
 If using environments, it ends up in ``$KAYOBE_CONFIG_PATH/environments/<env_name>/wazuh-secrets.yml``.
 Remember to encrypt!

-Wazuh secrets template is located in ``etc/kayobe/ansible/templates/wazuh-secrets.yml.j2``.
+Wazuh secrets template is located in ``$KAYOBE_CONFIG_PATH/ansible/templates/wazuh-secrets.yml.j2``.
 It will be used by the wazuh secrets playbook to generate the wazuh secrets vault file.

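For orientation only, the resulting vault file is plain YAML of generated passwords before encryption. A hypothetical sketch follows; only ``opendistro_admin_password`` is named elsewhere in this document, and the other key is an assumption.

.. code-block:: yaml

   ---
   # Hypothetical shape - real keys and values come from the wazuh-secrets.yml
   # playbook and its template; encrypt the file with Ansible Vault afterwards.
   opendistro_admin_password: "<generated>"
   wazuh_api_password: "<generated>"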

@@ -380,7 +380,7 @@ Verification
 ------------

 The Wazuh portal should be accessible on port 443 of the Wazuh
-manager’s IPs (using HTTPS, with the root CA cert in ``etc/kayobe/ansible/wazuh/certificates/wazuh-certificates/root-ca.pem``).
+manager’s IPs (using HTTPS, with the root CA cert in ``$KAYOBE_CONFIG_PATH/ansible/wazuh/certificates/wazuh-certificates/root-ca.pem``).
 The first login should be as the admin user,
 with the opendistro_admin_password password in ``$KAYOBE_CONFIG_PATH/wazuh-secrets.yml``.
 This will create the necessary indices.
@@ -392,9 +392,9 @@ Logs are in ``/var/log/wazuh-indexer/wazuh.log``. There are also logs in the jou
 Wazuh agents
 ============

-Wazuh agent playbook is located in ``etc/kayobe/ansible/wazuh-agent.yml``.
+Wazuh agent playbook is located in ``$KAYOBE_CONFIG_PATH/ansible/wazuh-agent.yml``.

-Wazuh agent variables file is located in ``etc/kayobe/inventory/group_vars/wazuh-agent/wazuh-agent``.
+Wazuh agent variables file is located in ``$KAYOBE_CONFIG_PATH/inventory/group_vars/wazuh-agent/wazuh-agent``.

 You may need to modify some variables, including:

doc/source/operations/ceph-management.rst

Lines changed: 6 additions & 6 deletions
@@ -8,14 +8,14 @@ Working with Cephadm
 This documentation provides a guide for Ceph operations. For deploying Ceph,
 please refer to the :ref:`cephadm-kayobe` documentation.

-cephadm configuration location
+Cephadm configuration location
 ------------------------------

 In the kayobe-config repository, under ``etc/kayobe/cephadm.yml`` (or in a specific
 Kayobe environment when using multiple environments, e.g.
 ``etc/kayobe/environments/<Environment Name>/cephadm.yml``)

-StackHPC's cephadm Ansible collection relies on multiple inventory groups:
+StackHPC's Cephadm Ansible collection relies on multiple inventory groups:

 - ``mons``
 - ``mgrs``
@@ -24,11 +24,11 @@ StackHPC's cephadm Ansible collection relies on multiple inventory groups:

 Those groups are usually defined in ``etc/kayobe/inventory/groups``.

-Running cephadm playbooks
+Running Cephadm playbooks
 -------------------------

 In the kayobe-config repository, under ``etc/kayobe/ansible``, there is a set of
-cephadm based playbooks utilising stackhpc.cephadm Ansible Galaxy collection.
+Cephadm-based playbooks utilising the stackhpc.cephadm Ansible Galaxy collection.

 - ``cephadm.yml`` - runs the end-to-end process starting with deployment and
   defining EC profiles/crush rules/pools and users
@@ -176,11 +176,11 @@ Remove the OSD using Ceph orchestrator command:
    ceph orch osd rm <ID> --replace

 After removing OSDs, if the drives the OSDs were deployed on once again become
-available, cephadm may automatically try to deploy more OSDs on these drives if
+available, Cephadm may automatically try to deploy more OSDs on these drives if
 they match an existing drivegroup spec.
 If this is not your desired action plan, it's best to modify the drivegroup
 spec beforehand (``cephadm_osd_spec`` variable in ``etc/kayobe/cephadm.yml``).
-Either set ``unmanaged: true`` to stop cephadm from picking up new disks or
+Either set ``unmanaged: true`` to stop Cephadm from picking up new disks or
 modify it in some way that it no longer matches the drives you want to remove.

 Host maintenance
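To illustrate the ``unmanaged`` setting mentioned above, a ``cephadm_osd_spec`` might be adjusted roughly as follows. The placement and device filters are placeholders, not values from this repository.

.. code-block:: yaml

   ---
   # Hypothetical OSD spec - stops Cephadm from automatically deploying OSDs
   # on newly available drives until unmanaged is removed or set to false.
   cephadm_osd_spec:
     service_type: osd
     service_id: osd_spec_default
     placement:
       host_pattern: "*"
     data_devices:
       all: true
     unmanaged: true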

doc/source/operations/control-plane-operation.rst

Lines changed: 6 additions & 41 deletions
@@ -26,7 +26,7 @@ Monitoring
 ----------

 * `Back up InfluxDB <https://docs.influxdata.com/influxdb/v1.8/administration/backup_and_restore/>`__
-* `Back up ElasticSearch <https://www.elastic.co/guide/en/elasticsearch/reference/current/backup-cluster-data.html>`__
+* `Back up OpenSearch <https://opensearch.org/docs/latest/tuning-your-cluster/availability-and-recovery/snapshots/snapshot-restore/>`__
 * `Back up Prometheus <https://prometheus.io/docs/prometheus/latest/querying/api/#snapshot>`__

 Seed
@@ -42,8 +42,8 @@ Ansible control host
 Control Plane Monitoring
 ========================

-The control plane has been configured to collect logs centrally using the EFK
-stack (Elasticsearch, Fluentd and Kibana).
+The control plane has been configured to collect logs centrally using Fluentd,
+OpenSearch and OpenSearch Dashboards.

 Telemetry monitoring of the control plane is performed by Prometheus. Metrics
 are collected by Prometheus exporters, which are either running on all hosts
@@ -227,7 +227,7 @@ Overview
 * Remove the node from maintenance mode in bifrost
 * Bifrost should automatically power on the node via IPMI
 * Check that all docker containers are running
-* Check Kibana for any messages with log level ERROR or equivalent
+* Check OpenSearch Dashboards for any messages with log level ERROR or equivalent

 Controllers
 -----------
@@ -277,7 +277,7 @@ Stop all Docker containers:

 .. code-block:: console

-   monitoring0# for i in `docker ps -q`; do docker stop $i; done
+   monitoring0# for i in $(docker ps -a --format '{{.Names}}'); do systemctl stop kolla-$i-container; done

 Shut down the node:

@@ -342,21 +342,6 @@ Host packages can be updated with:

 See https://docs.openstack.org/kayobe/latest/administration/overcloud.html#updating-packages

-Upgrading OpenStack Services
-----------------------------
-
-* Update tags for the images in ``etc/kayobe/kolla-image-tags.yml``
-* Pull container images to overcloud hosts with ``kayobe overcloud container image pull``
-* Run ``kayobe overcloud service upgrade``
-
-You can update the subset of containers or hosts by
-
-.. code-block:: console
-
-   kayobe# kayobe overcloud service upgrade --kolla-tags <service> --limit <hostname> --kolla-limit <hostname>
-
-For more information, see: https://docs.openstack.org/kayobe/latest/upgrading.html
-
 Troubleshooting
 ===============

@@ -378,27 +378,7 @@ To boot an instance on a specific hypervisor

 .. code-block:: console

-   openstack server create --flavor <flavour name>--network <network name> --key-name <key> --image <Image name> --os-compute-api-version 2.74 --host <hypervisor hostname> <vm name>
-
-Cleanup Procedures
-==================
-
-OpenStack services can sometimes fail to remove all resources correctly. This
-is the case with Magnum, which fails to clean up users in its domain after
-clusters are deleted. `A patch has been submitted to stable branches
-<https://review.opendev.org/#/q/Ibadd5b57fe175bb0b100266e2dbcc2e1ea4efcf9>`__.
-Until this fix becomes available, if Magnum is in use, administrators can
-perform the following cleanup procedure regularly:
-
-.. code-block:: console
-
-   for user in $(openstack user list --domain magnum -f value -c Name | grep -v magnum_trustee_domain_admin); do
-     if openstack coe cluster list -c uuid -f value | grep -q $(echo $user | sed 's/_[0-9a-f]*$//'); then
-       echo "$user still in use, not deleting"
-     else
-       openstack user delete --domain magnum $user
-     fi
-   done
+   openstack server create --flavor <flavour name> --network <network name> --key-name <key name> --image <image name> --os-compute-api-version 2.74 --host <hypervisor hostname> <vm name>

 OpenSearch indexes retention
 =============================

doc/source/operations/customising-horizon.rst

Lines changed: 3 additions & 3 deletions
@@ -113,6 +113,6 @@ If the ``horizon`` container is restarting with the following error:
    /var/lib/kolla/venv/bin/python /var/lib/kolla/venv/bin/manage.py compress --force
    CommandError: An error occurred during rendering /var/lib/kolla/venv/lib/python3.6/site-packages/openstack_dashboard/templates/horizon/_scripts.html: Couldn't find any precompiler in COMPRESS_PRECOMPILERS setting for mimetype '\'text/javascript\''.

-It can be resolved by dropping cached content with ``docker restart
-memcached``. Note this will log out users from Horizon, as Django sessions are
-stored in Memcached.
+It can be resolved by dropping cached content with ``systemctl restart
+kolla-memcached-container``. Note this will log out users from Horizon, as Django
+sessions are stored in Memcached.
