
Commit 2f60888

Merge stackhpc/xena into stackhpc/yoga
2 parents 13b7955 + e34f498

File tree: 3 files changed (+85, -8 lines)


doc/source/configuration/release-train.rst

Lines changed: 10 additions & 0 deletions
@@ -192,6 +192,16 @@ promoted to production:
 
     kayobe playbook run $KAYOBE_CONFIG_PATH/ansible/pulp-repo-promote-production.yml
 
+Synchronising all Kolla container images can take a long time. A limited list
+of images can be synchronised using the ``stackhpc_pulp_images_kolla_filter``
+variable, which accepts a whitespace-separated list of regular expressions
+matching Kolla image names. Usage is similar to ``kolla-build`` CLI arguments.
+For example:
+
+.. code-block:: console
+
+   kayobe playbook run $KAYOBE_CONFIG_PATH/ansible/pulp-container-sync.yml -e stackhpc_pulp_images_kolla_filter='"^glance nova-compute$"'
+
 Initial seed deployment
 -----------------------
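The filter variable above is a whitespace-separated list of regular expressions, each matched against Kolla image names. As a rough illustration of how ``'^glance nova-compute$'`` selects images, the two patterns can be emulated with ``grep -E`` over a hypothetical (not exhaustive) list of image names:

```shell
# Hypothetical subset of Kolla image names (illustration only; not the real
# registry contents).
images="glance-api
glance-base
nova-compute
nova-compute-ironic
nova-api"

# The filter '^glance nova-compute$' is two whitespace-separated patterns:
# ^glance and nova-compute$. Emulate the list by OR-ing them in one regex.
printf '%s\n' "$images" | grep -E '^glance|nova-compute$'
# Prints:
# glance-api
# glance-base
# nova-compute
```

Note that the anchors matter: ``nova-compute$`` selects ``nova-compute`` but not ``nova-compute-ironic``, while ``^glance`` selects every image whose name starts with ``glance``.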

doc/source/configuration/wazuh.rst

Lines changed: 6 additions & 8 deletions
@@ -17,8 +17,8 @@ The short version
 #. Deploy the Wazuh agents: ``kayobe playbook run $KAYOBE_CONFIG_PATH/ansible/wazuh-agent.yml``
 
 
-Wazuh Manager
-=============
+Wazuh Manager Host
+==================
 
 Provision using infra-vms
 -------------------------
@@ -303,7 +303,7 @@ Encrypt the keys (and remember to commit to git):
 ``ansible-vault encrypt --vault-password-file ~/vault.pass $KAYOBE_CONFIG_PATH/ansible/wazuh/certificates/certs/*.key``
 
 Verification
-==============
+------------
 
 The Wazuh portal should be accessible on port 443 of the Wazuh
 manager’s IPs (using HTTPS, with the root CA cert in ``etc/kayobe/ansible/wazuh/certificates/wazuh-certificates/root-ca.pem``).
@@ -315,11 +315,9 @@ Troubleshooting
 
 Logs are in ``/var/log/wazuh-indexer/wazuh.log``. There are also logs in the journal.
 
-============
 Wazuh agents
 ============
 
-
 Wazuh agent playbook is located in ``etc/kayobe/ansible/wazuh-agent.yml``.
 
 Wazuh agent variables file is located in ``etc/kayobe/inventory/group_vars/wazuh-agent/wazuh-agent``.
@@ -333,13 +331,13 @@ Deploy the Wazuh agents:
 ``kayobe playbook run $KAYOBE_CONFIG_PATH/ansible/wazuh-agent.yml``
 
 Verification
-=============
+------------
 
 The Wazuh agents should register with the Wazuh manager. This can be verified via the agents page in Wazuh Portal.
 Check CIS benchmark output in agent section.
 
-Additional resources:
-=====================
+Additional resources
+--------------------
 
 For times when you need to upgrade wazuh with elasticsearch to version with opensearch or you just need to deinstall all wazuh components:
 Wazuh purge script: https://github.com/stackhpc/wazuh-server-purge
etc/kayobe/ansible/ovn-fix-chassis-priorities.yml (filename inferred from the playbook's own run command below)

Lines changed: 69 additions & 0 deletions
@@ -0,0 +1,69 @@
---
# Sometimes, typically after restarting OVN services, the priorities of entries
# in the ha_chassis and gateway_chassis tables in the OVN northbound database
# can become misaligned. This results in broken routing for external (bare
# metal/SR-IOV) ports.

# This playbook can be used to fix the issue by realigning the priorities of
# the table entries. It does so by assigning the highest priority to the
# "first" (sorted alphabetically) OVN NB DB host. This results in all gateways
# being scheduled to a single host, but is less complicated than trying to
# balance them (and it's also not clear to me how to map between individual
# ha_chassis and gateway_chassis entries).

# The playbook can be run as follows:
# kayobe playbook run $KAYOBE_CONFIG_PATH/ansible/ovn-fix-chassis-priorities.yml

# If the 'controllers' group does not align with the group used to deploy the
# OVN NB DB, this can be overridden by passing the following:
# '-e ovn_nb_db_group=some_other_group'

- name: Find OVN NB DB leader
  hosts: "{{ ovn_nb_db_group | default('controllers') }}"
  tasks:
    - name: Find the OVN NB DB leader
      command: docker exec -it ovn_nb_db ovn-nbctl get-connection
      changed_when: false
      failed_when: false
      register: ovn_check_result
      check_mode: no

    - name: Group hosts by leader/follower role
      group_by:
        key: "ovn_nb_{{ 'leader' if ovn_check_result.rc == 0 else 'follower' }}"
      changed_when: false

    - name: Assert one leader exists
      assert:
        that:
          - groups['ovn_nb_leader'] | default([]) | length == 1

- name: Fix OVN chassis priorities
  hosts: ovn_nb_leader
  vars:
    ovn_nb_db_group: controllers
    ovn_nb_db_hosts_sorted: "{{ query('inventory_hostnames', ovn_nb_db_group) | sort | list }}"
    ha_chassis_max_priority: 32767
    gateway_chassis_max_priority: "{{ ovn_nb_db_hosts_sorted | length }}"
  tasks:
    - name: Fix ha_chassis priorities
      command: >-
        docker exec -it ovn_nb_db
        bash -c '
        ovn-nbctl find ha_chassis chassis_name={{ item }} |
        awk '\''$1 == "_uuid" { print $3 }'\'' |
        while read uuid; do ovn-nbctl set ha_chassis $uuid priority={{ priority }}; done'
      loop: "{{ ovn_nb_db_hosts_sorted }}"
      vars:
        priority: "{{ ha_chassis_max_priority | int - ovn_nb_db_hosts_sorted.index(item) }}"

    - name: Fix gateway_chassis priorities
      command: >-
        docker exec -it ovn_nb_db
        bash -c '
        ovn-nbctl find gateway_chassis chassis_name={{ item }} |
        awk '\''$1 == "_uuid" { print $3 }'\'' |
        while read uuid; do ovn-nbctl set gateway_chassis $uuid priority={{ priority }}; done'
      loop: "{{ ovn_nb_db_hosts_sorted }}"
      vars:
        priority: "{{ gateway_chassis_max_priority | int - ovn_nb_db_hosts_sorted.index(item) }}"
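The priority scheme in the playbook is simple: sort the OVN NB DB host names alphabetically and decrement from the maximum (32767 for ha_chassis), so the first host always receives the highest priority. A minimal shell sketch of that computation, using hypothetical controller names:

```shell
# Hypothetical controller host names, deliberately out of order; the playbook
# sorts them before assigning priorities.
hosts="ctl2 ctl0 ctl1"
max_priority=32767   # ha_chassis maximum used by the playbook

# Assign max_priority to the alphabetically-first host, then count down,
# mirroring: priority = max_priority - sorted_hosts.index(host)
i=0
for host in $(printf '%s\n' $hosts | sort); do
  echo "$host -> $((max_priority - i))"
  i=$((i + 1))
done
# Prints:
# ctl0 -> 32767
# ctl1 -> 32766
# ctl2 -> 32765
```

Because every entry's priority is derived from the same sorted list, all gateways end up preferring the same host, which is the deliberate trade-off the playbook's comments describe.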
