
Commit 25a29c9

Merge branch 'stackhpc/xena' into bump-bifrost-xena-tag
2 parents: bd6d2a2 + 94bd7cb

4 files changed: +86 -8 lines changed

doc/source/configuration/release-train.rst

Lines changed: 10 additions & 0 deletions
@@ -170,6 +170,16 @@ promoted to production:
 
     kayobe playbook run $KAYOBE_CONFIG_PATH/ansible/pulp-repo-promote-production.yml
 
+Synchronising all Kolla container images can take a long time. A limited list
+of images can be synchronised using the ``stackhpc_pulp_images_kolla_filter``
+variable, which accepts a whitespace-separated list of regular expressions
+matching Kolla image names. Usage is similar to ``kolla-build`` CLI arguments.
+For example:
+
+.. code-block:: console
+
+   kayobe playbook run $KAYOBE_CONFIG_PATH/ansible/pulp-container-sync.yml -e stackhpc_pulp_images_kolla_filter='"^glance nova-compute$"'
+
 Initial seed deployment
 -----------------------
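As a sketch of how the filter behaves: each whitespace-separated entry is treated as a regular expression matched against Kolla image names, so a hypothetical variation on the example above that synchronises only images whose names start with ``ironic`` would be:

.. code-block:: console

   kayobe playbook run $KAYOBE_CONFIG_PATH/ansible/pulp-container-sync.yml -e stackhpc_pulp_images_kolla_filter='"^ironic"'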

doc/source/configuration/wazuh.rst

Lines changed: 6 additions & 8 deletions
@@ -2,8 +2,8 @@
 Wazuh
 =====
 
-Wazuh Manager
-=============
+Wazuh Manager Host
+==================
 
 Provision using infra-vms
 -------------------------
@@ -288,7 +288,7 @@ Encrypt the keys (and remember to commit to git):
 ``ansible-vault encrypt --vault-password-file ~/vault.pass $KAYOBE_CONFIG_PATH/ansible/wazuh/certificates/certs/*.key``
 
 Verification
-==============
+------------
 
 The Wazuh portal should be accessible on port 443 of the Wazuh
 manager’s IPs (using HTTPS, with the root CA cert in ``etc/kayobe/ansible/wazuh/certificates/wazuh-certificates/root-ca.pem``).
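A quick way to confirm the portal is reachable is a TLS check with curl against the certificate path above; this is only a sketch, and the ``<wazuh-manager-ip>`` placeholder is an assumption rather than a value defined in the configuration:

.. code-block:: console

   curl --cacert etc/kayobe/ansible/wazuh/certificates/wazuh-certificates/root-ca.pem https://<wazuh-manager-ip>:443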
@@ -300,11 +300,9 @@ Troubleshooting
 
 Logs are in ``/var/log/wazuh-indexer/wazuh.log``. There are also logs in the journal.
 
-============
 Wazuh agents
 ============
 
-
 Wazuh agent playbook is located in ``etc/kayobe/ansible/wazuh-agent.yml``.
 
 Wazuh agent variables file is located in ``etc/kayobe/inventory/group_vars/wazuh-agent/wazuh-agent``.
@@ -318,13 +316,13 @@ Deploy the Wazuh agents:
 ``kayobe playbook run $KAYOBE_CONFIG_PATH/ansible/wazuh-agent.yml``
 
 Verification
-=============
+------------
 
 The Wazuh agents should register with the Wazuh manager. This can be verified via the agents page in Wazuh Portal.
 Check CIS benchmark output in agent section.
 
-Additional resources:
-=====================
+Additional resources
+--------------------
 
 For times when you need to upgrade wazuh with elasticsearch to version with opensearch or you just need to deinstall all wazuh components:
 Wazuh purge script: https://github.com/stackhpc/wazuh-server-purge
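Relating to the agent verification step above: besides the portal, registration can also be checked on the manager itself. This is only a sketch using Wazuh's stock ``agent_control`` tool, and it assumes the manager is installed natively under ``/var/ossec``:

.. code-block:: console

   sudo /var/ossec/bin/agent_control -l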
etc/kayobe/ansible/ovn-fix-chassis-priorities.yml

Lines changed: 69 additions & 0 deletions
@@ -0,0 +1,69 @@
---
# Sometimes, typically after restarting OVN services, the priorities of entries
# in the ha_chassis and gateway_chassis tables in the OVN northbound database
# can become misaligned. This results in broken routing for external (bare
# metal/SR-IOV) ports.

# This playbook can be used to fix the issue by realigning the priorities of
# the table entries. It does so by assigning the highest priority to the
# "first" (sorted alphabetically) OVN NB DB host. This results in all gateways
# being scheduled to a single host, but is less complicated than trying to
# balance them (and it's also not clear to me how to map between individual
# ha_chassis and gateway_chassis entries).

# The playbook can be run as follows:
# kayobe playbook run $KAYOBE_CONFIG_PATH/ansible/ovn-fix-chassis-priorities.yml

# If the 'controllers' group does not align with the group used to deploy the
# OVN NB DB, this can be overridden by passing the following:
# '-e ovn_nb_db_group=some_other_group'

- name: Find OVN NB DB leader
  hosts: "{{ ovn_nb_db_group | default('controllers') }}"
  tasks:
    - name: Find the OVN NB DB leader
      command: docker exec -it ovn_nb_db ovn-nbctl get-connection
      changed_when: false
      failed_when: false
      register: ovn_check_result
      check_mode: no

    - name: Group hosts by leader/follower role
      group_by:
        key: "ovn_nb_{{ 'leader' if ovn_check_result.rc == 0 else 'follower' }}"
      changed_when: false

    - name: Assert one leader exists
      assert:
        that:
          - groups['ovn_nb_leader'] | default([]) | length == 1

- name: Fix OVN chassis priorities
  hosts: ovn_nb_leader
  vars:
    ovn_nb_db_group: controllers
    ovn_nb_db_hosts_sorted: "{{ query('inventory_hostnames', ovn_nb_db_group) | sort | list }}"
    ha_chassis_max_priority: 32767
    gateway_chassis_max_priority: "{{ ovn_nb_db_hosts_sorted | length }}"
  tasks:
    - name: Fix ha_chassis priorities
      command: >-
        docker exec -it ovn_nb_db
        bash -c '
        ovn-nbctl find ha_chassis chassis_name={{ item }} |
        awk '\''$1 == "_uuid" { print $3 }'\'' |
        while read uuid; do ovn-nbctl set ha_chassis $uuid priority={{ priority }}; done'
      loop: "{{ ovn_nb_db_hosts_sorted }}"
      vars:
        priority: "{{ ha_chassis_max_priority | int - ovn_nb_db_hosts_sorted.index(item) }}"

    - name: Fix gateway_chassis priorities
      command: >-
        docker exec -it ovn_nb_db
        bash -c '
        ovn-nbctl find gateway_chassis chassis_name={{ item }} |
        awk '\''$1 == "_uuid" { print $3 }'\'' |
        while read uuid; do ovn-nbctl set gateway_chassis $uuid priority={{ priority }}; done'
      loop: "{{ ovn_nb_db_hosts_sorted }}"
      vars:
        priority: "{{ gateway_chassis_max_priority | int - ovn_nb_db_hosts_sorted.index(item) }}"

releasenotes/config.yaml

Lines changed: 1 addition & 0 deletions
@@ -1,3 +1,4 @@
 ---
 # This needs to be updated to the latest release.
 release_tag_re: stackhpc/11\.\d+\.\d+\.\d
+ignore_null_merges: false
