Commit 0de8324

Merge branch 'stackhpc/yoga' into image_cloud_init_c9s

2 parents d29fb8d + 83a3c8f, commit 0de8324

30 files changed: +2148 -63 lines changed

doc/source/configuration/monitoring.rst

Lines changed: 36 additions & 0 deletions
@@ -136,3 +136,39 @@ mgrs group and list them as the endpoints for prometheus. Additionally,
 depending on your configuration, you may need to set the
 ``kolla_enable_prometheus_ceph_mgr_exporter`` variable to ``true`` in order to
 enable the ceph mgr exporter.
+
+OpenStack Capacity
+==================
+
+OpenStack Capacity allows you to see how much space you have available
+in your cloud. StackHPC Kayobe Config includes this exporter by default,
+and some variables must be set to allow deployment.
+
+To successfully deploy OpenStack Capacity, you are required to specify
+the OpenStack application credentials in ``kayobe/secrets.yml`` as:
+
+.. code-block:: yaml
+
+   secrets_os_exporter_auth_url: <some_auth_url>
+   secrets_os_exporter_credential_id: <some_credential_id>
+   secrets_os_exporter_credential_secret: <some_credential_secret>
+
+After defining your credentials, you may deploy OpenStack Capacity
+using the ``ansible/deploy-os-capacity-exporter.yml`` Ansible playbook
+via Kayobe:
+
+.. code-block:: console
+
+   kayobe playbook run ansible/deploy-os-capacity-exporter.yml
+
+You must reconfigure the Prometheus, Grafana and HAProxy services
+following deployment. To do this, run the following Kayobe command:
+
+.. code-block:: console
+
+   kayobe overcloud service reconfigure -kt grafana,prometheus,haproxy
+
+If you notice ``HaproxyServerDown`` or ``HaproxyBackendDown`` Prometheus
+alerts after deployment, it is likely that the os_exporter secrets have not
+been set correctly. Double-check that you have entered the correct
+authentication information appropriate to your cloud and re-deploy.
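The application credential referenced above can be created beforehand with the OpenStack CLI. A minimal sketch (the credential name ``os-capacity`` is illustrative, and any roles required depend on your cloud):

.. code-block:: console

   openstack application credential create os-capacity

The command output includes the credential ID and secret, which map to ``secrets_os_exporter_credential_id`` and ``secrets_os_exporter_credential_secret`` above.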

doc/source/configuration/release-train.rst

Lines changed: 10 additions & 0 deletions
@@ -192,6 +192,16 @@ promoted to production:

    kayobe playbook run $KAYOBE_CONFIG_PATH/ansible/pulp-repo-promote-production.yml

+Synchronising all Kolla container images can take a long time. A limited list
+of images can be synchronised using the ``stackhpc_pulp_images_kolla_filter``
+variable, which accepts a whitespace-separated list of regular expressions
+matching Kolla image names. Usage is similar to ``kolla-build`` CLI arguments.
+For example:
+
+.. code-block:: console
+
+   kayobe playbook run $KAYOBE_CONFIG_PATH/ansible/pulp-container-sync.yml -e stackhpc_pulp_images_kolla_filter='"^glance nova-compute$"'
+
 Initial seed deployment
 -----------------------
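As a further illustration (this pattern is an assumption, not part of the commit), a single anchored prefix would synchronise every image whose name starts with ``nova``:

.. code-block:: console

   kayobe playbook run $KAYOBE_CONFIG_PATH/ansible/pulp-container-sync.yml -e stackhpc_pulp_images_kolla_filter='"^nova"'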

doc/source/configuration/wazuh.rst

Lines changed: 26 additions & 11 deletions
@@ -17,8 +17,8 @@ The short version
 #. Deploy the Wazuh agents: ``kayobe playbook run $KAYOBE_CONFIG_PATH/ansible/wazuh-agent.yml``


-Wazuh Manager
-=============
+Wazuh Manager Host
+==================

 Provision using infra-vms
 -------------------------
@@ -57,7 +57,9 @@ Define VM sizing in ``etc/kayobe/inventory/group_vars/wazuh-manager/infra-vms``:
    infra_vm_data_capacity: "200G"


-Optional: define LVM volumes ``etc/kayobe/inventory/group_vars/wazuh-manager/lvm``:
+Optional: define LVM volumes in ``etc/kayobe/inventory/group_vars/wazuh-manager/lvm``.
+``/var/ossec`` often requires greater storage space, and ``/var/lib/wazuh-indexer``
+may be beneficial too.

 .. code-block:: console

@@ -73,7 +75,7 @@ Optional: define LVM volumes in ``etc/kayobe/inventory/group_vars/wazuh-manager/lvm
         size: "100%VG"
         filesystem: "ext4"
         mount: true
-        mntp: /var/lib/elasticsearch”
+        mntp: "/var/ossec"
         create: true

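For context, a fuller sketch of such an LVM definition, following Kayobe's ``lvm_groups`` format (the volume group name and disk are illustrative assumptions):

.. code-block:: console

   lvm_groups:
     - vgname: data
       disks:
         - /dev/vdb
       create: true
       lvnames:
         - lvname: ossec
           size: "100%VG"
           create: true
           filesystem: "ext4"
           mount: true
           mntp: "/var/ossec"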
@@ -249,7 +251,7 @@ It will be used by wazuh secrets playbook to generate wazuh secrets vault file.
 .. code-block:: console

    kayobe playbook run $KAYOBE_CONFIG_PATH/ansible/wazuh-secrets.yml
-   ansible-vault encrypt --vault-password-file ~/vault.pass $KAYOBE_CONFIG_PATH/inventory/group_vars/wazuh/wazuh-manager/wazuh-secrets
+   ansible-vault encrypt --vault-password-file ~/vault.pass $KAYOBE_CONFIG_PATH/wazuh-secrets.yml

TLS (optional)
@@ -288,6 +290,21 @@ Example OpenSSL rune to convert to PKCS#8:

 TODO: document how to use a local certificate. Do we need to override all certificates?

+Custom SCA Policies (optional)
+------------------------------
+
+Wazuh ships with a large selection of Security Configuration Assessment
+rulesets. However, you may find you want to add more. This can be achieved via
+`custom policies <https://documentation.wazuh.com/current/user-manual/capabilities/sec-config-assessment/how-to-configure.html>`_.
+
+SKC supports this automatically: just add the policy file to
+``{{ kayobe_env_config_path }}/wazuh/custom_sca_policies``.
+
+Currently, Wazuh does not ship with a CIS benchmark for Rocky 9. You can find
+the in-development policy here: https://github.com/wazuh/wazuh/pull/17810. To
+include this in your deployment, simply copy it to
+``{{ kayobe_env_config_path }}/wazuh/custom_sca_policies/cis_rocky_linux_9.yml``.
+
 Deploy
 ------
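A minimal sketch of pulling that policy in (the environment name ``production`` and the local download location are illustrative assumptions):

.. code-block:: console

   mkdir -p etc/kayobe/environments/production/wazuh/custom_sca_policies
   cp ~/Downloads/cis_rocky_linux_9.yml etc/kayobe/environments/production/wazuh/custom_sca_policies/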

@@ -303,7 +320,7 @@ Encrypt the keys (and remember to commit to git):
 ``ansible-vault encrypt --vault-password-file ~/vault.pass $KAYOBE_CONFIG_PATH/ansible/wazuh/certificates/certs/*.key``

 Verification
-==============
+------------

 The Wazuh portal should be accessible on port 443 of the Wazuh
 manager’s IPs (using HTTPS, with the root CA cert in ``etc/kayobe/ansible/wazuh/certificates/wazuh-certificates/root-ca.pem``).
@@ -315,11 +332,9 @@ Troubleshooting

 Logs are in ``/var/log/wazuh-indexer/wazuh.log``. There are also logs in the journal.

-============
 Wazuh agents
 ============

-
 Wazuh agent playbook is located in ``etc/kayobe/ansible/wazuh-agent.yml``.

 Wazuh agent variables file is located in ``etc/kayobe/inventory/group_vars/wazuh-agent/wazuh-agent``.

@@ -333,13 +348,13 @@ Deploy the Wazuh agents:

 ``kayobe playbook run $KAYOBE_CONFIG_PATH/ansible/wazuh-agent.yml``

 Verification
-=============
+------------

 The Wazuh agents should register with the Wazuh manager. This can be verified via the agents page in Wazuh Portal.
 Check CIS benchmark output in agent section.

-Additional resources:
-=====================
+Additional resources
+--------------------

 For times when you need to upgrade Wazuh with Elasticsearch to a version with OpenSearch, or you just need to uninstall all Wazuh components, use the
 Wazuh purge script: https://github.com/stackhpc/wazuh-server-purge

etc/kayobe/ansible/deploy-os-capacity-exporter.yml

Lines changed: 29 additions & 0 deletions

@@ -0,0 +1,29 @@
+---
+- hosts: monitoring
+  gather_facts: false
+
+  tasks:
+    - name: Create os-capacity directory
+      ansible.builtin.file:
+        path: /opt/kayobe/os-capacity/
+        state: directory
+
+    - name: Template clouds.yml
+      ansible.builtin.template:
+        src: templates/os_capacity-clouds.yml.j2
+        dest: /opt/kayobe/os-capacity/clouds.yaml
+
+    - name: Ensure os_capacity container is running
+      docker_container:
+        name: os_capacity
+        image: ghcr.io/stackhpc/os-capacity:master
+        env:
+          OS_CLOUD: openstack
+          OS_CLIENT_CONFIG_FILE: /etc/openstack/clouds.yaml
+        mounts:
+          - type: bind
+            source: /opt/kayobe/os-capacity/
+            target: /etc/openstack/
+        network_mode: host
+        restart_policy: unless-stopped
+      become: true
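A quick post-deployment check (illustrative, not part of the commit) is to confirm the container is running on a monitoring host:

.. code-block:: console

   docker ps --filter name=os_capacity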

etc/kayobe/ansible/ovn-fix-chassis-priorities.yml

Lines changed: 69 additions & 0 deletions

@@ -0,0 +1,69 @@
+---
+# Sometimes, typically after restarting OVN services, the priorities of entries
+# in the ha_chassis and gateway_chassis tables in the OVN northbound database
+# can become misaligned. This results in broken routing for external (bare
+# metal/SR-IOV) ports.
+
+# This playbook can be used to fix the issue by realigning the priorities of
+# the table entries. It does so by assigning the highest priority to the
+# "first" (sorted alphabetically) OVN NB DB host. This results in all gateways
+# being scheduled to a single host, but is less complicated than trying to
+# balance them (and it's also not clear to me how to map between individual
+# ha_chassis and gateway_chassis entries).
+
+# The playbook can be run as follows:
+# kayobe playbook run $KAYOBE_CONFIG_PATH/ansible/ovn-fix-chassis-priorities.yml
+
+# If the 'controllers' group does not align with the group used to deploy the
+# OVN NB DB, this can be overridden by passing the following:
+# '-e ovn_nb_db_group=some_other_group'
+
+- name: Find the OVN NB DB leader
+  hosts: "{{ ovn_nb_db_group | default('controllers') }}"
+  tasks:
+    - name: Find the OVN NB DB leader
+      command: docker exec -it ovn_nb_db ovn-nbctl get-connection
+      changed_when: false
+      failed_when: false
+      register: ovn_check_result
+      check_mode: no
+
+    - name: Group hosts by leader/follower role
+      group_by:
+        key: "ovn_nb_{{ 'leader' if ovn_check_result.rc == 0 else 'follower' }}"
+      changed_when: false
+
+    - name: Assert one leader exists
+      assert:
+        that:
+          - groups['ovn_nb_leader'] | default([]) | length == 1
+
+- name: Fix OVN chassis priorities
+  hosts: ovn_nb_leader
+  vars:
+    ovn_nb_db_group: controllers
+    ovn_nb_db_hosts_sorted: "{{ query('inventory_hostnames', ovn_nb_db_group) | sort | list }}"
+    ha_chassis_max_priority: 32767
+    gateway_chassis_max_priority: "{{ ovn_nb_db_hosts_sorted | length }}"
+  tasks:
+    - name: Fix ha_chassis priorities
+      command: >-
+        docker exec -it ovn_nb_db
+        bash -c '
+        ovn-nbctl find ha_chassis chassis_name={{ item }} |
+        awk '\''$1 == "_uuid" { print $3 }'\'' |
+        while read uuid; do ovn-nbctl set ha_chassis $uuid priority={{ priority }}; done'
+      loop: "{{ ovn_nb_db_hosts_sorted }}"
+      vars:
+        priority: "{{ ha_chassis_max_priority | int - ovn_nb_db_hosts_sorted.index(item) }}"
+
+    - name: Fix gateway_chassis priorities
+      command: >-
+        docker exec -it ovn_nb_db
+        bash -c '
+        ovn-nbctl find gateway_chassis chassis_name={{ item }} |
+        awk '\''$1 == "_uuid" { print $3 }'\'' |
+        while read uuid; do ovn-nbctl set gateway_chassis $uuid priority={{ priority }}; done'
+      loop: "{{ ovn_nb_db_hosts_sorted }}"
+      vars:
+        priority: "{{ gateway_chassis_max_priority | int - ovn_nb_db_hosts_sorted.index(item) }}"
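The resulting priorities can be inspected by hand (an illustrative check using standard ``ovn-nbctl`` database commands):

.. code-block:: console

   docker exec -it ovn_nb_db ovn-nbctl --columns=chassis_name,priority list ha_chassis
   docker exec -it ovn_nb_db ovn-nbctl --columns=chassis_name,priority list gateway_chassis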

etc/kayobe/ansible/templates/os_capacity-clouds.yml.j2

Lines changed: 10 additions & 0 deletions

@@ -0,0 +1,10 @@
+clouds:
+  openstack:
+    auth:
+      auth_url: "{{ secrets_os_exporter_auth_url }}"
+      application_credential_id: "{{ secrets_os_exporter_credential_id }}"
+      application_credential_secret: "{{ secrets_os_exporter_credential_secret }}"
+    region_name: "RegionOne"
+    interface: "internal"
+    identity_api_version: 3
+    auth_type: "v3applicationcredential"
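To sanity-check the rendered file (an illustrative command, not part of the commit), point the OpenStack client at it and request a token:

.. code-block:: console

   OS_CLIENT_CONFIG_FILE=/opt/kayobe/os-capacity/clouds.yaml openstack --os-cloud openstack token issue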

etc/kayobe/ansible/wazuh-manager.yml

Lines changed: 57 additions & 0 deletions
@@ -17,6 +17,63 @@
     - role: "{{ playbook_dir }}/roles/wazuh-ansible/wazuh-ansible/roles/wazuh/ansible-filebeat-oss"
     - role: "{{ playbook_dir }}/roles/wazuh-ansible/wazuh-ansible/roles/wazuh/wazuh-dashboard"
   post_tasks:
+    - block:
+        - name: Check if custom SCA policies directory exists
+          stat:
+            path: "{{ local_custom_sca_policies_path }}"
+          register: custom_sca_policies_folder
+          delegate_to: localhost
+          become: no
+
+        - name: Gather list of custom SCA policies
+          find:
+            paths: "{{ local_custom_sca_policies_path }}"
+            patterns: '*.yml'
+          delegate_to: localhost
+          register: custom_sca_policies
+          when: custom_sca_policies_folder.stat.exists
+
+        - name: Allow Wazuh agents to execute commands in SCA policies sent from the Wazuh manager
+          blockinfile:
+            path: "/var/ossec/etc/local_internal_options.conf"
+            state: present
+            owner: wazuh
+            group: wazuh
+            block: |
+              sca.remote_commands=1
+          when: custom_sca_policies.files | length > 0
+
+        - name: Copy custom SCA policy files to Wazuh manager
+          copy:
+            # Note the trailing slash to copy directory contents
+            src: "{{ local_custom_sca_policies_path }}/"
+            dest: "/var/ossec/etc/shared/default/"
+            owner: wazuh
+            group: wazuh
+          when: custom_sca_policies.files | length > 0
+
+        - name: Add custom policy definition(s) to the shared Agent config
+          blockinfile:
+            path: "/var/ossec/etc/shared/default/agent.conf"
+            state: present
+            owner: wazuh
+            group: wazuh
+            marker: "{mark} ANSIBLE MANAGED BLOCK Custom SCA Policies"
+            insertafter: "<!-- Shared agent configuration here -->"
+            block: |
+              {% filter indent(width=2, first=true) %}
+              <sca>
+                <policies>
+                  {% for item in custom_sca_policies.files %}
+                  <policy>etc/shared/{{ item.path | basename }}</policy>
+                  {% endfor %}
+                </policies>
+              </sca>
+              {% endfilter %}
+          when: custom_sca_policies.files | length > 0
+      notify:
+        - Restart wazuh
+
     - name: Set http/s_proxy vars in ossec-init.conf for vulnerability detector
       blockinfile:
         path: "/var/ossec/etc/ossec.conf"
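Given a single policy file ``cis_rocky_linux_9.yml``, the managed block rendered into ``agent.conf`` would look roughly like this (a sketch of the template output, not taken from the commit):

.. code-block:: console

   BEGIN ANSIBLE MANAGED BLOCK Custom SCA Policies
     <sca>
       <policies>
         <policy>etc/shared/cis_rocky_linux_9.yml</policy>
       </policies>
     </sca>
   END ANSIBLE MANAGED BLOCK Custom SCA Policies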

etc/kayobe/dnf.yml

Lines changed: 3 additions & 3 deletions
@@ -112,19 +112,19 @@ dnf_custom_repos_rocky:
   appstream:
     baseurl: "{{ stackhpc_repo_rocky_appstream_url }}"
     description: "Rocky Linux $releasever - AppStream"
-    file: Rocky-AppStream
+    file: "{{ 'Rocky-AppStream' if os_release == '8' else 'rocky' }}"
     gpgkey: file:///etc/pki/rpm-gpg/RPM-GPG-KEY-rockyofficial
     gpgcheck: yes
   baseos:
     baseurl: "{{ stackhpc_repo_rocky_baseos_url }}"
     description: "Rocky Linux $releasever - BaseOS"
-    file: Rocky-BaseOS
+    file: "{{ 'Rocky-BaseOS' if os_release == '8' else 'rocky' }}"
    gpgkey: file:///etc/pki/rpm-gpg/RPM-GPG-KEY-rockyofficial
     gpgcheck: yes
   extras:
     baseurl: "{{ stackhpc_repo_rocky_extras_url }}"
     description: "Rocky Linux $releasever - Extras"
-    file: Rocky-Extras
+    file: "{{ 'Rocky-Extras' if os_release == '8' else 'rocky-extras' }}"
     gpgkey: file:///etc/pki/rpm-gpg/RPM-GPG-KEY-rockyofficial
     gpgcheck: yes
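The conditionals track the repository file naming that changed between Rocky Linux major releases; for reference (observed stock layouts, not from the commit):

.. code-block:: console

   # Rocky Linux 8 ships per-repo files:
   ls /etc/yum.repos.d/
   Rocky-AppStream.repo  Rocky-BaseOS.repo  Rocky-Extras.repo  ...

   # Rocky Linux 9 consolidates them:
   ls /etc/yum.repos.d/
   rocky.repo  rocky-extras.repo  ...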

etc/kayobe/environments/ci-aio/automated-setup.sh

Lines changed: 1 addition & 1 deletion
@@ -6,7 +6,7 @@ cat << EOF | sudo tee -a /etc/hosts
 10.205.3.187 pulp-server pulp-server.internal.sms-cloud
 EOF

-if [ sudo vgdisplay | grep -q lvm2 ]; then
+if sudo vgdisplay | grep -q lvm2; then
     sudo lvextend -L 4G /dev/rootvg/lv_home -r || true
     sudo lvextend -L 4G /dev/rootvg/lv_tmp -r || true
 fi
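The fix matters because ``[`` is itself a command that evaluates test expressions, not pipelines: the original line was parsed as a pipeline from ``[ sudo vgdisplay`` into ``grep -q lvm2 ]``, neither of which does what was intended. Testing the pipeline's exit status directly is the idiomatic form:

.. code-block:: console

   # the exit status of the final command (grep) drives the branch
   if sudo vgdisplay | grep -q lvm2; then
       echo "LVM volume group present"
   fi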

etc/kayobe/inventory/group_vars/wazuh-manager/wazuh-manager

Lines changed: 3 additions & 0 deletions
@@ -24,6 +24,9 @@ local_certs_path: "{{ playbook_dir }}/wazuh/certificates"
 # Ansible control host custom certificates directory
 local_custom_certs_path: "{{ playbook_dir }}/wazuh/custom_certificates"

+# Ansible custom SCA policies directory
+local_custom_sca_policies_path: "{{ kayobe_env_config_path }}/wazuh/custom_sca_policies"
+
 # Indexer variables
 indexer_node_name: "{{ inventory_hostname }}"