Commit 526261a

Merge pull request #823 from stackhpc/2023.1-zed-merge
2023.1: zed merge
2 parents: a1b1ad8 + 4b261c3

23 files changed: +552 additions, -108 deletions

.github/workflows/stackhpc-container-image-build.yml

Lines changed: 1 addition & 0 deletions
@@ -92,6 +92,7 @@ jobs:
     timeout-minutes: 720
     permissions: {}
     strategy:
+      fail-fast: false
       matrix: ${{ fromJson(needs.generate-tag.outputs.matrix) }}
     needs:
       - generate-tag

doc/source/configuration/index.rst

Lines changed: 1 addition & 0 deletions
@@ -18,3 +18,4 @@ the various features provided.
    wazuh
    vault
    magnum-capi
+   security-hardening

doc/source/configuration/release-train.rst

Lines changed: 17 additions & 0 deletions
@@ -101,6 +101,23 @@ default apt repositories. This can be done on a host-by-host basis by defining
 the variables as host or group vars under ``etc/kayobe/inventory/host_vars`` or
 ``etc/kayobe/inventory/group_vars``.
 
+For Ubuntu-based deployments, Pulp currently `lacks support
+<https://github.com/pulp/pulp_deb/issues/419>`_ for certain types of content,
+including i18n files and command-not-found indices. This breaks APT when the
+``command-not-found`` package is installed:
+
+.. code:: console
+
+   E: Failed to fetch https://pulp.example.com/pulp/content/ubuntu/jammy-security/development/dists/jammy-security/main/cnf/Commands-amd64 404 Not Found
+
+The ``purge-command-not-found.yml`` custom playbook can be used to uninstall
+the package before running any other APT commands. It may be installed as a
+:kayobe-doc:`pre-hook <custom-ansible-playbooks.html#hooks>` to the ``host
+configure`` commands. Note that if used as a hook, this playbook matches all
+hosts, so it will run against the seed even when running ``overcloud host
+configure``. Depending on the stage of deployment, some hosts may be
+unreachable.
+
 For Rocky Linux based systems, package manager configuration is provided by
 ``stackhpc_dnf_repos`` in ``etc/kayobe/dnf.yml``, which points to package
 repositories on the local Pulp server. To use this configuration, the
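The pre-hook installation is not shown in this commit; a minimal sketch, assuming the standard Kayobe hook layout (``hooks/<command>/pre.d``) with a numeric filename prefix for ordering — adjust paths to your deployment:

```shell
# Sketch: register purge-command-not-found.yml as a pre-hook of
# "kayobe overcloud host configure". The hooks/<command>/pre.d layout and
# numeric ordering prefix follow the Kayobe custom playbook hook convention.
KAYOBE_CONFIG_PATH=${KAYOBE_CONFIG_PATH:-$HOME/kayobe-config/etc/kayobe}
mkdir -p "$KAYOBE_CONFIG_PATH/hooks/overcloud-host-configure/pre.d"
# Symlink the custom playbook into the pre.d directory so it runs first.
ln -sf ../../../ansible/purge-command-not-found.yml \
  "$KAYOBE_CONFIG_PATH/hooks/overcloud-host-configure/pre.d/10-purge-command-not-found.yml"
```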
doc/source/configuration/security-hardening.rst

Lines changed: 42 additions & 0 deletions

@@ -0,0 +1,42 @@
+==================
+Security Hardening
+==================
+
+CIS Benchmark Hardening
+-----------------------
+
+The roles from the `Ansible-Lockdown <https://github.com/ansible-lockdown>`_
+project are used to harden hosts in accordance with the CIS benchmark criteria.
+They will not bring your benchmark score to 100%, but should provide a
+significant improvement over an unhardened system. A typical score would be 70%.
+
+The following operating systems are supported:
+
+- Ubuntu 22.04
+- Rocky 9
+
+Configuration
+-------------
+
+Some overrides to the role defaults are provided in
+``$KAYOBE_CONFIG_PATH/inventory/group_vars/overcloud/cis``. These may not be
+suitable for all deployments, so some fine-tuning may be required. For
+instance, you may want different rules on a network node compared to a
+controller. It is best to consult the upstream role documentation for details
+about what each variable does. The documentation can be found here:
+
+- `Ubuntu 22.04 <https://github.com/ansible-lockdown/UBUNTU22-CIS>`__
+- `Rocky 9 <https://github.com/ansible-lockdown/RHEL9-CIS>`__
+
+Running the playbooks
+---------------------
+
+As there is potential for unintended side effects when applying the hardening
+playbooks, they are not currently enabled by default. It is recommended that
+they first be applied to a representative staging environment to determine
+whether workloads or API requests are affected by any configuration changes.
+
+.. code-block:: console
+
+   kayobe playbook run $KAYOBE_CONFIG_PATH/ansible/cis.yml
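Overrides sit alongside the provided defaults as ordinary group vars. A hedged sketch of adding a local override file — the variable name ``rhel9cis_rule_1_1_1_1``, its value, and the ``local-overrides.yml`` filename are illustrative, not taken from the commit; consult the upstream role docs for real variable names:

```shell
# Sketch: add a local override for one Ansible-Lockdown rule variable.
# The variable name and filename below are illustrative only.
KAYOBE_CONFIG_PATH=${KAYOBE_CONFIG_PATH:-$HOME/kayobe-config/etc/kayobe}
mkdir -p "$KAYOBE_CONFIG_PATH/inventory/group_vars/overcloud/cis"
cat > "$KAYOBE_CONFIG_PATH/inventory/group_vars/overcloud/cis/local-overrides.yml" <<'EOF'
---
# Example only: disable a single benchmark rule that conflicts with local policy.
rhel9cis_rule_1_1_1_1: false
EOF
```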

etc/kayobe/ansible/cis.yml

Lines changed: 13 additions & 7 deletions
@@ -4,12 +4,18 @@
   hosts: overcloud
   become: true
   tasks:
-    - name: Remove /etc/motd
-      # See remediation in:
-      # https://github.com/wazuh/wazuh/blob/bfa4efcf11e288c0a8809dc0b45fdce42fab8e0d/ruleset/sca/centos/8/cis_centos8_linux.yml#L777
-      file:
-        path: /etc/motd
-        state: absent
+    - name: Ensure the cron package is installed on ubuntu
+      package:
+        name: cron
+        state: present
+      when: ansible_facts.distribution == 'Ubuntu'
 
     - include_role:
-        name: ansible-lockdown.rhel8_cis
+        name: ansible-lockdown.rhel9_cis
+      when: ansible_facts.os_family == 'RedHat' and ansible_facts.distribution_major_version == '9'
+      tags: always
+
+    - include_role:
+        name: ansible-lockdown.ubuntu22_cis
+      when: ansible_facts.distribution == 'Ubuntu' and ansible_facts.distribution_major_version == '22'
+      tags: always

etc/kayobe/ansible/nova-compute-disable.yml

Lines changed: 1 addition & 0 deletions
@@ -11,6 +11,7 @@
     - name: Set up openstack cli virtualenv
       pip:
         virtualenv: "{{ venv }}"
+        virtualenv_command: "/usr/bin/python3 -m venv"
         name:
           - python-openstackclient
         state: latest

etc/kayobe/ansible/nova-compute-drain.yml

Lines changed: 1 addition & 0 deletions
@@ -11,6 +11,7 @@
     - name: Set up openstack cli virtualenv
      pip:
        virtualenv: "{{ venv }}"
+       virtualenv_command: "/usr/bin/python3 -m venv"
        name:
          - python-openstackclient
        state: latest

etc/kayobe/ansible/nova-compute-enable.yml

Lines changed: 1 addition & 0 deletions
@@ -11,6 +11,7 @@
     - name: Set up openstack cli virtualenv
       pip:
         virtualenv: "{{ venv }}"
+        virtualenv_command: "/usr/bin/python3 -m venv"
         name:
           - python-openstackclient
         state: latest

etc/kayobe/ansible/rabbitmq-reset.yml

Lines changed: 3 additions & 3 deletions
@@ -1,6 +1,6 @@
 ---
 # Reset a broken RabbitMQ cluster.
-# Also restarts OpenStack services which may be broken.
+# Also restarts all OpenStack services using RabbitMQ.
 
 - name: Reset RabbitMQ
   hosts: controllers
@@ -65,7 +65,7 @@
   tags:
     - restart-openstack
   tasks:
-    # The following services can have problems if the cluster gets broken.
+    # The following services use RabbitMQ.
     - name: Restart OpenStack services
       shell: >-
-        systemctl -a | egrep '(cinder|heat|ironic|keystone|magnum|neutron|nova)' | awk '{ print $1 }' | xargs systemctl restart
+        systemctl -a | egrep '(barbican|blazar|cinder|cloudkitty|designate|heat|ironic|keystone|magnum|manila|neutron|nova|octavia)' | awk '{ print $1 }' | xargs systemctl restart
etc/kayobe/ansible/rekey-hosts.yml

Lines changed: 117 additions & 0 deletions
@@ -0,0 +1,117 @@
+---
+# Playbook to rotate SSH keys across the cloud. By default it will rotate the
+# standard keys used by kayobe/kolla-ansible, but it can be configured for any
+# keys.
+
+- name: Rekey hosts
+  hosts: overcloud,seed,seed-hypervisor,infra-vms
+  gather_facts: false
+  vars:
+    # The existing key is the key that is currently used to access overcloud hosts
+    existing_private_key_path: "{{ ssh_private_key_path }}"
+    existing_public_key_path: "{{ ssh_public_key_path }}"
+    # The new key is the key that will be generated by this playbook
+    new_private_key_path: "{{ ssh_private_key_path }}"
+    new_public_key_path: "{{ ssh_public_key_path }}"
+    new_key_type: "{{ ssh_key_type }}"
+    # The existing key will locally be moved to deprecated_key_path once it is replaced
+    deprecated_key_path: ~/old_ssh_key
+    rekey_users:
+      - stack
+      - kolla
+    rekey_remove_existing_key: false
+  tasks:
+    - name: Stat existing private key file
+      ansible.builtin.stat:
+        path: "{{ existing_private_key_path }}"
+      register: stat_result
+      delegate_to: localhost
+      run_once: true
+
+    - name: Fail when existing private key does not exist
+      ansible.builtin.fail:
+        msg: "No existing private key file found. Check existing_private_key_path is set correctly."
+      when:
+        - not stat_result.stat.exists
+      delegate_to: localhost
+      run_once: true
+
+    - name: Stat existing public key file
+      ansible.builtin.stat:
+        path: "{{ existing_public_key_path }}"
+      register: stat_result
+      delegate_to: localhost
+      run_once: true
+
+    - name: Fail when existing public key does not exist
+      ansible.builtin.fail:
+        msg: "No existing public key file found. Check existing_public_key_path is set correctly."
+      when:
+        - not stat_result.stat.exists
+      delegate_to: localhost
+      run_once: true
+
+    - name: Generate a new SSH key
+      community.crypto.openssh_keypair:
+        path: "{{ existing_private_key_path }}_new"
+        type: "{{ new_key_type }}"
+      delegate_to: localhost
+      run_once: true
+
+    - name: Set new authorized keys
+      vars:
+        lookup_path: "{{ existing_private_key_path }}_new.pub"
+      ansible.posix.authorized_key:
+        user: "{{ item }}"
+        state: present
+        key: "{{ lookup('file', lookup_path) }}"
+      loop: "{{ rekey_users }}"
+      become: true
+
+    - name: Locally deprecate existing key (private)
+      command: "mv {{ existing_private_key_path }} {{ deprecated_key_path }}"
+      delegate_to: localhost
+      run_once: true
+
+    - name: Locally deprecate existing key (public)
+      command: "mv {{ existing_public_key_path }} {{ deprecated_key_path }}.pub"
+      delegate_to: localhost
+      run_once: true
+
+    - name: Locally promote new key (private)
+      command: "mv {{ existing_private_key_path }}_new {{ new_private_key_path }}"
+      delegate_to: localhost
+      run_once: true
+
+    - name: Locally promote new key (public)
+      command: "mv {{ existing_private_key_path }}_new.pub {{ new_public_key_path }}"
+      delegate_to: localhost
+      run_once: true
+
+    - block:
+        - name: Stat old key file
+          ansible.builtin.stat:
+            path: "{{ deprecated_key_path }}.pub"
+          register: stat_result
+          delegate_to: localhost
+          run_once: true
+
+        - name: Fail when deprecated public key does not exist
+          ansible.builtin.fail:
+            msg: "No deprecated public key file found. Check deprecated_key_path is set correctly."
+          when:
+            - not stat_result.stat.exists
+          delegate_to: localhost
+          run_once: true
+
+        - name: Remove old key from hosts
+          vars:
+            lookup_path: "{{ deprecated_key_path }}.pub"
+          ansible.posix.authorized_key:
+            user: "{{ item }}"
+            state: absent
+            key: "{{ lookup('file', lookup_path) }}"
+          loop: "{{ rekey_users }}"
+          become: true
+      tags: remove-key
+      when: rekey_remove_existing_key | bool
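The playbook runs like any other Kayobe custom playbook; a hedged usage sketch (the ``-e`` override is only needed when the old key should also be removed from the hosts' authorized keys):

```console
kayobe playbook run $KAYOBE_CONFIG_PATH/ansible/rekey-hosts.yml
kayobe playbook run $KAYOBE_CONFIG_PATH/ansible/rekey-hosts.yml -e rekey_remove_existing_key=true
```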

etc/kayobe/ansible/requirements.yml

Lines changed: 10 additions & 3 deletions
@@ -12,9 +12,16 @@ collections:
     version: 2.4.0
 roles:
   - src: stackhpc.vxlan
-  - name: ansible-lockdown.rhel8_cis
-    src: https://github.com/ansible-lockdown/RHEL8-CIS
-    version: 1.3.0
+  - name: ansible-lockdown.ubuntu22_cis
+    src: https://github.com/stackhpc/UBUNTU22-CIS
+    # FIXME: Waiting for https://github.com/ansible-lockdown/UBUNTU22-CIS/pull/174
+    # to be in a tagged release.
+    version: bugfix/inject-facts
+  - name: ansible-lockdown.rhel9_cis
+    src: https://github.com/stackhpc/RHEL9-CIS
+    # FIXME: Waiting for https://github.com/ansible-lockdown/RHEL9-CIS/pull/115
+    # to be in a tagged release.
+    version: bugfix/inject-facts
   - name: wazuh-ansible
     src: https://github.com/stackhpc/wazuh-ansible
     version: stackhpc
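With the requirements file updated, the roles and collections are fetched with ``ansible-galaxy``; a hedged sketch, assuming the standard stackhpc-kayobe-config layout for the install paths (your deployment tooling may already wrap this step):

```console
ansible-galaxy role install -r etc/kayobe/ansible/requirements.yml -p etc/kayobe/ansible/roles
ansible-galaxy collection install -r etc/kayobe/ansible/requirements.yml -p etc/kayobe/ansible/collections
```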

etc/kayobe/ansible/wazuh-agent.yml

Lines changed: 3 additions & 1 deletion
@@ -28,7 +28,9 @@
       owner: wazuh
       group: wazuh
       block: sca.remote_commands=1
-    when: custom_sca_policies.files | length > 0
+    when:
+      - custom_sca_policies_folder.stat.exists
+      - custom_sca_policies.files | length > 0
     notify:
       - Restart wazuh-agent
