Commit e19f764

Merge pull request #765 from stackhpc/2023.1-zed-merge
2023.1: zed merge
2 parents 97c2042 + 551677b commit e19f764

20 files changed: +259 −40 lines

.github/workflows/overcloud-host-image-build.yml

Lines changed: 2 additions & 6 deletions
@@ -11,10 +11,6 @@ on:
         description: Build Ubuntu 22.04 Jammy
         type: boolean
         default: true
-      SMS:
-        description: Push images to SMS
-        type: boolean
-        default: true
   secrets:
     KAYOBE_VAULT_PASSWORD:
       required: true
@@ -166,7 +162,7 @@ jobs:
         env:
           OS_APPLICATION_CREDENTIAL_ID: ${{ secrets.OS_APPLICATION_CREDENTIAL_ID }}
           OS_APPLICATION_CREDENTIAL_SECRET: ${{ secrets.OS_APPLICATION_CREDENTIAL_SECRET }}
-        if: inputs.rocky9 && steps.build_rocky_9.outcome == 'success' && inputs.sms
+        if: inputs.rocky9 && steps.build_rocky_9.outcome == 'success'
 
       - name: Build an Ubuntu Jammy 22.04 overcloud host image
         id: build_ubuntu_jammy
@@ -210,7 +206,7 @@ jobs:
         env:
           OS_APPLICATION_CREDENTIAL_ID: ${{ secrets.OS_APPLICATION_CREDENTIAL_ID }}
           OS_APPLICATION_CREDENTIAL_SECRET: ${{ secrets.OS_APPLICATION_CREDENTIAL_SECRET }}
-        if: inputs.ubuntu-jammy && steps.build_ubuntu_jammy.outcome == 'success' && inputs.sms
+        if: inputs.ubuntu-jammy && steps.build_ubuntu_jammy.outcome == 'success'
 
       - name: Upload updated images artifact
         uses: actions/upload-artifact@v3
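With the SMS toggle gone, the workflow's manual-trigger inputs reduce to the per-distribution build switches. The sketch below is hypothetical: the ``rocky9`` and ``ubuntu-jammy`` input names are inferred from the ``inputs.rocky9`` and ``inputs.ubuntu-jammy`` references in the job conditions, and the Rocky description is assumed by symmetry with the Ubuntu one shown in the diff.

```yaml
# Hypothetical sketch of the remaining workflow_dispatch inputs after
# the SMS input is removed; names inferred from the `if:` conditions.
on:
  workflow_dispatch:
    inputs:
      rocky9:
        description: Build Rocky Linux 9
        type: boolean
        default: true
      ubuntu-jammy:
        description: Build Ubuntu 22.04 Jammy
        type: boolean
        default: true
```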

doc/source/configuration/cephadm.rst

Lines changed: 138 additions & 2 deletions
@@ -308,6 +308,136 @@ should be used in the Kolla Manila configuration e.g.:
 
    manila_cephfs_filesystem_name: manila-cephfs
 
+RADOS Gateways
+--------------
+
+RADOS Gateways (RGWs) are defined with the following:
+
+.. code:: yaml
+
+   cephadm_radosgw_services:
+     - id: myrgw
+       count_per_host: 1
+       spec:
+         rgw_frontend_port: 8100
+
+The port chosen must not conflict with any other processes running on the Ceph
+hosts. Port 8100 does not conflict with our default suite of services.
+
+Ceph RGWs require additional configuration to:
+
+* Support both the S3 and Swift APIs.
+
+* Authenticate user access via Keystone.
+
+* Allow cross-project and public object access.
+
+The set of commands below configures all of these.
+
+.. code:: yaml
+
+   # Append the following to cephadm_commands_post:
+   - "config set client.rgw rgw_content_length_compat true"
+   - "config set client.rgw rgw_enable_apis 's3, swift, swift_auth, admin'"
+   - "config set client.rgw rgw_enforce_swift_acls true"
+   - "config set client.rgw rgw_keystone_accepted_admin_roles 'admin'"
+   - "config set client.rgw rgw_keystone_accepted_roles 'member, Member, _member_, admin'"
+   - "config set client.rgw rgw_keystone_admin_domain Default"
+   - "config set client.rgw rgw_keystone_admin_password {{ secrets_ceph_rgw_keystone_password }}"
+   - "config set client.rgw rgw_keystone_admin_project service"
+   - "config set client.rgw rgw_keystone_admin_user 'ceph_rgw'"
+   - "config set client.rgw rgw_keystone_api_version '3'"
+   - "config set client.rgw rgw_keystone_token_cache_size '10000'"
+   - "config set client.rgw rgw_keystone_url https://{{ kolla_internal_fqdn }}:5000"
+   - "config set client.rgw rgw_keystone_verify_ssl false"
+   - "config set client.rgw rgw_max_attr_name_len '1000'"
+   - "config set client.rgw rgw_max_attr_size '1000'"
+   - "config set client.rgw rgw_max_attrs_num_in_req '1000'"
+   - "config set client.rgw rgw_s3_auth_use_keystone true"
+   - "config set client.rgw rgw_swift_account_in_url true"
+   - "config set client.rgw rgw_swift_versioning_enabled true"
+
+As we have configured Ceph to respond to the Swift API, you will need to tell
+Kolla to account for this when registering Swift endpoints with Keystone. Also,
+when ``rgw_swift_account_in_url`` is set, the equivalent Kolla variable should
+be set in the Kolla ``globals.yml`` too:
+
+.. code:: yaml
+
+   ceph_rgw_swift_compatibility: false
+   ceph_rgw_swift_account_in_url: true
+
+``secrets_ceph_rgw_keystone_password`` should be stored in the Kayobe
+``secrets.yml``, and set to the same value as ``ceph_rgw_keystone_password`` in
+the Kolla ``passwords.yml``. As such, you will need to configure Keystone
+before deploying the RADOS gateways. If you are using the Kolla load balancer
+(see :ref:`RGWs-with-hyper-converged-Ceph` for more info), you can specify the
+``haproxy`` and ``loadbalancer`` tags here too:
+
+.. code:: bash
+
+   kayobe overcloud service deploy -kt ceph-rgw,keystone,haproxy,loadbalancer
+
+.. _RGWs-with-hyper-converged-Ceph:
+
+RGWs with hyper-converged Ceph
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+If you are using a hyper-converged Ceph setup (i.e. your OpenStack controllers
+and Ceph storage nodes share the same hosts), you should double-check that
+``rgw_frontend_port`` does not conflict with any processes on the controllers.
+For example, ports 80 and 443 will be bound by the Kolla-deployed HAProxy. You
+should also choose a custom port that does not conflict with any OpenStack
+endpoints (``openstack endpoint list``).
+
+You may also want to use the Kolla-deployed HAProxy to load balance your RGWs.
+This means you will not need to define any Ceph ingress services. Instead, you
+add definitions of your Ceph hosts to the Kolla ``globals.yml``:
+
+.. code:: yaml
+
+   ceph_rgw_hosts:
+     - host: controller1
+       ip: <host IP on storage net>
+       port: 8100
+     - host: controller2
+       ip: <host IP on storage net>
+       port: 8100
+     - host: controller3
+       ip: <host IP on storage net>
+       port: 8100
+
+HA with Ingress services
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+Ingress services are defined with the following. ``id`` should match the name
+(not the ID) of the RGW service to which the ingress will point. ``spec`` is a
+service specification required by Cephadm to deploy the ingress (an HAProxy +
+keepalived pair).
+
+Note that the ``virtual_ip`` here must be different from the Kolla VIP. The
+choice of subnet will depend on your deployment, and can be outside of any
+Ceph networks.
+
+.. code:: yaml
+
+   cephadm_ingress_services:
+     - id: rgw.myrgw
+       spec:
+         frontend_port: 443
+         monitor_port: 1967
+         virtual_ip: 10.66.0.1/24
+         ssl_cert: {example_certificate_chain}
+
+When using ingress services, you will need to stop Kolla from configuring your
+RGWs to use the Kolla-deployed HAProxy. Set the following in the Kolla
+``globals.yml``:
+
+.. code:: yaml
+
+   enable_ceph_rgw_loadbalancer: false
+
 Deployment
 ==========
 
@@ -345,8 +475,14 @@ cephadm.yml playbook to perform post-deployment configuration:
 
    kayobe playbook run $KAYOBE_CONFIG_PATH/ansible/cephadm.yml
 
-The ``cephadm.yml`` playbook imports various other playbooks, which may
-also be run individually to perform specific tasks.
+The ``cephadm.yml`` playbook imports various other playbooks, which may also be
+run individually to perform specific tasks. Note that if you want to deploy
+additional services (such as RGWs or ingress) after an initial deployment, you
+will need to set ``cephadm_bootstrap`` to true. For example:
+
+.. code:: bash
+
+   kayobe playbook run $KAYOBE_CONFIG_PATH/ansible/cephadm.yml -e cephadm_bootstrap=true
 
 Configuration generation
 ------------------------
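The cephadm.rst additions above reference a ``secrets_ceph_rgw_keystone_password`` variable that must live in the Kayobe ``secrets.yml`` and match ``ceph_rgw_keystone_password`` in the Kolla ``passwords.yml``. A minimal sketch of that secrets entry, with a placeholder value:

```yaml
# Sketch of an entry in the Kayobe secrets.yml (Ansible Vault encrypted).
# The value is a placeholder and must match ceph_rgw_keystone_password
# in the Kolla passwords.yml.
secrets_ceph_rgw_keystone_password: "CHANGE-ME-to-match-kolla-passwords"
```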

etc/kayobe/ansible/ovn-fix-chassis-priorities.yml

Lines changed: 12 additions & 8 deletions
@@ -22,19 +22,19 @@
   hosts: "{{ ovn_nb_db_group | default('controllers') }}"
   tasks:
     - name: Find the OVN NB DB leader
-      command: docker exec -it ovn_nb_db ovn-nbctl get-connection
+      ansible.builtin.command: docker exec ovn_nb_db ovn-nbctl get-connection
       changed_when: false
       failed_when: false
       register: ovn_check_result
-      check_mode: no
+      check_mode: false
 
     - name: Group hosts by leader/follower role
-      group_by:
+      ansible.builtin.group_by:
         key: "ovn_nb_{{ 'leader' if ovn_check_result.rc == 0 else 'follower' }}"
       changed_when: false
 
     - name: Assert one leader exists
-      assert:
+      ansible.builtin.assert:
         that:
           - groups['ovn_nb_leader'] | default([]) | length == 1
 
@@ -47,23 +47,27 @@
     gateway_chassis_max_priority: "{{ ovn_nb_db_hosts_sorted | length }}"
   tasks:
     - name: Fix ha_chassis priorities
-      command: >-
-        docker exec -it ovn_nb_db
+      ansible.builtin.command: >-
+        docker exec ovn_nb_db
         bash -c '
         ovn-nbctl find ha_chassis chassis_name={{ item }} |
         awk '\''$1 == "_uuid" { print $3 }'\'' |
         while read uuid; do ovn-nbctl set ha_chassis $uuid priority={{ priority }}; done'
       loop: "{{ ovn_nb_db_hosts_sorted }}"
       vars:
         priority: "{{ ha_chassis_max_priority | int - ovn_nb_db_hosts_sorted.index(item) }}"
+      register: ha_chassis_command
+      changed_when: ha_chassis_command.rc == 0
 
     - name: Fix gateway_chassis priorities
-      command: >-
-        docker exec -it ovn_nb_db
+      ansible.builtin.command: >-
+        docker exec ovn_nb_db
         bash -c '
         ovn-nbctl find gateway_chassis chassis_name={{ item }} |
         awk '\''$1 == "_uuid" { print $3 }'\'' |
         while read uuid; do ovn-nbctl set gateway_chassis $uuid priority={{ priority }}; done'
       loop: "{{ ovn_nb_db_hosts_sorted }}"
       vars:
         priority: "{{ gateway_chassis_max_priority | int - ovn_nb_db_hosts_sorted.index(item) }}"
+      register: gateway_chassis_command
+      changed_when: gateway_chassis_command.rc == 0

etc/kayobe/ansible/requirements.yml

Lines changed: 4 additions & 0 deletions
@@ -2,6 +2,10 @@
 collections:
   - name: stackhpc.cephadm
     version: 1.14.0
+  # NOTE: Pinning pulp.squeezer to 0.0.13 because 0.0.14+ depends on the
+  # pulp_glue Python library being installed.
+  - name: pulp.squeezer
+    version: 0.0.13
   - name: stackhpc.pulp
     version: 0.5.2
   - name: stackhpc.hashicorp

etc/kayobe/ansible/wazuh-agent.yml

Lines changed: 32 additions & 0 deletions
@@ -5,3 +5,35 @@
   tasks:
     - import_role:
         name: "wazuh-ansible/wazuh-ansible/roles/wazuh/ansible-wazuh-agent"
+  post_tasks:
+    - name: Check if custom SCA policies directory exists
+      stat:
+        path: "{{ local_custom_sca_policies_path }}"
+      register: custom_sca_policies_folder
+      delegate_to: localhost
+
+    - name: Gather list of custom SCA policies
+      find:
+        paths: "{{ local_custom_sca_policies_path }}"
+        patterns: '*.yml'
+      delegate_to: localhost
+      register: custom_sca_policies
+      when: custom_sca_policies_folder.stat.exists
+
+    - name: Allow Wazuh agents to execute commands in SCA policies sent from the Wazuh manager
+      become: yes
+      blockinfile:
+        path: "/var/ossec/etc/local_internal_options.conf"
+        state: present
+        owner: wazuh
+        group: wazuh
+        block: sca.remote_commands=1
+      when: custom_sca_policies.files | length > 0
+      notify:
+        - Restart wazuh-agent
+
+  handlers:
+    - name: Restart wazuh-agent
+      service:
+        name: wazuh-agent
+        state: restarted

etc/kayobe/ansible/wazuh-manager.yml

Lines changed: 1 addition & 11 deletions
@@ -65,16 +65,7 @@
       delegate_to: localhost
       register: custom_sca_policies
       when: custom_sca_policies_folder.stat.exists
-
-    - name: Allow Wazuh agents to execute commands in SCA policies sent from the Wazuh manager
-      blockinfile:
-        path: "/var/ossec/etc/local_internal_options.conf"
-        state: present
-        owner: wazuh
-        group: wazuh
-        block: |
-          sca.remote_commands=1
-      when: custom_sca_policies.files | length > 0
+      become: no
 
     - name: Copy custom SCA policy files to Wazuh manager
       copy:
@@ -124,7 +115,6 @@
     - name: Perform health check against filebeat
       command: filebeat test output
       changed_when: false
-      become: true
       retries: 2
 
 handlers:

etc/kayobe/bifrost.yml

Lines changed: 2 additions & 2 deletions
@@ -5,11 +5,11 @@
 # Bifrost installation.
 
 # URL of Bifrost source code repository.
-kolla_bifrost_source_url: "{{ stackhpc_bifrost_source_url }}"
+#kolla_bifrost_source_url:
 
 # Version (branch, tag, etc.) of Bifrost source code repository. Default is
 # {{ openstack_branch }}.
-kolla_bifrost_source_version: "{{ stackhpc_bifrost_source_version }}"
+#kolla_bifrost_source_version:
 
 # Whether Bifrost uses firewalld. Default value is false to avoid conflicting
 # with iptables rules configured on the seed host by Kayobe.

etc/kayobe/cephadm.yml

Lines changed: 8 additions & 2 deletions
@@ -86,8 +86,14 @@ cephadm_cluster_network: "{{ storage_mgmt_net_name | net_cidr }}"
 # stackhpc.cephadm.commands for format. Pre commands run before the rest of the
 # post-deployment configuration, post commands run after the rest of the
 # post-deployment configuration.
-#cephadm_commands_pre:
-#cephadm_commands_post:
+cephadm_commands_pre: "{{ cephadm_commands_pre_default + cephadm_commands_pre_extra }}"
+cephadm_commands_post: "{{ cephadm_commands_post_default + cephadm_commands_post_extra }}"
+
+cephadm_commands_pre_default: []
+cephadm_commands_pre_extra: []
+
+cephadm_commands_post_default: "{{ ['mgr module enable prometheus'] if kolla_enable_prometheus_ceph_mgr_exporter | bool else [] }}"
+cephadm_commands_post_extra: []
 
 ###############################################################################
 # Kolla Ceph auto-configuration.
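Splitting ``cephadm_commands_pre``/``cephadm_commands_post`` into ``_default`` and ``_extra`` lists means site config can append commands without clobbering the defaults (such as the conditional Prometheus mgr module command). A sketch, assuming an override file that appends two of the RGW commands documented in cephadm.rst:

```yaml
# Sketch: append site-specific post-deployment commands via the new
# _extra list, leaving cephadm_commands_post_default untouched.
cephadm_commands_post_extra:
  - "config set client.rgw rgw_s3_auth_use_keystone true"
  - "config set client.rgw rgw_swift_account_in_url true"
```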
Lines changed: 3 additions & 0 deletions
@@ -0,0 +1,3 @@
+---
+# Ansible custom SCA policies directory
+local_custom_sca_policies_path: "{{ kayobe_env_config_path }}/wazuh/custom_sca_policies"
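Moving ``local_custom_sca_policies_path`` out of the wazuh-manager group vars into this shared file lets both the wazuh-agent and wazuh-manager playbooks resolve it. A hypothetical per-environment override, with an assumed alternative directory name:

```yaml
# Hypothetical override in an environment's config, pointing the shared
# SCA policies variable at a different directory (path is an example).
local_custom_sca_policies_path: "{{ kayobe_env_config_path }}/wazuh/site_sca_policies"
```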

etc/kayobe/inventory/group_vars/wazuh-manager/wazuh-manager

Lines changed: 0 additions & 3 deletions
@@ -21,9 +21,6 @@ indexer_node_master: true
 # Ansible control host certificate directory
 local_certs_path: "{{ kayobe_env_config_path }}/wazuh"
 
-# Ansible custom SCA policies directory
-local_custom_sca_policies_path: "{{ kayobe_env_config_path }}/wazuh/custom_sca_policies"
-
 # Indexer variables
 indexer_node_name: "{{ inventory_hostname }}"
