2023.1: Bump Ceph collection, add Ceph maintenance playbooks #1219

Merged · 1 commit · Aug 20, 2024
7 changes: 4 additions & 3 deletions doc/source/operations/upgrading-ceph.rst
@@ -63,7 +63,7 @@ Place the host or batch of hosts into maintenance mode:

.. code-block:: console

-    sudo cephadm shell -- ceph orch host maintenance enter <host>
+    kayobe playbook run $KAYOBE_CONFIG_PATH/ansible/ceph-enter-maintenance.yml -l <host>

To update all eligible packages, use ``*``, escaping if necessary:

@@ -72,7 +72,8 @@ To update all eligible packages, use ``*``, escaping if necessary:
kayobe overcloud host package update --packages "*" --limit <host>

If the kernel has been upgraded, reboot the host or batch of hosts to pick up
-the change:
+the change. While running this playbook, consider setting ``ANSIBLE_SERIAL`` to
+the maximum number of hosts that can safely reboot concurrently.

.. code-block:: console

@@ -82,7 +83,7 @@ Remove the host or batch of hosts from maintenance mode:

.. code-block:: console

-    sudo cephadm shell -- ceph orch host maintenance exit <host>
+    kayobe playbook run $KAYOBE_CONFIG_PATH/ansible/ceph-exit-maintenance.yml -l <host>

Wait for Ceph health to return to ``HEALTH_OK``:

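The health check can be polled rather than inspected manually. A minimal sketch, assuming cephadm is deployed on the host it runs from (not part of the diff):

```console
# Hypothetical watch loop: block until the cluster reports HEALTH_OK.
until sudo cephadm shell -- ceph health | grep -q HEALTH_OK; do
    sleep 30
done
```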
13 changes: 13 additions & 0 deletions etc/kayobe/ansible/ceph-enter-maintenance.yml
@@ -0,0 +1,13 @@
---
- name: Ensure a Ceph host has entered maintenance
gather_facts: true
any_errors_fatal: true
# We need to check whether it is OK to stop hosts after previous hosts have
# entered maintenance.
serial: 1
hosts: ceph
become: true
tasks:
- name: Ensure a Ceph host has entered maintenance
ansible.builtin.import_role:
name: stackhpc.cephadm.enter_maintenance
12 changes: 12 additions & 0 deletions etc/kayobe/ansible/ceph-exit-maintenance.yml
@@ -0,0 +1,12 @@
---
- name: Ensure a Ceph host has exited maintenance
gather_facts: true
any_errors_fatal: true
hosts: ceph
# The role currently requires hosts to exit maintenance serially.
serial: 1
become: true
tasks:
- name: Ensure a Ceph host has exited maintenance
ansible.builtin.import_role:
name: stackhpc.cephadm.exit_maintenance
2 changes: 1 addition & 1 deletion etc/kayobe/ansible/requirements.yml
@@ -1,7 +1,7 @@
---
collections:
- name: stackhpc.cephadm
-    version: 1.15.1
+    version: 1.18.0
# NOTE: Pinning pulp.squeezer to 0.0.13 because 0.0.14+ depends on the
# pulp_glue Python library being installed.
- name: pulp.squeezer
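After bumping the pin, the updated collection must be installed before the new playbooks can run. A sketch using plain ``ansible-galaxy``; Kayobe environments may instead refresh Galaxy dependencies through their usual bootstrap step, so treat the exact workflow as an assumption:

```console
# Reinstall the pinned Galaxy collections, forcing the upgrade to take effect.
ansible-galaxy collection install -r etc/kayobe/ansible/requirements.yml --force
```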
15 changes: 15 additions & 0 deletions releasenotes/notes/ceph-maintenance-4c4eb0a4f7149665.yaml
@@ -0,0 +1,15 @@
---
features:
- |
Adds two new custom playbooks for placing Ceph hosts into and removing them
from maintenance:

- ``ceph-enter-maintenance.yml``
- ``ceph-exit-maintenance.yml``
upgrade:
- |
Updates the ``stackhpc.cephadm`` collection to version ``1.18.0``.
fixes:
- |
Fixes an issue with idempotency in the ``stackhpc.ceph.cephadm_keys``
plugin.