Commit 0187d2b

Merge pull request #43 from stackhpc/more_ceph_updates
2 parents 5ccb185 + f420e47 commit 0187d2b

3 files changed (+42 / -8 lines)

source/ceph_storage.rst

Lines changed: 10 additions & 5 deletions
@@ -16,11 +16,6 @@ Ceph Storage
 
 The Ceph deployment is not managed by StackHPC Ltd.
 
-Troubleshooting
-===============
-
-.. include:: include/ceph_troubleshooting.rst
-
 Working with Ceph deployment tool
 =================================
 
@@ -31,3 +26,13 @@ Working with Ceph deployment tool
 .. ifconfig:: deployment['cephadm']
 
    .. include:: include/cephadm.rst
+
+Operations
+==========
+
+.. include:: include/ceph_operations.rst
+
+Troubleshooting
+===============
+
+.. include:: include/ceph_troubleshooting.rst

source/include/ceph_operations.rst

Lines changed: 26 additions & 0 deletions
@@ -0,0 +1,26 @@
+
+
+Replacing drive
+---------------
+
+See upstream documentation:
+https://docs.ceph.com/en/quincy/cephadm/services/osd/#replacing-an-osd
+
+In the case where a disk holding the DB and/or WAL fails, it is necessary to
+recreate (using the replacement procedure above) all OSDs that are associated
+with this disk - usually an NVMe drive. The following single command is
+sufficient to identify which OSDs are tied to which physical disks:
+
+.. code-block:: console
+
+   ceph# ceph device ls
+
+Host maintenance
+----------------
+
+https://docs.ceph.com/en/quincy/cephadm/host-management/#maintenance-mode
+
+Upgrading
+---------
+
+https://docs.ceph.com/en/quincy/cephadm/upgrade/
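
For reference, a minimal sketch of the drive-replacement, host-maintenance and
upgrade flows referenced in this new file, driven through the cephadm
orchestrator; the OSD id ``12``, host ``storage-0`` and device
``/dev/nvme0n1`` are placeholder values, not taken from this commit:

.. code-block:: console

   # Remove a failed OSD but keep its id reserved for the replacement drive
   ceph# ceph orch osd rm 12 --replace
   ceph# ceph orch osd rm status

   # Wipe the replacement device so the orchestrator can redeploy the OSD
   ceph# ceph orch device zap storage-0 /dev/nvme0n1 --force

   # Put a host into maintenance mode and bring it back afterwards
   ceph# ceph orch host maintenance enter storage-0
   ceph# ceph orch host maintenance exit storage-0

   # Start a managed upgrade to a specific Ceph release and watch its progress
   ceph# ceph orch upgrade start --ceph-version 17.2.6
   ceph# ceph orch upgrade status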

source/include/cephadm.rst

Lines changed: 6 additions & 3 deletions
@@ -34,14 +34,17 @@ cephadm based playbooks utilising stackhpc.cephadm Ansible Galaxy collection.
 Running Ceph commands
 =====================
 
-Ceph commands can be run via ``cephadm shell`` utility container:
+Ceph commands are usually run inside a ``cephadm shell`` utility container:
 
 .. code-block:: console
 
    ceph# cephadm shell
 
-This command will be only successful on ``mons`` group members (the admin key
-is copied only to those nodes).
+Operating a cluster requires a keyring with admin access to be available for
+Ceph commands. Cephadm will copy such a keyring to the nodes carrying the
+`_admin <https://docs.ceph.com/en/quincy/cephadm/host-management/#special-host-labels>`__
+label - present on MON servers by default when using the
+`StackHPC Cephadm collection <https://github.com/stackhpc/ansible-collection-cephadm>`__.
 
 Adding a new storage node
 =========================
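
For context, a minimal sketch of both points above - running a one-off command
through the utility container and granting another node admin access via the
``_admin`` label - assuming a placeholder host name ``storage-0``:

.. code-block:: console

   # Run a one-off Ceph command through the cephadm shell container
   ceph# cephadm shell -- ceph -s

   # Apply the _admin label so cephadm copies the admin keyring and
   # ceph.conf to the host, allowing Ceph commands to run there
   ceph# ceph orch host label add storage-0 _admin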
