
Commit c75cb05

Restructure Ceph paragraphs
1 parent 1259256 commit c75cb05

2 files changed: +36 -5 lines changed


source/ceph_storage.rst

Lines changed: 10 additions & 5 deletions
@@ -16,11 +16,6 @@ Ceph Storage
 
 The Ceph deployment is not managed by StackHPC Ltd.
 
-Troubleshooting
-===============
-
-.. include:: include/ceph_troubleshooting.rst
-
 Working with Ceph deployment tool
 =================================
 
@@ -31,3 +26,13 @@ Working with Ceph deployment tool
 .. ifconfig:: deployment['cephadm']
 
    .. include:: include/cephadm.rst
+
+Operations
+==========
+
+.. include:: include/ceph_operations.rst
+
+Troubleshooting
+===============
+
+.. include:: include/ceph_troubleshooting.rst

source/include/ceph_operations.rst

Lines changed: 26 additions & 0 deletions
New file:

Replacing drive
---------------

See upstream documentation:
https://docs.ceph.com/en/quincy/cephadm/services/osd/#replacing-an-osd

If the disk holding the DB and/or WAL fails, it is necessary to recreate
(using the replacement procedure above) all OSDs that are associated with
this disk - usually an NVMe drive. The following command is sufficient to
identify which OSDs are tied to which physical disks:

.. code-block:: console

   ceph# ceph device ls
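To complement the upstream link in the Replacing drive section, a minimal sketch of the replacement flow for a single OSD; the OSD id ``11``, host ``storage-01`` and device path ``/dev/nvme0n1`` are hypothetical placeholders, not values taken from this commit.

.. code-block:: console

   # Map OSDs to physical drives, then confirm which devices back the failed OSD
   ceph# ceph device ls
   ceph# ceph osd metadata 11 | grep devices

   # Drain and remove the OSD while reserving its id for the replacement disk
   ceph# ceph orch osd rm 11 --replace
   ceph# ceph orch osd rm status

   # After the new drive is inserted, wipe it so the orchestrator can redeploy
   ceph# ceph orch device zap storage-01 /dev/nvme0n1 --force

When a shared DB/WAL device fails, repeat the ``ceph orch osd rm <id> --replace`` step for every OSD that ``ceph device ls`` reports against that device.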
Host maintenance
----------------

https://docs.ceph.com/en/quincy/cephadm/host-management/#maintenance-mode
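A minimal sketch of the maintenance-mode workflow documented at the link above; the hostname ``storage-01`` is a hypothetical placeholder.

.. code-block:: console

   # Stop the Ceph daemons on the host before planned work
   ceph# ceph orch host maintenance enter storage-01

   # Return the host to service once the work is finished
   ceph# ceph orch host maintenance exit storage-01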
Upgrading
---------

https://docs.ceph.com/en/quincy/cephadm/upgrade/
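A sketch of a cephadm-managed rolling upgrade as documented at the link above; the target release ``17.2.6`` is only an example value.

.. code-block:: console

   # Verify the cluster is healthy, then start the rolling upgrade
   ceph# ceph -s
   ceph# ceph orch upgrade start --ceph-version 17.2.6

   # Monitor progress; pause, resume or stop the upgrade if needed
   ceph# ceph orch upgrade status
   ceph# ceph orch upgrade pause
   ceph# ceph orch upgrade resume
   ceph# ceph orch upgrade stop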
