2 files changed: +36, -5 lines

@@ -16,11 +16,6 @@ Ceph Storage
 The Ceph deployment is not managed by StackHPC Ltd.

-Troubleshooting
-===============
-
-.. include:: include/ceph_troubleshooting.rst
-
 Working with Ceph deployment tool
 =================================
@@ -31,3 +26,13 @@ Working with Ceph deployment tool

 .. ifconfig:: deployment['cephadm']

    .. include:: include/cephadm.rst
+
+Operations
+==========
+
+.. include:: include/ceph_operations.rst
+
+Troubleshooting
+===============
+
+.. include:: include/ceph_troubleshooting.rst
New file: include/ceph_operations.rst

Replacing drive
---------------

See upstream documentation:
https://docs.ceph.com/en/quincy/cephadm/services/osd/#replacing-an-osd

If the disk holding an OSD's DB and/or WAL fails (usually an NVMe drive),
it is necessary to recreate, using the replacement procedure above, all of
the OSDs associated with that disk. The following command is sufficient to
identify which OSDs are tied to which physical disks:

.. code-block:: console

   ceph# ceph device ls
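Once the affected OSDs are identified, the upstream replacement procedure can be sketched as below. The OSD ID (``12``) is illustrative; substitute the IDs reported for the failed device:

.. code-block:: console

   ceph# ceph orch osd rm 12 --replace
   ceph# ceph orch osd rm status

The ``--replace`` flag marks the OSD as ``destroyed`` rather than removing it outright, so its ID is preserved and cephadm can redeploy onto the replacement drive.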
Host maintenance
----------------

See upstream documentation:
https://docs.ceph.com/en/quincy/cephadm/host-management/#maintenance-mode
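As a sketch of the upstream procedure, a host is placed into and taken out of maintenance mode as follows (``host1`` is a placeholder hostname):

.. code-block:: console

   ceph# ceph orch host maintenance enter host1
   ceph# ceph orch host maintenance exit host1

Entering maintenance mode stops all Ceph daemons on the host; verify the cluster is healthy enough to tolerate this before proceeding.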
Upgrading
---------

See upstream documentation:
https://docs.ceph.com/en/quincy/cephadm/upgrade/
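The cephadm-managed upgrade described upstream can be started and monitored roughly as follows (the target version shown is illustrative):

.. code-block:: console

   ceph# ceph orch upgrade start --ceph-version 17.2.6
   ceph# ceph orch upgrade status

cephadm upgrades daemons in a safe order and will pause if the cluster becomes unhealthy; check ``ceph -s`` and the upgrade status output throughout.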