.. _recover-om-appdb-deployments:

===========================================================================
Recover the |k8s-op-short| and |onprem| for Multi-Cluster AppDB Deployments
===========================================================================

.. default-domain:: mongodb

.. contents:: On this page
   :local:
   :backlinks: none
   :depth: 1
   :class: singlecol

If you host an |onprem| resource in the same |k8s| cluster as
the |k8s-op-short| and have the Application Database (AppDB)
deployed on selected member clusters in your |multi-cluster|,
you can manually recover the |k8s-op-short| and |onprem|
in the event that the cluster fails.

To learn more about deploying |onprem| on a central
cluster and the Application Database across member clusters,
see :ref:`om_with_multi-clusters`.

Prerequisites
-------------

Before you can recover the |k8s-op-short| and |onprem|, ensure
that you meet the following requirements:

- Configure backups for your |onprem| and Application Database
  resources, including any |k8s-configmaps| and |k8s-secrets| created
  by the |k8s-op-short|, so that they capture the previous running
  state of |onprem|. To learn more, see :ref:`om-rsrc-backup`.

- Ensure that the Application Database has at least three healthy
  nodes remaining after the failure of the |k8s-op-short|'s cluster.

- Ensure that the healthy clusters in your |multi-cluster| contain
  enough members to elect a primary node. To learn more, see
  :ref:`appdb-architecture`.

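As a minimal sketch of the backup prerequisite, you could export the
|k8s-configmaps| and |k8s-secrets| from the |k8s-op-short|'s namespace
with ``kubectl``. The namespace ``mongodb`` and the output file names
are illustrative placeholders; substitute your own:

.. code-block:: sh

   # Export all ConfigMaps and Secrets in the (assumed) "mongodb"
   # namespace so that you can re-apply them on a new cluster
   # during recovery.
   kubectl get configmaps -n mongodb -o yaml > configmaps-backup.yaml
   kubectl get secrets -n mongodb -o yaml > secrets-backup.yaml
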
Considerations
--------------

.. _appdb-architecture:

Application Database Architecture
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Because the |k8s-op-short| doesn't support forcing a replica set
reconfiguration, this manual recovery process requires that the healthy
|k8s| clusters contain enough Application Database members to elect a
primary node. A majority of the Application Database's members must be
available to elect a primary. To learn more, see
:manual:`Replica Set Deployment Architectures </core/replica-set-architectures/>`.
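
The majority requirement can be sketched numerically. This helper is
illustrative only and isn't part of the |k8s-op-short|:

.. code-block:: sh

   # Minimum number of voting members needed to elect a primary:
   # floor(n / 2) + 1 for an n-member replica set.
   majority() { echo $(( $1 / 2 + 1 )); }

   majority 5   # prints 3: losing two members still permits an election
   majority 7   # prints 4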

If possible, use an odd number of member |k8s| clusters. Proper
distribution of your Application Database members can help to maximize
the likelihood that the remaining replica set members can form a
majority during an outage. To learn more, see
:manual:`Replica Sets Distributed Across Two or More Data Centers
</core/replica-set-architecture-geographically-distributed/>`.

Consider the following examples:

.. tabs::

   .. tab:: Five-member Application Database
      :tabid: five-member

      For a five-member Application Database, possible distributions
      of members include:

      - Two clusters: three members on Cluster 1 and two members on
        Cluster 2.

        - If Cluster 2 fails, Cluster 1 has enough members to elect a
          primary node.
        - If Cluster 1 fails, Cluster 2 doesn't have enough members to
          elect a primary node.

      - Three clusters: two members on Cluster 1, two members on
        Cluster 2, and one member on Cluster 3.

        - If any single cluster fails, the remaining clusters have
          enough members to elect a primary node.
        - If any two clusters fail, the remaining cluster doesn't have
          enough members to elect a primary node.

   .. tab:: Seven-member Application Database
      :tabid: seven-member

      For a seven-member Application Database, consider the following
      distribution of members:

      - Two clusters: four members on Cluster 1 and three members on
        Cluster 2.

        - If Cluster 2 fails, Cluster 1 has enough members to elect a
          primary node.
        - If Cluster 1 fails, Cluster 2 doesn't have enough members to
          elect a primary node.

          Although Cluster 2 meets the three-member minimum for the
          Application Database, a majority of the Application
          Database's seven members must be available to elect a
          primary node.

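The three-cluster distribution from the five-member example might look
like the following fragment of an |onprem| resource with a
multi-cluster Application Database. The field names follow the
multi-cluster configuration described in :ref:`om_with_multi-clusters`;
the cluster names are placeholders:

.. code-block:: yaml

   # Illustrative fragment only: five Application Database members
   # spread 2-2-1 across three member clusters, so that losing any
   # single cluster still leaves a voting majority (at least 3 of 5).
   applicationDatabase:
     topology: MultiCluster
     clusterSpecList:
       - clusterName: cluster-1.example.com   # placeholder
         members: 2
       - clusterName: cluster-2.example.com   # placeholder
         members: 2
       - clusterName: cluster-3.example.com   # placeholder
         members: 1
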
------------

Procedure
---------

To recover the |k8s-op-short| and |onprem|, restore the |onprem|
resource on a new |k8s| cluster:

.. include:: /includes/steps/recover-k8s-om-multi-appdb-deployments.rst