title: "Download the latest |oldversion| binaries."
level: 4
ref: 4.2-downgrade-binaries-sharded-cluster
content: |
  Using either a package manager or a manual download, get the latest
  release in the |oldversion| series. If using a package manager, add a new
  repository for the |oldversion| binaries, then perform the actual downgrade
  process.

  .. include:: /includes/downgrade-path.rst
---
title: "Disable the balancer."
level: 4
ref: disable-balancer
content: |
  Connect a :binary:`~bin.mongo` shell to a :binary:`~bin.mongos` instance in
  the sharded cluster, and run :method:`sh.stopBalancer()` to
  disable the balancer:

  .. code-block:: javascript

     sh.stopBalancer()

  .. note::

     If a migration is in progress, the system will complete the
     in-progress migration before stopping the balancer. You can run
     :method:`sh.isBalancerRunning()` to check the balancer's current
     state.
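
     For example:

     .. code-block:: javascript

        sh.isBalancerRunning()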

  To verify that the balancer is disabled, run
  :method:`sh.getBalancerState()`, which returns ``false`` if the balancer
  is disabled:

  .. code-block:: javascript

     sh.getBalancerState()

  For more information on disabling the balancer, see
  :ref:`sharding-balancing-disable-temporarily`.

---
title: "Downgrade the ``mongos`` instances."
level: 4
ref: downgrade-mongos
content: |
  Shut down each :binary:`~bin.mongos` instance, replace the |newversion|
  binary with the |oldversion| binary, and restart.
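
  For example (a sketch; substitute your deployment's config server
  replica set name, hosts, and any other options, e.g. ``--bind_ip``):

  .. code-block:: sh

     mongos --configdb <configReplSetName>/<cfgHost1:port>,<cfgHost2:port>,<cfgHost3:port> --bind_ip localhost,<ip address>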
---
title: "Downgrade each shard, one at a time."
level: 4
ref: 4.2-downgrade-shard
content: |
  Downgrade the shards one at a time. If the shards are replica sets,
  for each shard:

  1. Downgrade the :ref:`secondary <replica-set-secondary-members>`
     members of the replica set one at a time:

     a. Shut down the :binary:`~bin.mongod` instance and replace the |newversion|
        binary with the |oldversion| binary.

     #. Start the |oldversion| binary with the ``--shardsvr`` and
        ``--port`` command line options. Include any other
        configuration as appropriate for your deployment, e.g.
        ``--bind_ip``.

        .. code-block:: sh

           mongod --shardsvr --port <port> --dbpath <path> --bind_ip localhost,<ip address>

        Or, if using a :doc:`configuration file
        </reference/configuration-options>`, update the file to
        include :setting:`sharding.clusterRole: shardsvr
        <sharding.clusterRole>`, :setting:`net.port`, and any other
        configuration as appropriate for your deployment, e.g.
        :setting:`net.bindIp`, and start:

        .. code-block:: yaml

           sharding:
             clusterRole: shardsvr
           net:
             port: <port>
             bindIp: localhost,<ip address>
           storage:
             dbPath: <path>

     #. Wait for the member to recover to ``SECONDARY`` state before
        downgrading the next secondary member. To check the member's
        state, you can issue :method:`rs.status()` in the
        :binary:`~bin.mongo` shell.
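
        For example:

        .. code-block:: javascript

           rs.status()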

        Repeat for each secondary member.

  #. Step down the replica set primary.

     Connect a :binary:`~bin.mongo` shell to the primary and use
     :method:`rs.stepDown()` to step down the primary and force an
     election of a new primary:

     .. code-block:: javascript

        rs.stepDown()

  #. When :method:`rs.status()` shows that the primary has stepped
     down and another member has assumed ``PRIMARY`` state, downgrade
     the stepped-down primary:

     1. Shut down the stepped-down primary and replace the
        :binary:`~bin.mongod` binary with the |oldversion| binary.

     #. Start the |oldversion| binary with the ``--shardsvr`` and
        ``--port`` command line options. Include any other
        configuration as appropriate for your deployment, e.g.
        ``--bind_ip``.

        .. code-block:: sh

           mongod --shardsvr --port <port> --dbpath <path> --bind_ip localhost,<ip address>

        Or, if using a :doc:`configuration file
        </reference/configuration-options>`, update the file to
        include :setting:`sharding.clusterRole: shardsvr
        <sharding.clusterRole>`, :setting:`net.port`, and any other
        configuration as appropriate for your deployment, e.g.
        :setting:`net.bindIp`, and start the |oldversion| binary:

        .. code-block:: yaml

           sharding:
             clusterRole: shardsvr
           net:
             port: <port>
             bindIp: localhost,<ip address>
           storage:
             dbPath: <path>
---
title: "Downgrade the config servers."
level: 4
ref: 4.2-downgrade-config-servers
content: |-
  If the config servers are replica sets:

  1. Downgrade the :ref:`secondary <replica-set-secondary-members>`
     members of the replica set one at a time:

     a. Shut down the secondary :binary:`~bin.mongod` instance and replace
        the |newversion| binary with the |oldversion| binary.

     #. Start the |oldversion| binary with both the ``--configsvr`` and
        ``--port`` options. Include any other
        configuration as appropriate for your deployment, e.g.
        ``--bind_ip``.

        .. code-block:: sh

           mongod --configsvr --port <port> --dbpath <path> --bind_ip localhost,<ip address>

        If using a :doc:`configuration file
        </reference/configuration-options>`, update the file to
        specify :setting:`sharding.clusterRole: configsvr
        <sharding.clusterRole>`, :setting:`net.port`, and any other
        configuration as appropriate for your deployment, e.g.
        :setting:`net.bindIp`, and start the |oldversion| binary:

        .. code-block:: yaml

           sharding:
             clusterRole: configsvr
           net:
             port: <port>
             bindIp: localhost,<ip address>
           storage:
             dbPath: <path>

     #. Wait for the member to recover to ``SECONDARY`` state before
        downgrading the next secondary member. To check the member's state,
        issue :method:`rs.status()` in the :binary:`~bin.mongo` shell.

        Repeat for each secondary member.

  #. Step down the replica set primary.

     a. Connect a :binary:`~bin.mongo` shell to the primary and use
        :method:`rs.stepDown()` to step down the primary and force an
        election of a new primary:

        .. code-block:: javascript

           rs.stepDown()

     #. When :method:`rs.status()` shows that the primary has stepped
        down and another member has assumed ``PRIMARY`` state, shut down
        the stepped-down primary and replace the :binary:`~bin.mongod` binary
        with the |oldversion| binary.

     #. Start the |oldversion| binary with both the ``--configsvr`` and
        ``--port`` options. Include any other
        configuration as appropriate for your deployment, e.g.
        ``--bind_ip``.

        .. code-block:: sh

           mongod --configsvr --port <port> --dbpath <path> --bind_ip localhost,<ip address>

        If using a :doc:`configuration file
        </reference/configuration-options>`, update the file to
        specify :setting:`sharding.clusterRole: configsvr
        <sharding.clusterRole>`, :setting:`net.port`, and any other
        configuration as appropriate for your deployment, e.g.
        :setting:`net.bindIp`, and start the |oldversion| binary:

        .. code-block:: yaml

           sharding:
             clusterRole: configsvr
           net:
             port: <port>
             bindIp: localhost,<ip address>
           storage:
             dbPath: <path>
---
title: "Re-enable the balancer."
level: 4
ref: reenable-balancer
content: |
  Once the downgrade of sharded cluster components is
  complete, :ref:`re-enable the balancer <sharding-balancing-enable>`.
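
  For example, connect a :binary:`~bin.mongo` shell to a
  :binary:`~bin.mongos` in the cluster and run:

  .. code-block:: javascript

     sh.startBalancer()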
...