
Commit c8870b2

Author: Sam Kleinman (committed)
DOCS-240: replication review counter-edits
1 parent b0074e5 commit c8870b2

3 files changed: +60 -60 lines changed

source/administration/replica-sets.txt

Lines changed: 26 additions & 26 deletions
@@ -4,11 +4,12 @@ Replica Set Administration
 
 .. default-domain:: mongodb
 
-:term:`Replica sets <replica set>` automate the vast majority of the
-administrative tasks associated with database replication and
-management. However, administrators must perform some tasks manually.
-This document gives an overview of those tasks. This document also
-provides troubleshooting suggestions.
+:term:`Replica sets <replica set>` automate most
+administrative tasks associated with database replication.
+However, some tasks related to deployment and system management still
+require administrator intervention. This document provides an overview
+of those tasks as well as a collection of troubleshooting
+suggestions for administrators of replica sets.
 
 .. seealso::

@@ -39,10 +40,10 @@ the following to prepare the new member's :term:`data directory <dbpath>`:
 - Make sure the new member's data directory *does not* contain data. The
 new member will copy the data directory from an existing member.
 
-If the new member is in ":term:`recovering`" status, it must exit
-recovering status and become a :term:`secondary` member before MongoDB
-can copy all data as part of the replication process. This process can
-be time intensive but does not require administrator intervention.
+If the new member is in a ":term:`recovering`" state, it must
+become a :term:`secondary` before MongoDB
+can copy all data as part of the replication process. This process
+takes time but does not require administrator intervention.
 
 - Manually copy the data directory from an existing member. The new
 member becomes a secondary member and will catch up to the current
@@ -51,7 +52,7 @@ the following to prepare the new member's :term:`data directory <dbpath>`:
 current.
 
 Ensure that you can copy the data directory to the new member and
-begin replication within the window allowed by the oplog. If the
+begin replication within the :ref:`window allowed by the oplog <replica-set-oplog-sizing>`. If the
 difference in the amount of time between the most recent operation and
 the most recent operation to the database exceeds the length of the
 :term:`oplog` on the existing members, then the new instance will have
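Preparing the data directory is only half of the task described here; the new member must also be added to the set's configuration. A minimal mongo shell sketch of that step (the hostname and port below are placeholders, not values from this commit):

.. code-block:: javascript

   // Run against the current primary; the hostname is an example only.
   rs.add("mongodb3.example.net:27017")

   // Confirm that the new member progresses from RECOVERING to SECONDARY.
   rs.status()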
@@ -168,12 +169,12 @@ There are two processes for replacing a member of a :term:`replica set`:
 .. warning::
 
 Any replica set configuration change can trigger the current
-:term:`primary` to step down, forcing an :term:`election`. This
-causes the current shell session to produce an error even when the
-operation succeeds. Also, clients connected to this replica set will
-disconnect.
+:term:`primary` to step down, which forces an :term:`election`. This
+causes the current shell session, and clients connected to this replica set,
+to produce an error even when the operation succeeds.
 
 .. _replica-set-node-priority-configuration:
+.. _replica-set-member-priority-configuration:
 
 Adjusting Priority
 ~~~~~~~~~~~~~~~~~~~~
@@ -205,8 +206,7 @@ elections. :ref:`Hidden members <replica-set-hidden-members>`,
 :ref:`arbiters <replica-set-arbiters>` all have :data:`members[n].priority`
 set to ``0``.
 
-Unless configured otherwise, all members have a :data:`members[n].priority`
-set to ``1``.
+All members have a :data:`members[n].priority` equal to ``1`` by default.
 
 The value of :data:`members[n].priority` can be any floating point
 (i.e. decimal) number between ``0`` and ``1000``. Priorities
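As a concrete illustration of the priority adjustment this section describes, a minimal mongo shell sketch (the member index and the value ``0.5`` are arbitrary examples):

.. code-block:: javascript

   cfg = rs.conf()                 // fetch the current replica set configuration
   cfg.members[1].priority = 0.5   // example: lower the second member's priority
   rs.reconfig(cfg)                // apply the change; this may trigger an election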
@@ -243,31 +243,32 @@ cases the :ref:`default oplog size <replica-set-oplog-sizing>` is an
 acceptable size; however, in some situations you may need a larger or
 smaller oplog. To resize the oplog, follow these steps:
 
-1. Restart the current :term:`primary` instance in the :term:`replica set` in
+#. Restart the current :term:`primary` instance in the :term:`replica set` in
 "standalone" mode, running on a different port.
 
-2. Save the last entry from the old (current) oplog and create a
+#. Save the last entry from the old (current) oplog and create a
 backup of the oplog.
 
-3. Drop the old oplog and create a new oplog of a different size.
+#. Drop the existing oplog and create a new oplog of a different size.
 
-4. Insert the previously saved last entry from the old oplog into the
+#. Insert the previously saved last entry from the old oplog into the
 new oplog.
 
-5. Restart the server as a member of the replica set on its usual
+#. Restart the server as a member of the replica set on its usual
 port.
 
-6. Apply this procedure to any other member of the replica set that
+#. Apply this procedure to any other member of the replica set that
 *could become* primary.
 
 .. seealso:: The ":doc:`/tutorial/change-oplog-size`" tutorial.
 
 .. _replica-set-node-configurations:
+.. _replica-set-member-configurations:
 
 Member Configurations
 ---------------------
 
-All :term:`replica set's` have a single :term:`primary` and one or more
+All :term:`replica sets <replica set>` have a single :term:`primary` and one or more
 :term:`secondaries <secondary>`. Replica sets allow you to configure
 secondary members in a variety of ways. This section describes these
 configurations.
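The numbered procedure above corresponds, very roughly, to shell operations like the following. This is only a sketch; the 2 GB size is an example, and the ":doc:`/tutorial/change-oplog-size`" tutorial referenced above remains the authoritative walk-through:

.. code-block:: javascript

   // Run against the member after restarting it as a standalone on a temporary port.
   var local = db.getSiblingDB("local")

   // Step 2: save the newest oplog entry so replication can resume from a known point.
   var last = local.oplog.rs.find().sort({ $natural: -1 }).limit(1).next()
   local.temp.save(last)

   // Step 3: drop the old oplog and recreate it as a capped collection of the new size.
   local.oplog.rs.drop()
   local.runCommand({ create: "oplog.rs", capped: true, size: 2 * 1024 * 1024 * 1024 })

   // Step 4: re-insert the saved entry into the new oplog.
   local.oplog.rs.save(local.temp.findOne())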
@@ -345,7 +346,7 @@ operations in the :program:`mongo` shell:
 rs.reconfig(cfg)
 
 After re-configuring the set, the member with the ``_id`` of ``0``
-has a priority of ``0`` so that it cannot become master. The
+has a priority of ``0`` so that it cannot become primary. The
 other members in the set will not advertise the hidden member in the
 :dbcommand:`isMaster` or :func:`db.isMaster()` output.
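The surrounding reconfiguration is largely outside this hunk; as a hedged sketch of the kind of ``cfg`` manipulation the passage refers to (the field values are illustrative, not taken from the source file):

.. code-block:: javascript

   cfg = rs.conf()
   cfg.members[0].priority = 0   // the member with _id 0 can no longer become primary
   cfg.members[0].hidden = true  // and is not advertised in isMaster output
   rs.reconfig(cfg)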

@@ -479,8 +480,7 @@ shared key file that serves as a shared password.
 
 .. versionadded:: 1.8 for replica sets (1.9.1 for sharded replica sets) added support for authentication.
 
-To enable authentication using a key file for the replica set,
-add the following option to your configuration file:
+To enable authentication, add the following option to your configuration file:
 
 .. code-block:: cfg
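The option itself lies beyond the hunk boundary. For orientation only, a key-file setting in the ini-style configuration format of this era typically looks like the following (the path is an example):

.. code-block:: cfg

   keyFile = /srv/mongodb/keyfile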

source/administration/replication-architectures.txt

Lines changed: 29 additions & 29 deletions
@@ -4,27 +4,28 @@ Replication Architectures
 
 .. default-domain:: mongodb
 
-There is no single :term:`replica set` architecture that is ideal for
+There is no single ideal :term:`replica set` architecture for
 every deployment or environment. Indeed the flexibility of replica sets
 might be their greatest strength. This document describes the most
 commonly used deployment patterns for replica sets. The descriptions
-are necessarily not mutually exclusive and in some cases can be combined.
+are necessarily not mutually exclusive, and you can combine features
+of each architecture in your own deployment.
 
 .. seealso:: :doc:`/administration/replica-sets` and
-:doc:`/reference/replica-configuration`
+:doc:`/reference/replica-configuration`.
 
-Three-Member Sets
-------------------
+Three Member Sets
+-----------------
 
-The minimum *recommended* architecture for a replica set consists of
+The minimum *recommended* architecture for a replica set consists of:
 
-- One :term:`primary <primary>`
+- One :term:`primary <primary>` and
 
 - Two :term:`secondary <secondary>` members, either of which can become
 the primary at any time.
 
 This makes :ref:`failover <replica-set-failover>` possible and ensures
-there exists two full, independent copies of the data set at all
+there exist two full and independent copies of the data set at all
 times. If the primary fails, the replica set elects another member as
 primary and continues replication until the primary recovers.
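A hypothetical initiation of such a three member set from the mongo shell (the set name and hostnames are placeholders):

.. code-block:: javascript

   rs.initiate({
     _id: "rs0",
     members: [
       { _id: 0, host: "mongodb0.example.net:27017" },
       { _id: 1, host: "mongodb1.example.net:27017" },
       { _id: 2, host: "mongodb2.example.net:27017" }
     ]
   })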

@@ -58,8 +59,8 @@ conditions are true:
 - Members that cannot function as primaries in a :term:`failover`
 have their :data:`priority <members[n].priority>` values set to ``0``.
 
-If a member cannot function as a primary, for example because of
-resource constraints, a :data:`priority <members[n].priority>` value
+If a member cannot function as a primary because of
+resource or network latency constraints, a :data:`priority <members[n].priority>` value
 of ``0`` prevents it from being a primary. Any member with a
 ``priority`` value greater than ``0`` is available to be a primary.

@@ -73,9 +74,7 @@ Geographically Distributed Sets
 -------------------------------
 
 A geographically distributed replica set provides data recovery should
-the primary data center fail.
-
-A geographically distributed set includes at least one member in a
+one data center fail. These sets include at least one member in a
 secondary data center. The member has its :data:`priority
 <members[n].priority>` :ref:`set <replica-set-reconfiguration-usage>` to
 ``0`` to prevent the member from ever becoming primary.
@@ -89,10 +88,10 @@ In many circumstances, these deployments consist of the following:
 This member can become the primary member at any time.
 
 - One secondary member in a secondary data center. This member is
-ineligible to become primary. Its :data:`members[n].priority` value is
-set to ``0``.
+ineligible to become primary. Set its :data:`members[n].priority` to
+``0``.
 
-If the primary member should fail, the replica set elects a new primary
+If the primary is unavailable, the replica set will elect a new primary
 from the primary data center.
 
 If the *connection* between the primary and secondary data centers fails,
@@ -104,8 +103,8 @@ from the secondary data center. With proper :term:`write concern` there
 will be no data loss and downtime can be minimal.
 
 When you add a secondary data center, make sure to keep an odd number of
-members overall to prevent ties during elections for primary. This can
-be done by deploying an :ref:`arbiter <replica-set-arbiters>` in your
+members overall to prevent ties during elections for primary by
+deploying an :ref:`arbiter <replica-set-arbiters>` in your
 primary data center. For example, if you have three members in the
 primary data center and add a member in a secondary center, you create
 an even number. To create an odd number and prevent ties, deploy an
@@ -175,18 +174,19 @@ Delayed Replication
 ~~~~~~~~~~~~~~~~~~~
 
 :term:`Delayed members <delayed member>` are special :program:`mongod`
-instances in a :term:`replica set` that function the same way as
-:term:`secondary` members with the following operational differences:
-they are not eligible for election to primary and do not receive
-secondary queries. Delayed members *do* vote in :term:`elections
-<election>` for primary.
-
-Delayed members apply operations from the :term:`oplog` on a delay to
+instances in a :term:`replica set` that
+apply operations from the :term:`oplog` on a delay to
 provide a running "historical" snapshot of the data set, or a rolling
 backup. Typically these members provide protection against human error,
 such as unintentionally deleted databases and collections or failed
 application upgrades or migrations.
 
+Otherwise, delayed members function identically to
+:term:`secondary` members, with the following operational differences:
+they are not eligible for election to primary and do not receive
+secondary queries. Delayed members *do* vote in :term:`elections
+<election>` for primary.
+
 See :ref:`Replica Set Delayed Nodes <replica-set-delayed-members>` for
 more information about configuring delayed replica set members.
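As a sketch of how a delayed member might be configured in this era of the documentation (the member index and one-hour delay are examples; ``slaveDelay`` was the configuration field name at the time):

.. code-block:: javascript

   cfg = rs.conf()
   cfg.members[2].priority = 0       // a delayed member must not become primary
   cfg.members[2].hidden = true      // typically hidden from clients as well
   cfg.members[2].slaveDelay = 3600  // stay one hour behind the primary
   rs.reconfig(cfg)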

@@ -237,8 +237,8 @@ primary *and* a quorum of voting members in the main facility.
 
 .. _replica-set-arbiter-nodes:
 
-Arbiter Nodes
--------------
+Arbiters
+--------
 
 Always deploy an :term:`arbiter` to ensure that a replica set will have
 a sufficient number of members to elect a :term:`primary`. While having
@@ -258,6 +258,6 @@ resource requirements and do not require dedicated hardware. Do not add
 an arbiter to a set if you have an odd number of voting members that hold
 data, to prevent tied :term:`elections <election>`.
 
-.. seealso:: :ref:`Arbiter Nodes <replica-set-arbiters>`,
+.. seealso:: :ref:`Arbiters <replica-set-arbiters>`,
 :setting:`replSet`, :option:`mongod --replSet`, and
-:func:`rs.addArb()`.
+:func:`rs.addArb()`.
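For reference, adding an arbiter with the helper named in the ``seealso`` above (the hostname is a placeholder):

.. code-block:: javascript

   rs.addArb("mongodb-arbiter.example.net:27017")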

source/core/replication-internals.txt

Lines changed: 5 additions & 5 deletions
@@ -45,7 +45,7 @@ database remains consistent. However, clients may modify the
 :ref:`read preferences <replica-set-read-preference>` on a
 per-connection basis in order to distribute read operations to the
 :term:`secondary` members of a :term:`replica set`. Read-heavy deployments may achieve
-greater query volumes by distributing reads to secondary members. But
+greater query throughput by distributing reads to secondary members. But
 keep in mind that replication is asynchronous; therefore, reads from
 secondaries may not always reflect the latest writes to the
 :term:`primary`. See the :ref:`consistency <replica-set-consistency>`
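In the shell of this era, per-connection reads from secondaries were typically enabled with ``slaveOk``; a minimal sketch (the collection and query are illustrative):

.. code-block:: javascript

   // Allow this connection to read from secondary members.
   db.getMongo().setSlaveOk()
   db.records.find({ status: "active" })   // may now be served by a secondary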
@@ -146,7 +146,7 @@ conditions:
 
 - If the member seeking an election is not a member of the voter's set.
 
-- If the member seeking an election is not up-to_date with the most
+- If the member seeking an election is not up-to-date with the most
 recent operation accessible in the replica set.
 
 - If the member seeking an election has a lower priority than another member
@@ -194,17 +194,17 @@ aware of the following conditions and possible situations:
 
 
 .. seealso:: :ref:`Non-voting members in a replica
-set<replica-set-non-voting-members>`,
+set <replica-set-non-voting-members>`,
 :ref:`replica-set-node-priority-configuration`, and
-:data:`replica configuration <members[n].votes>`
+:data:`replica configuration <members[n].votes>`.
 
 Syncing
 -------
 
 In order to remain up-to-date with the current state of the :term:`replica set`,
 set members sync, or copy, :term:`oplog` entries from other members.
 
-When a new member joins a set or when an existing member restarts, the
+When a new member joins a set or an existing member restarts, the
 member waits to receive heartbeats from other members. By
 default, the member syncs from the *closest* member of the
 set that is either the primary or another secondary with more recent
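Tying the ``members[n].votes`` reference above to a concrete, purely illustrative reconfiguration, a member can be made non-voting like so:

.. code-block:: javascript

   cfg = rs.conf()
   cfg.members[3].votes = 0   // example: the fourth member no longer votes in elections
   rs.reconfig(cfg)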
