@@ -77,7 +77,7 @@ MongoDB has the following features:
users to increase the potential amount of data to manage
with MongoDB and expand the :term:`working set`.

- A typical :term:`shard cluster` consists of config servers that
+ A typical :term:`shard cluster` consists of config servers which
store metadata that maps :term:`chunks <chunk>` to shards, the
:program:`mongod` instances which hold data (i.e. the :term:`shards
<shard>`), and lightweight routing processes, :doc:`mongos
@@ -89,7 +89,7 @@ Indications

While sharding is a powerful and compelling feature, it comes with
significant :ref:`infrastructure requirements <sharding-requirements>`
- and some limited complexity costs. As a result, its important to use
+ and some limited complexity costs. As a result, it's important to use
sharding only as necessary, and when indicated by actual operational
requirements. Consider the following overview of indications that it may be
time to consider sharding.
@@ -108,8 +108,8 @@ You should consider deploying a :term:`shard cluster`, if:

If these attributes are not present in your system, sharding will only
add complexity without providing much benefit.
- If you do plan to eventually partition your data, you should also
- give some thought to which collections you'll want to shard along with
+ If you plan to eventually partition your data, you should
+ consider which collections you will want to shard along with
the corresponding shard keys.
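As a sketch of this planning step, the intended shard keys can be written down per namespace before any cluster exists. The collection names and keys below are hypothetical examples, and the commented-out calls assume the PyMongo driver and a running :program:`mongos`; `enableSharding` and `shardCollection` are the admin commands that apply such a plan.

```python
# Hypothetical plan: each namespace ("<database>.<collection>")
# mapped to the shard key we intend to use for it.
planned_shard_keys = {
    "records.users": {"user_id": 1},            # high-cardinality field
    "records.events": {"day": 1, "source": 1},  # compound shard key
}

# With a live cluster, each entry could be applied via the admin
# commands (requires pymongo and a reachable mongos, so commented out):
#
#   from pymongo import MongoClient
#   client = MongoClient("mongos-host", 27017)
#   client.admin.command("enableSharding", "records")
#   for namespace, key in planned_shard_keys.items():
#       client.admin.command("shardCollection", namespace, key=key)

for namespace in planned_shard_keys:
    database, collection = namespace.split(".", 1)
    assert database and collection  # namespaces name both parts
```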

.. _sharding-capacity-planning:
@@ -122,7 +122,7 @@ the corresponding shard keys.
application.

As a result, if you think you're going to need sharding eventually,
- its crucial that you **do not** wait until your system is
+ it's critical that you **do not** wait until your system is
over capacity to enable sharding.

.. index:: sharding; requirements
@@ -143,7 +143,7 @@ A :term:`shard cluster` has the following components:
These special :program:`mongod` instances store the metadata for the
cluster. The :program:`mongos` instances cache this data and use it
to determine which :term:`shard` is responsible for which
- :term:`chunk`.
+ :term:`chunk`.

For testing purposes, you may deploy a shard cluster with a single
configuration server, but this is not recommended for production.
@@ -158,8 +158,8 @@ A :term:`shard cluster` has the following components:
These are "normal" :program:`mongod` instances that hold all of the
actual data for the cluster.

- Typically, a :term:`replica sets <replica set>`, consisting of
- multiple :program:`mongod` instances, compose a shard. The members
+ Typically, one or more :term:`replica sets <replica set>`, consisting of
+ multiple :program:`mongod` instances, compose a shard cluster. The members
of the replica set provide redundancy for the data and increase the
overall reliability and robustness of the cluster.
@@ -182,7 +182,7 @@ A :term:`shard cluster` has the following components:
resources, and you can run them on your application servers
without impacting application performance. However, if you use
the :term:`aggregation framework`, some processing may occur on
- the :program:`mongos` instances that causes them to require more
+ the :program:`mongos` instances which can cause them to require more
system resources.

Data
@@ -300,21 +300,30 @@ help produce one that is more ideal.
Config Servers
--------------

- The configuration servers store the shard metadata that tracks the
- relationship between the range that defines a :term:`chunk` and the
- :program:`mongod` instance (typically a :term:`replica set`) or
- :term:`shard` where that data resides. Without a config server, the
+ Config servers maintain the shard metadata in a config
+ database. The :term:`config database <config database>` stores
+ the relationship between :term:`chunks <chunk>` and where they reside
+ within a :term:`shard cluster`. Without a config database, the
:program:`mongos` instances would be unable to route queries or write
- operations within the cluster. This section describes their operation
- and use.
+ operations within the cluster.
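A minimal, illustrative sketch of the mapping this metadata encodes (the shard names and range boundaries below are invented, and real chunk documents carry more fields): each chunk covers a half-open range of shard key values and is owned by one shard, and :program:`mongos` resolves a key against its cached copy of this mapping.

```python
# Hypothetical chunk metadata: each chunk covers a half-open range
# [min, max) of shard key values and is assigned to exactly one shard.
chunks = [
    {"min": float("-inf"), "max": 100,          "shard": "shard0000"},
    {"min": 100,           "max": 200,          "shard": "shard0001"},
    {"min": 200,           "max": float("inf"), "shard": "shard0002"},
]

def route(shard_key_value):
    """Return the shard owning the chunk that contains this key value,
    analogous to mongos consulting its cached metadata."""
    for chunk in chunks:
        if chunk["min"] <= shard_key_value < chunk["max"]:
            return chunk["shard"]
    raise LookupError("no chunk covers %r" % (shard_key_value,))
```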

Config servers *do not* run as replica sets. Instead, a :term:`shard
- cluster` operates with a group of *three* config servers that use a
+ cluster` operates with a group of *three* config servers which use a
two-phase commit process that ensures immediate consistency and
- reliability. Because the :program:`mongos` instances all maintain
- caches of the config server data, the actual load on the config
- servers is small. MongoDB will write data to the config server only
- when:
+ reliability.
+
+ For testing purposes, you may deploy a shard cluster with a single
+ config server, but this is not recommended for production.
+
+ .. warning::
+
+    If you choose to run a single config server and it becomes
+    inoperable for any reason, the cluster will be unusable.
+
+ The actual load on configuration servers is small because each
+ :program:`mongos` instance maintains a cached copy of the configuration
+ database. MongoDB will write data to the config server only when:

- Creating splits in existing chunks, which happens as data in
  existing chunks exceeds the maximum chunk size.
@@ -344,10 +353,17 @@ Because the configuration data is small relative to the amount of data
stored in a cluster, the amount of activity is relatively low, and 100%
uptime is not required for a functioning shard cluster. As a result,
backing up the config servers is not difficult. Backups of config
- servers are crucial as shard clusters become totally inoperable when
+ servers are critical as shard clusters become totally inoperable when
you lose all configuration instances and data. Precautions to ensure
that the config servers remain available and intact are critical.

+ .. note::
+
+    Configuration servers maintain metadata for only one shard cluster.
+    You must have a separate configuration server or servers for each
+    shard cluster you configure.
+

.. index:: mongos

.. _sharding-mongos:

@@ -458,7 +474,7 @@ have on the cluster, by:
<sharding-migration-thresholds>`.

Additionally, it's possible to disable the balancer on a temporary
- basis for maintenance and limit the window during which it runs to
+ basis for maintenance and to limit the window during which it runs to
prevent the balancing process from impacting production traffic.
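Both controls live in the ``settings`` collection of the config database. The update documents below sketch them; applying them requires the PyMongo driver and a reachable :program:`mongos`, so the actual call is shown commented out, and this is a sketch rather than a definitive procedure.

```python
# Update documents for the config database's "settings" collection
# (the document with _id "balancer" controls the balancer).
disable_balancer = {"$set": {"stopped": True}}          # temporary disable
balancing_window = {"$set": {                           # restrict when it runs
    "activeWindow": {"start": "23:00", "stop": "6:00"}
}}

# With PyMongo against a mongos (requires a live cluster):
#
#   from pymongo import MongoClient
#   client = MongoClient("mongos-host", 27017)
#   client["config"]["settings"].update_one(
#       {"_id": "balancer"}, balancing_window, upsert=True)
```

Re-enabling the balancer is the mirror image: set ``stopped`` back to ``False``.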

.. seealso:: The ":ref:`Balancing Internals