@@ -22,14 +22,12 @@ deployment.
 
Data Durability
~~~~~~~~~~~~~~~
 
- .. cssclass:: checklist
+ - Ensure that your replica set includes at least three data-bearing voting
+   members and that your write operations use ``w: majority`` :doc:`write
+   concern </reference/write-concern>`. Three data-bearing voting members are
+   required for replica-set wide data durability.
 
- - Ensure that your replica set includes at least three data-bearing voting
-   members and that your write operations use ``w: majority`` :doc:`write
-   concern </reference/write-concern>`. Three data-bearing voting members are
-   required for replica-set wide data durability.
-
- - Ensure that all instances use :ref:`journaling <journaling-internals>`.
+ - Ensure that all instances use :ref:`journaling <journaling-internals>`.
 
 
Schema Design
~~~~~~~~~~~~~
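The durability rule above can be sketched numerically: a ``w: majority`` write must be acknowledged by a majority of voting members, which is why three data-bearing voters are the minimum for replica-set wide durability. A minimal sketch, assuming hypothetical helper names (this is not driver API):

```python
def majority(voting_members: int) -> int:
    # Acknowledgements required by a w: majority write.
    return voting_members // 2 + 1

def survives_one_failure(voting_members: int) -> bool:
    # A majority-acknowledged write is durable only if a majority
    # still exists after one member fails. With 3 data-bearing
    # voters, 2 acks suffice and any single failure is tolerated.
    return voting_members - 1 >= majority(voting_members)

assert majority(3) == 2
assert survives_one_failure(3)
assert not survives_one_failure(2)
```

With only two data-bearing voters, losing either member leaves no majority, so three is the floor.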
@@ -40,117 +38,109 @@ facilitates iterative development and polymorphism. Nevertheless,
collections often hold documents with highly homogeneous
structures. See :doc:`/core/data-models` for more information.
 
- .. cssclass:: checklist
-
- - Determine the set of collections that you will need and the
-   indexes required to support your queries. With the exception of
-   the ``_id`` index, you must create all indexes explicitly: MongoDB
-   does not automatically create any indexes other than ``_id``.
+ - Determine the set of collections that you will need and the
+   indexes required to support your queries. With the exception of
+   the ``_id`` index, you must create all indexes explicitly: MongoDB
+   does not automatically create any indexes other than ``_id``.
 
- - Ensure that your schema design supports your deployment type: if
-   you are planning to use :term:`sharded clusters <sharded cluster>`
-   for horizontal scaling, design your schema to include a strong
-   shard key. While you can :ref:`change your shard key
-   <change-a-shard-key>` later, it is important to carefully consider
-   your :ref:`shard key choice <sharding-shard-key-requirements>` to
-   avoid scalability and performance issues.
+ - Ensure that your schema design supports your deployment type: if
+   you are planning to use :term:`sharded clusters <sharded cluster>`
+   for horizontal scaling, design your schema to include a strong
+   shard key. While you can :ref:`change your shard key
+   <change-a-shard-key>` later, it is important to carefully consider
+   your :ref:`shard key choice <sharding-shard-key-requirements>` to
+   avoid scalability and performance issues.
 
- - Ensure that your schema design does not rely on indexed arrays that
-   grow in length without bound. Typically, best performance can
-   be achieved when such indexed arrays have fewer than 1000 elements.
+ - Ensure that your schema design does not rely on indexed arrays that
+   grow in length without bound. Typically, best performance can
+   be achieved when such indexed arrays have fewer than 1000 elements.
 
- - Consider the document size limits when designing your schema.
-   The :limit:`BSON Document Size` limit is 16MB per document. If
-   you require larger documents, use :ref:`GridFS <gridfs>`.
+ - Consider the document size limits when designing your schema.
+   The :limit:`BSON Document Size` limit is 16MB per document. If
+   you require larger documents, use :ref:`GridFS <gridfs>`.
 
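The 16MB document size limit above lends itself to a pre-flight check that routes oversized payloads to GridFS. A minimal sketch, assuming JSON length as a rough stand-in for BSON size (real code would measure with the driver's BSON encoder; the helper names are hypothetical):

```python
import json

BSON_MAX_BYTES = 16 * 1024 * 1024  # BSON Document Size limit: 16MB

def approx_doc_size(doc: dict) -> int:
    # JSON byte length is only an approximation of BSON size,
    # adequate for a coarse sanity check, not an exact measurement.
    return len(json.dumps(doc).encode("utf-8"))

def fits_in_one_document(doc: dict) -> bool:
    # Documents over the limit should be stored via GridFS instead.
    return approx_doc_size(doc) <= BSON_MAX_BYTES
```

For example, a document carrying a 17MB string field would fail this check and should be handed to GridFS.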
Replication
~~~~~~~~~~~
 
- .. cssclass:: checklist
+ - Use an odd number of voting members to ensure that elections
+   proceed successfully. You can have up to 7 voting members. If you
+   have an *even* number of voting members, and constraints, such as
+   cost, prohibit adding another secondary to be a voting member, you
+   can add an :term:`arbiter` to ensure an odd number of votes. For
+   additional considerations when using an arbiter for a 3-member
+   replica set (P-S-A), see :doc:`/core/replica-set-arbiter`.
 
- - Use an odd number of voting members to ensure that elections
-   proceed successfully. You can have up to 7 voting members. If you
-   have an *even* number of voting members, and constraints, such as
-   cost, prohibit adding another secondary to be a voting member, you
-   can add an :term:`arbiter` to ensure an odd number of votes. For
-   additional considerations when using an arbiter for a 3-member
-   replica set (P-S-A), see :doc:`/core/replica-set-arbiter`.
+   .. note::
 
-   .. note::
+      .. include:: /includes/extracts/arbiters-and-pvs-with-reference.rst
 
-      .. include:: /includes/extracts/arbiters-and-pvs-with-reference.rst
+ - Ensure that your secondaries remain up-to-date by using
+   :doc:`monitoring tools </administration/monitoring>` and by
+   specifying appropriate :doc:`write concern
+   </reference/write-concern>`.
 
- - Ensure that your secondaries remain up-to-date by using
-   :doc:`monitoring tools </administration/monitoring>` and by
-   specifying appropriate :doc:`write concern
-   </reference/write-concern>`.
+ - Do not use secondary reads to scale overall read throughput. See:
+   `Can I use more replica nodes to scale`_ for an overview of read
+   scaling. For information about secondary reads, see:
+   :doc:`/core/read-preference`.
 
- - Do not use secondary reads to scale overall read throughput. See:
-   `Can I use more replica nodes to scale`_ for an overview of read
-   scaling. For information about secondary reads, see:
-   :doc:`/core/read-preference`.
-
- .. _Can I use more replica nodes to scale: http://askasya.com/post/canreplicashelpscaling
+ .. _Can I use more replica nodes to scale: http://askasya.com/post/canreplicashelpscaling
 
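The voting-member rule above reduces to simple arithmetic: an even voter count risks tied elections, and replica sets cap out at 7 voting members. A minimal sketch with a hypothetical helper name:

```python
MAX_VOTING_MEMBERS = 7  # replica sets allow at most 7 voting members

def needs_tiebreaker(voting_members: int) -> bool:
    # An even number of voters can tie an election. Make the count
    # odd by adding another secondary or, where cost prohibits a
    # data-bearing member, an arbiter. At the 7-voter cap, no more
    # voting members can be added.
    return voting_members < MAX_VOTING_MEMBERS and voting_members % 2 == 0
```

A four-voter set would want a fifth voter (or an arbiter); a three-voter P-S-S set already has an odd count.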
Sharding
~~~~~~~~
 
- .. cssclass:: checklist
-
- - Ensure that your shard key distributes the load evenly on your shards.
-   See: :doc:`/core/sharding-shard-key` for more information.
-
- - Use :ref:`targeted operations <sharding-mongos-targeted>`
-   for workloads that need to scale with the number of shards.
-
- - **For MongoDB 3.4 and earlier**, read from the primary nodes for
-   :ref:`non-targeted or broadcast <sharding-mongos-broadcast>`
-   queries as these queries may be sensitive to `stale or orphaned
-   data
-   <http://blog.mongodb.org/post/74730554385/background-indexing-on-secondaries-and-orphaned>`_.
-
- - | **For MongoDB 3.6 and later**, secondaries no longer return orphaned
-     data unless using read concern :readconcern:`"available"` (which
-     is the default read concern for reads against secondaries when not
-     associated with :ref:`causally consistent sessions <sessions>`).
-
-   | Starting in MongoDB 3.6, all members of the shard replica set
-     maintain chunk metadata, allowing them to filter out orphans
-     when not using :readconcern:`"available"`. As such,
-     :ref:`non-targeted or broadcast <sharding-mongos-broadcast>`
-     queries that are not using :readconcern:`"available"` can be
-     safely run on any member and will not return orphaned data.
-
-   | The :readconcern:`"available"` read concern can return
-     :term:`orphaned documents <orphaned document>` from secondary
-     members since it does not check for updated chunk metadata.
-     However, if the return of orphaned documents is immaterial to an
-     application, the :readconcern:`"available"` read concern provides
-     the lowest latency reads possible among the various read concerns.
-
- - :doc:`Pre-split and manually balance chunks
-   </tutorial/create-chunks-in-sharded-cluster>` when inserting large
-   data sets into a new non-hashed sharded collection. Pre-splitting
-   and manually balancing enables the insert load to be distributed
-   among the shards, increasing performance for the initial load.
+ - Ensure that your shard key distributes the load evenly on your shards.
+   See: :doc:`/core/sharding-shard-key` for more information.
+
+ - Use :ref:`targeted operations <sharding-mongos-targeted>`
+   for workloads that need to scale with the number of shards.
+
+ - **For MongoDB 3.4 and earlier**, read from the primary nodes for
+   :ref:`non-targeted or broadcast <sharding-mongos-broadcast>`
+   queries as these queries may be sensitive to `stale or orphaned
+   data
+   <http://blog.mongodb.org/post/74730554385/background-indexing-on-secondaries-and-orphaned>`_.
+
+ - | **For MongoDB 3.6 and later**, secondaries no longer return orphaned
+     data unless using read concern :readconcern:`"available"` (which
+     is the default read concern for reads against secondaries when not
+     associated with :ref:`causally consistent sessions <sessions>`).
+
+   | Starting in MongoDB 3.6, all members of the shard replica set
+     maintain chunk metadata, allowing them to filter out orphans
+     when not using :readconcern:`"available"`. As such,
+     :ref:`non-targeted or broadcast <sharding-mongos-broadcast>`
+     queries that are not using :readconcern:`"available"` can be
+     safely run on any member and will not return orphaned data.
+
+   | The :readconcern:`"available"` read concern can return
+     :term:`orphaned documents <orphaned document>` from secondary
+     members since it does not check for updated chunk metadata.
+     However, if the return of orphaned documents is immaterial to an
+     application, the :readconcern:`"available"` read concern provides
+     the lowest latency reads possible among the various read concerns.
+
+ - :doc:`Pre-split and manually balance chunks
+   </tutorial/create-chunks-in-sharded-cluster>` when inserting large
+   data sets into a new non-hashed sharded collection. Pre-splitting
+   and manually balancing enables the insert load to be distributed
+   among the shards, increasing performance for the initial load.
 
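The even-distribution rule above is why hashed shard keys are recommended for monotonically increasing values: a uniform hash spreads sequential keys across shards instead of piling them onto the last one. A minimal sketch (hypothetical `shard_for` helper; MongoDB's actual hashed sharding uses its own hash over the BSON value, not this function):

```python
import hashlib

def shard_for(key: str, num_shards: int) -> int:
    # Map a key to a shard via a uniform hash. Sequential keys
    # (timestamps, counters) then land on shards roughly evenly.
    digest = hashlib.md5(key.encode("utf-8")).digest()
    return int.from_bytes(digest[:8], "big") % num_shards

# Distribution sanity check: 9,000 sequential keys over 3 shards
# should land close to 3,000 per shard.
counts = [0] * 3
for i in range(9000):
    counts[shard_for(f"order-{i}", 3)] += 1
```

Sharding the raw sequential value instead would send every new insert to the shard owning the top chunk, defeating horizontal scaling of writes.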
Drivers
~~~~~~~
 
- .. cssclass:: checklist
-
- - Make use of connection pooling. Most MongoDB drivers support
-   connection pooling. Adjust the connection pool size to suit your
-   use case, beginning at 110-115% of the typical number of concurrent
-   database requests.
+ - Make use of connection pooling. Most MongoDB drivers support
+   connection pooling. Adjust the connection pool size to suit your
+   use case, beginning at 110-115% of the typical number of concurrent
+   database requests.
 
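The 110-115% sizing guidance above can be expressed as a starting-point calculation (hypothetical helper; the right value is ultimately found by measuring your workload):

```python
def pool_size(typical_concurrent_requests: int, headroom: float = 1.15) -> int:
    # Start the pool at 110-115% of the typical number of concurrent
    # database requests, then tune from production measurements.
    return max(1, round(typical_concurrent_requests * headroom))
```

For a service that typically has 100 requests in flight, this suggests an initial pool of 110-115 connections.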
- - Ensure that your applications handle transient write and read errors
-   during replica set elections.
+ - Ensure that your applications handle transient write and read errors
+   during replica set elections.
 
- - Ensure that your applications handle failed requests and retry them if
-   applicable. Drivers **do not** automatically retry failed requests.
+ - Ensure that your applications handle failed requests and retry them if
+   applicable. Drivers **do not** automatically retry failed requests.
 
- - Use exponential backoff logic for database request retries.
+ - Use exponential backoff logic for database request retries.
 
- - Use :method:`cursor.maxTimeMS()` for reads and :ref:`wc-wtimeout` for
-   writes if you need to cap execution time for database operations.
+ - Use :method:`cursor.maxTimeMS()` for reads and :ref:`wc-wtimeout` for
+   writes if you need to cap execution time for database operations.
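The retry and backoff items above can be sketched as a single wrapper: since drivers do not automatically retry failed requests, application code retries transient failures (such as errors during an election) with exponentially growing, jittered delays. A minimal sketch with hypothetical names, using `ConnectionError` as a stand-in for your driver's transient-error type:

```python
import random
import time

def with_retries(operation, max_attempts=5, base_delay=0.05):
    # Retry a transiently failing call with exponential backoff.
    # Jitter (the random factor) spreads out retries so many clients
    # do not hammer the new primary in lockstep after a failover.
    for attempt in range(max_attempts):
        try:
            return operation()
        except ConnectionError:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the error
            time.sleep(base_delay * (2 ** attempt) * random.random())
```

A call that fails twice during a failover and then succeeds returns normally on the third attempt; one that keeps failing raises after the final attempt.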