@@ -20,14 +20,12 @@ deployment.
 Data Durability
 ~~~~~~~~~~~~~~~

-.. cssclass:: checklist
+- Ensure that your replica set includes at least three data-bearing voting
+  members and that your write operations use ``w: majority`` :doc:`write
+  concern </reference/write-concern>`. Three data-bearing voting members are
+  required for replica-set wide data durability.

-   - Ensure that your replica set includes at least three data-bearing voting
-     members and that your write operations use ``w: majority`` :doc:`write
-     concern </reference/write-concern>`. Three data-bearing voting members are
-     required for replica-set wide data durability.
-
-   - Ensure that all instances use :doc:`journaling </core/journaling>`.
+- Ensure that all instances use :doc:`journaling </core/journaling>`.


 Schema Design
 ~~~~~~~~~~~~~
@@ -38,117 +36,109 @@ facilitates iterative development and polymorphism. Nevertheless,
 collections often hold documents with highly homogeneous
 structures. See :doc:`/core/data-models` for more information.

-.. cssclass:: checklist
-
-   - Determine the set of collections that you will need and the
-     indexes required to support your queries. With the exception of
-     the ``_id`` index, you must create all indexes explicitly: MongoDB
-     does not automatically create any indexes other than ``_id``.
+- Determine the set of collections that you will need and the
+  indexes required to support your queries. With the exception of
+  the ``_id`` index, you must create all indexes explicitly: MongoDB
+  does not automatically create any indexes other than ``_id``.

-   - Ensure that your schema design supports your deployment type: if
-     you are planning to use :term:`sharded clusters <sharded cluster>`
-     for horizontal scaling, design your schema to include a strong
-     shard key. While you can :ref:`change your shard key
-     <change-a-shard-key>` later, it is important to carefully consider
-     your :ref:`shard key choice <sharding-shard-key-requirements>` to
-     avoid scalability and performance issues.
+- Ensure that your schema design supports your deployment type: if
+  you are planning to use :term:`sharded clusters <sharded cluster>`
+  for horizontal scaling, design your schema to include a strong
+  shard key. While you can :ref:`change your shard key
+  <change-a-shard-key>` later, it is important to carefully consider
+  your :ref:`shard key choice <sharding-shard-key-requirements>` to
+  avoid scalability and performance issues.

-   - Ensure that your schema design does not rely on indexed arrays that
-     grow in length without bound. Typically, best performance can
-     be achieved when such indexed arrays have fewer than 1000 elements.
+- Ensure that your schema design does not rely on indexed arrays that
+  grow in length without bound. Typically, best performance can
+  be achieved when such indexed arrays have fewer than 1000 elements.

-   - Consider the document size limits when designing your schema.
-     The :limit:`BSON Document Size` limit is 16MB per document. If
-     you require larger documents, use :doc:`GridFS </core/gridfs>`.
+- Consider the document size limits when designing your schema.
+  The :limit:`BSON Document Size` limit is 16MB per document. If
+  you require larger documents, use :doc:`GridFS </core/gridfs>`.


 Replication
 ~~~~~~~~~~~

-.. cssclass:: checklist
+- Use an odd number of voting members to ensure that elections
+  proceed successfully. You can have up to 7 voting members. If you
+  have an *even* number of voting members, and constraints, such as
+  cost, prohibit adding another secondary to be a voting member, you
+  can add an :term:`arbiter` to ensure an odd number of votes. For
+  additional considerations when using an arbiter for a 3-member
+  replica set (P-S-A), see :doc:`/core/replica-set-arbiter`.

-   - Use an odd number of voting members to ensure that elections
-     proceed successfully. You can have up to 7 voting members. If you
-     have an *even* number of voting members, and constraints, such as
-     cost, prohibit adding another secondary to be a voting member, you
-     can add an :term:`arbiter` to ensure an odd number of votes. For
-     additional considerations when using an arbiter for a 3-member
-     replica set (P-S-A), see :doc:`/core/replica-set-arbiter`.
+  .. note::

-     .. note::
+     .. include:: /includes/extracts/arbiters-and-pvs-with-reference.rst

-        .. include:: /includes/extracts/arbiters-and-pvs-with-reference.rst
+- Ensure that your secondaries remain up-to-date by using
+  :doc:`monitoring tools </administration/monitoring>` and by
+  specifying appropriate :doc:`write concern
+  </reference/write-concern>`.

-   - Ensure that your secondaries remain up-to-date by using
-     :doc:`monitoring tools </administration/monitoring>` and by
-     specifying appropriate :doc:`write concern
-     </reference/write-concern>`.
+- Do not use secondary reads to scale overall read throughput. See:
+  `Can I use more replica nodes to scale`_ for an overview of read
+  scaling. For information about secondary reads, see:
+  :doc:`/core/read-preference`.

-   - Do not use secondary reads to scale overall read throughput. See:
-     `Can I use more replica nodes to scale`_ for an overview of read
-     scaling. For information about secondary reads, see:
-     :doc:`/core/read-preference`.
-
-.. _Can I use more replica nodes to scale: http://askasya.com/post/canreplicashelpscaling
+.. _Can I use more replica nodes to scale: http://askasya.com/post/canreplicashelpscaling

 Sharding
 ~~~~~~~~

-.. cssclass:: checklist
-
-   - Ensure that your shard key distributes the load evenly on your shards.
-     See: :doc:`/core/sharding-shard-key` for more information.
-
-   - Use :ref:`targeted operations <sharding-mongos-targeted>`
-     for workloads that need to scale with the number of shards.
-
-   - **For MongoDB 3.4 and earlier**, read from the primary nodes for
-     :ref:`non-targeted or broadcast <sharding-mongos-broadcast>`
-     queries as these queries may be sensitive to `stale or orphaned
-     data
-     <http://blog.mongodb.org/post/74730554385/background-indexing-on-secondaries-and-orphaned>`_.
-
-   - | **For MongoDB 3.6 and later**, secondaries no longer return orphaned
-       data unless using read concern :readconcern:`"available"` (which
-       is the default read concern for reads against secondaries when not
-       associated with :ref:`causally consistent sessions <sessions>`).
-
-     | Starting in MongoDB 3.6, all members of the shard replica set
-       maintain chunk metadata, allowing them to filter out orphans
-       when not using :readconcern:`"available"`. As such,
-       :ref:`non-targeted or broadcast <sharding-mongos-broadcast>`
-       queries that are not using :readconcern:`"available"` can be
-       safely run on any member and will not return orphaned data.
-
-     | The :readconcern:`"available"` read concern can return
-       :term:`orphaned documents <orphaned document>` from secondary
-       members since it does not check for updated chunk metadata.
-       However, if the return of orphaned documents is immaterial to an
-       application, the :readconcern:`"available"` read concern provides
-       the lowest latency reads possible among the various read concerns.
-
-   - :doc:`Pre-split and manually balance chunks
-     </tutorial/create-chunks-in-sharded-cluster>` when inserting large
-     data sets into a new non-hashed sharded collection. Pre-splitting
-     and manually balancing enables the insert load to be distributed
-     among the shards, increasing performance for the initial load.
+- Ensure that your shard key distributes the load evenly on your shards.
+  See: :doc:`/core/sharding-shard-key` for more information.
+
+- Use :ref:`targeted operations <sharding-mongos-targeted>`
+  for workloads that need to scale with the number of shards.
+
+- **For MongoDB 3.4 and earlier**, read from the primary nodes for
+  :ref:`non-targeted or broadcast <sharding-mongos-broadcast>`
+  queries as these queries may be sensitive to `stale or orphaned
+  data
+  <http://blog.mongodb.org/post/74730554385/background-indexing-on-secondaries-and-orphaned>`_.
+
+- | **For MongoDB 3.6 and later**, secondaries no longer return orphaned
+    data unless using read concern :readconcern:`"available"` (which
+    is the default read concern for reads against secondaries when not
+    associated with :ref:`causally consistent sessions <sessions>`).
+
+  | Starting in MongoDB 3.6, all members of the shard replica set
+    maintain chunk metadata, allowing them to filter out orphans
+    when not using :readconcern:`"available"`. As such,
+    :ref:`non-targeted or broadcast <sharding-mongos-broadcast>`
+    queries that are not using :readconcern:`"available"` can be
+    safely run on any member and will not return orphaned data.
+
+  | The :readconcern:`"available"` read concern can return
+    :term:`orphaned documents <orphaned document>` from secondary
+    members since it does not check for updated chunk metadata.
+    However, if the return of orphaned documents is immaterial to an
+    application, the :readconcern:`"available"` read concern provides
+    the lowest latency reads possible among the various read concerns.
+
+- :doc:`Pre-split and manually balance chunks
+  </tutorial/create-chunks-in-sharded-cluster>` when inserting large
+  data sets into a new non-hashed sharded collection. Pre-splitting
+  and manually balancing enables the insert load to be distributed
+  among the shards, increasing performance for the initial load.

 Drivers
 ~~~~~~~

-.. cssclass:: checklist
-
-   - Make use of connection pooling. Most MongoDB drivers support
-     connection pooling. Adjust the connection pool size to suit your
-     use case, beginning at 110-115% of the typical number of concurrent
-     database requests.
+- Make use of connection pooling. Most MongoDB drivers support
+  connection pooling. Adjust the connection pool size to suit your
+  use case, beginning at 110-115% of the typical number of concurrent
+  database requests.

-   - Ensure that your applications handle transient write and read errors
-     during replica set elections.
+- Ensure that your applications handle transient write and read errors
+  during replica set elections.

-   - Ensure that your applications handle failed requests and retry them if
-     applicable. Drivers **do not** automatically retry failed requests.
+- Ensure that your applications handle failed requests and retry them if
+  applicable. Drivers **do not** automatically retry failed requests.

-   - Use exponential backoff logic for database request retries.
+- Use exponential backoff logic for database request retries.

-   - Use :method:`cursor.maxTimeMS()` for reads and :ref:`wc-wtimeout` for
-     writes if you need to cap execution time for database operations.
+- Use :method:`cursor.maxTimeMS()` for reads and :ref:`wc-wtimeout` for
+  writes if you need to cap execution time for database operations.
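
The driver checklist in this diff says that drivers do not automatically retry failed requests and that retries should use exponential backoff. A minimal sketch of that pattern in Python; ``retry_with_backoff`` and its parameters are hypothetical illustrations, not part of any MongoDB driver API:

```python
import random
import time

def retry_with_backoff(operation, max_attempts=5, base_delay=0.1, max_delay=5.0):
    """Retry a zero-argument callable with capped exponential backoff.

    Transient failures (for example, errors raised during a replica set
    election) are retried; once the attempts are exhausted, the last
    error propagates to the caller.
    """
    for attempt in range(max_attempts):
        try:
            return operation()
        except Exception:
            if attempt == max_attempts - 1:
                raise
            # Double the delay on each attempt, cap it, and add jitter
            # so that concurrent clients do not retry in lockstep.
            delay = min(base_delay * (2 ** attempt), max_delay)
            time.sleep(delay + random.uniform(0, delay / 2))

# Example: an operation that fails twice, then succeeds on the third try.
calls = {"n": 0}

def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient error during election")
    return "ok"

print(retry_with_backoff(flaky))  # prints: ok
```

In a real application the ``except`` clause should catch only the driver's transient error types rather than bare ``Exception``, so that non-retryable failures surface immediately.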
0 commit comments
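
The pooling item above suggests starting the connection pool at 110-115% of the typical number of concurrent database requests. As a back-of-envelope sketch of that guidance, assuming you already have a measurement of typical concurrency (``initial_pool_size`` is an illustrative helper, not a driver API):

```python
import math

def initial_pool_size(typical_concurrent_requests, factor=1.15):
    """Suggested starting point for a driver connection pool size.

    Applies the checklist's 110-115% rule of thumb; ``factor`` defaults
    to the upper end of that range. Tune the result against observed
    load rather than treating it as a fixed setting.
    """
    return math.ceil(typical_concurrent_requests * factor)

# e.g. an application that typically has 200 requests in flight
print(initial_pool_size(200))  # prints: 230
```

The resulting number is what you would pass to your driver's pool-size option (for example, ``maxPoolSize`` in drivers that support it), then adjust based on monitoring.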