@@ -18,7 +18,7 @@ What is a storage engine?
~~~~~~~~~~~~~~~~~~~~~~~~~

A storage engine is the part of a database that is responsible for
- managing how data is stored on disk. Many databases, support multiple
+ managing how data is stored on disk. Many databases support multiple
storage engines, where different engines perform better for specific
workloads. For example, one storage engine might offer better
performance for read-heavy workloads, and another might support
@@ -27,18 +27,17 @@ a higher-throughput for write operations.

What will be the default storage engine going forward?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

- MMAPv1 will be the default storage engine in 3.0. WiredTiger will
- become the default storage engine in a future version of
- MongoDB. You will be able to decide which storage engine is best for
- their application.
+ MMAPv1 is the default storage engine in 3.0. With multiple storage
+ engines, you will always be able to decide which storage engine is
+ best for your application.
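Since both engines ship with 3.0, the engine for a new deployment is chosen at :program:`mongod` startup. A sketch, assuming MongoDB 3.0 and a hypothetical, empty data directory (a ``dbpath`` created under one engine cannot be reused by another):

```
# Start mongod with WiredTiger instead of the MMAPv1 default.
# /data/wt is a hypothetical, empty data directory.
mongod --storageEngine wiredTiger --dbpath /data/wt
```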

Can you mix storage engines in a replica set?
---------------------------------------------

Yes. You can have replica set members that use different storage
engines.
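For example, a mixed deployment might give each member its own configuration file. A sketch, assuming MongoDB 3.0 YAML configuration; the paths and replica set name are hypothetical:

```
# One member runs WiredTiger; a second member's file would instead set
# engine: mmapv1 (each engine requires its own, separately created dbPath).
storage:
   engine: wiredTiger
   dbPath: /var/lib/mongo-wt
replication:
   replSetName: rs0
   oplogSizeMB: 2048   # size the oplog per member for its engine's write profile
```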

- When designing these milt -storage engine deployments consider the
+ When designing these multi-storage engine deployments, consider the
following:

- the oplog on each member may need to be sized differently to account
@@ -55,7 +54,7 @@ Can I upgrade an existing deployment to a WiredTiger?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Yes. You can upgrade an existing deployment to WiredTiger while the
- deployment remains continuously available, by adding replica set
+ deployment remains continuously available by adding replica set
members with the new storage engine and then removing members with the
legacy storage engine. See the following sections of the
:doc:`/release-notes/3.0-upgrade` for the complete procedure that you
@@ -68,9 +67,11 @@ can use to upgrade an existing deployment:

How much compression does WiredTiger provide?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

- As much as 50% to 80%. Collection data in WiredTiger use Snappy
- :term:`block compression` by default, and index data use :term:`prefix
- compression` by default.
+ The ratio of compressed data to uncompressed data depends on your data
+ and the compression library used. Collection data in WiredTiger uses
+ Snappy :term:`block compression` by default, although ``zlib``
+ compression is also optionally available. Index data uses
+ :term:`prefix compression` by default.
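The compressor is selected at startup and applies to newly created collections. A sketch of the relevant YAML settings, assuming MongoDB 3.0:

```
storage:
   engine: wiredTiger
   wiredTiger:
      collectionConfig:
         blockCompressor: zlib   # default is snappy; zlib trades CPU for ratio
```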

MMAP Storage Engine
-------------------
@@ -185,10 +186,10 @@ something like this in the log:

   Thu Aug 11 13:06:19 [FileAllocator] error failed to allocate new file: dbms/test.13 size: 2146435072 errno:28 No space left on device
   Thu Aug 11 13:06:19 [FileAllocator] will try again in 10 seconds

- The server remains in this state forever, blocking all writes including
- deletes. However, reads still work. To delete some data and compact,
- using the :dbcommand:`compact` command, you must restart the server
- first.
+ The server remains in this state forever, blocking all writes
+ including deletes. However, reads still work. With MMAPv1, you can
+ delete some data and reclaim space using the :dbcommand:`compact`
+ command; however, you must restart the server first.

If your server runs out of disk space for journal files, the server
process will exit. By default, :program:`mongod` creates journal files
@@ -202,6 +203,61 @@ filesystem mount or a symlink.
will not be able to use a file system snapshot tool to capture a
valid snapshot of your data files and journal files.

+ .. _faq-working-set:
+
+ What is the working set?
+ ~~~~~~~~~~~~~~~~~~~~~~~~
+
+ The working set represents the total body of data that the application
+ uses in the course of normal operation. Often this is a subset of the
+ total data size, but the specific size of the working set depends on
+ actual moment-to-moment use of the database.
+
+ If you run a query that requires MongoDB to scan every document in a
+ collection, the working set will expand to include every
+ document. Depending on physical memory size, this may cause documents
+ in the working set to "page out," or to be removed from physical memory
+ by the operating system. The next time MongoDB needs to access these
+ documents, MongoDB may incur a hard page fault.
+
+ For best performance, the majority of your *active* set should fit in
+ RAM.
+
+ .. _faq-storage-page-faults:
+
+ What are page faults?
+ ~~~~~~~~~~~~~~~~~~~~~
+
+ .. include:: /includes/fact-page-fault.rst
+
+ If there is free memory, then the operating system can find the page
+ on disk and load it into memory directly. However, if there is no free
+ memory, the operating system must:
+
+ - find a page in memory that is stale or no longer needed, and write
+   the page to disk.
+
+ - read the requested page from disk and load it into memory.
+
+ This process, on an active system, can take a long time,
+ particularly in comparison to reading a page that is already in
+ memory.
+
+ See :ref:`administration-monitoring-page-faults` for more information.
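The cumulative hard page-fault count for the :program:`mongod` process is reported by :method:`db.serverStatus()`. A sketch from the :program:`mongo` shell, assuming a running server on Linux (the ``extra_info`` section is platform-dependent):

```
// Requires a live mongod; not runnable standalone.
var status = db.serverStatus()
printjson(status.extra_info)   // includes page_faults on Linux
```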
+
+ What is the difference between soft and hard page faults?
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+ :term:`Page faults <page fault>` occur when MongoDB, with the MMAP
+ storage engine, needs access to data that isn't currently in active
+ memory. A "hard" page fault refers to situations when MongoDB must
+ read the data from disk. A "soft" page fault, by contrast,
+ merely moves memory pages from one list to another, such as from an
+ operating system file cache. In production, MongoDB will rarely
+ encounter soft page faults.
+
+ See :ref:`administration-monitoring-page-faults` for more information.
+

Data Storage Diagnostics
------------------------
@@ -249,68 +305,7 @@ collection:

What tools can I use to investigate storage use in MongoDB?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

- The :method:`db.stats()` method in the :program:`mongo` shell,
+ The :method:`db.stats()` method in the :program:`mongo` shell
returns the current state of the "active" database. The
- :doc :`dbStats command </reference/command/dbStats> ` document describes
+ :dbcommand:`dbStats` document describes
the fields in the :method:`db.stats()` output.
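A sketch of both the helper and the underlying command from the :program:`mongo` shell, assuming a running server:

```
// db.stats() wraps the dbStats command for the current database.
db.stats()        // sizes reported in bytes
db.stats(1024)    // optional scale argument: sizes reported in kilobytes
db.runCommand({ dbStats: 1, scale: 1024 })   // equivalent command form
```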
-
- Page Faults
- -----------
-
- .. _faq-working-set:
-
- What is the working set?
- ~~~~~~~~~~~~~~~~~~~~~~~~
-
- Working set represents the total body of data that the application
- uses in the course of normal operation. Often this is a subset of the
- total data size, but the specific size of the working set depends on
- actual moment-to-moment use of the database.
-
- If you run a query that requires MongoDB to scan every document in a
- collection, the working set will expand to include every
- document. Depending on physical memory size, this may cause documents
- in the working set to "page out," or to be removed from physical memory by
- the operating system. The next time MongoDB needs to access these
- documents, MongoDB may incur a hard page fault.
-
- If you run a query that requires MongoDB to scan every
- :term:`document` in a collection, the working set includes every
- active document in memory.
-
- For best performance, the majority of your *active* set should fit in
- RAM.
-
- .. _faq-storage-page-faults:
-
- What are page faults?
- ~~~~~~~~~~~~~~~~~~~~~
-
- .. include:: /includes/fact-page-fault.rst
-
- If there is free memory, then the operating system can find the page
- on disk and load it to memory directly. However, if there is no free
- memory, the operating system must:
-
- - find a page in memory that is stale or no longer needed, and write
-   the page to disk.
-
- - read the requested page from disk and load it into memory.
-
- This process, particularly on an active system can take a long time,
- particularly in comparison to reading a page that is already in
- memory.
-
- See :ref:`administration-monitoring-page-faults` for more information.
-
- What is the difference between soft and hard page faults?
- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
- :term:`Page faults <page fault>` occur when MongoDB needs access to
- data that isn't currently in active memory. A "hard" page fault
- refers to situations when MongoDB must access a disk to access the
- data. A "soft" page fault, by contrast, merely moves memory pages from
- one list to another, such as from an operating system file
- cache. In production, MongoDB will rarely encounter soft page faults.
-
- See :ref:`administration-monitoring-page-faults` for more information.