
Commit e7a527a

DOCS-5211: Split out Analyzing MongoDB performance article
1 parent 9ace9cf commit e7a527a

9 files changed: +297 -471 lines changed

config/redirects.yaml

Lines changed: 9 additions & 2 deletions
@@ -211,8 +211,15 @@ code: 301
 outputs:
   - 'before-v2.6'
 ---
+from: '/administration/analyzing-mongodb-performance'
+to: '/administration/optimization'
+type: 'redirect'
+code: 301
+outputs:
+  - 'before-v2.6'
+---
 # redirected in 3.0 to getting started
-# temp -- we should fix giza so that
+# temp -- we should fix giza so that
 # we can use the external field to redirect with
 # the after-xxx outputs
 from: '/tutorial/getting-started'
@@ -225,7 +232,7 @@ outputs:
   - { 'v3.0': "http://docs.mongodb.org/getting-started" }
 ---
 # redirected in 3.0 to getting started
-# temp -- we should fix giza so that
+# temp -- we should fix giza so that
 # we can use the external field to redirect with
 # the after-xxx outputs
 from: '/tutorial/generate-test-data'
Lines changed: 249 additions & 0 deletions
@@ -0,0 +1,249 @@
=============================
Analyzing MongoDB Performance
=============================

.. default-domain:: mongodb

As you develop and operate applications with MongoDB, you may need to
analyze the performance of the application and its database.
When you encounter degraded performance, it is often a function of database
access strategies, hardware availability, and the number of open database
connections.

Some users may experience performance limitations as a result of inadequate
or inappropriate indexing strategies, or as a consequence of poor schema
design patterns. :ref:`analyzing-performance-locks` discusses how these can
impact MongoDB's internal locking.

Performance issues may indicate that the database is operating at capacity
and that it is time to add additional capacity to the database. In particular,
the application's :term:`working set` should fit in the available physical
memory. See :ref:`analyzing-memory-mmapv1` for more information on the working
set.

In some cases performance issues may be temporary and related to
abnormal traffic load. As discussed in :ref:`number-of-connections`, scaling
can help relieve excessive traffic.

:ref:`database-profiling` can help you understand what operations are causing
degradation.

.. _analyzing-performance-locks:

Locking Performance
~~~~~~~~~~~~~~~~~~~

MongoDB uses a locking system to ensure data set consistency. If
certain operations are long-running or a queue forms, performance
will degrade as requests and operations wait for the lock.

Lock-related slowdowns can be intermittent. To see if the lock has been
affecting your performance, look to the data in the
:ref:`globalLock` section of the :dbcommand:`serverStatus` output.

If :data:`globalLock.currentQueue.total
<serverStatus.globalLock.currentQueue.total>` is consistently high,
then there is a chance that a large number of requests are waiting for
a lock. This indicates a possible concurrency issue that may be affecting
performance.

If :data:`globalLock.totalTime <serverStatus.globalLock.totalTime>` is
high relative to :data:`~serverStatus.uptime`, the database has
existed in a lock state for a significant amount of time.

If :data:`globalLock.ratio <serverStatus.globalLock.ratio>` is also high,
MongoDB has likely been processing a large number of long running
queries.

Long queries can result from ineffective use of indexes; non-optimal schema
design; poor query structure; system architecture issues; or insufficient RAM
resulting in :ref:`page faults <administration-monitoring-page-faults>` and
disk reads.

.. _analyzing-memory-mmapv1:

Memory and the MMAPv1 Storage Engine
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Memory Use
``````````

With the :ref:`MMAPv1 <storage-mmapv1>` storage engine, MongoDB uses
memory-mapped files to store data. Given a data set of sufficient size,
the :program:`mongod` process will allocate all available memory on the system
for its use.

While this is intentional and aids performance, the memory-mapped files make it
difficult to determine if the amount of RAM is sufficient for the data set.

The :ref:`memory usage statuses <memory-status>` metrics of the
:dbcommand:`serverStatus` output can provide insight into MongoDB's
memory use.

The :data:`mem.resident <serverStatus.mem.resident>` field provides the
amount of resident memory in use. If this exceeds the amount of system
memory *and* there is a significant amount of data on disk that isn't in RAM,
you may have exceeded the capacity of your system.

You can inspect :data:`mem.mapped <serverStatus.mem.mapped>` to check the
amount of mapped memory that :program:`mongod` is using. If this value is
greater than the amount of system memory, some operations will require
:term:`page faults <page fault>` to read data from disk.
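
For example, you can compare these two values from the :program:`mongo`
shell. This is a minimal sketch; with MMAPv1, both values are reported in
megabytes.

.. code-block:: javascript

   var mem = db.serverStatus().mem

   mem.resident   // resident memory in use, in megabytes
   mem.mapped     // memory-mapped data, in megabytes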

.. _administration-monitoring-page-faults:

Page Faults
```````````

.. include:: /includes/fact-page-fault.rst

MongoDB reports its triggered page faults as the total number of
:term:`page faults <page fault>` in one second. To check for page faults, see
the :data:`extra_info.page_faults <serverStatus.extra_info.page_faults>` value
in the :dbcommand:`serverStatus` output.

Rapid increases in the MongoDB page fault counter may indicate that the server
has too little physical memory. Page faults also can occur while accessing
large data sets or scanning an entire collection.

A single page fault completes quickly and is not problematic. However, in
aggregate, large volumes of page faults typically indicate that MongoDB
is reading too much data from disk.

MongoDB can often "yield" read locks after a page fault, allowing other database
processes to read while :program:`mongod` loads the next page into memory.
Yielding the read lock following a page fault improves concurrency, and also
improves overall throughput in high volume systems.

Increasing the amount of RAM accessible to MongoDB may help reduce the
frequency of page faults. If this is not possible, you may want to consider
deploying a :term:`sharded cluster` or adding :term:`shards <shard>`
to your deployment to distribute load among :program:`mongod` instances.

See :ref:`faq-storage-page-faults` for more information.
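
For example, a minimal check of this value from the :program:`mongo` shell
(the fields reported under ``extra_info`` vary by platform):

.. code-block:: javascript

   // page fault counter reported by mongod; compare samples over time
   db.serverStatus().extra_info.page_faults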

.. _number-of-connections:

Number of Connections
~~~~~~~~~~~~~~~~~~~~~

In some cases, the number of connections between the applications and the
database can overwhelm the ability of the server to handle requests. The
following fields in the :dbcommand:`serverStatus` document can provide insight:

- :data:`globalLock.activeClients
  <serverStatus.globalLock.activeClients>` contains a counter of the
  total number of clients with active operations in progress or
  queued.

- :data:`~serverStatus.connections` is a container for the following
  two fields:

  - :data:`~serverStatus.connections.current` the total number of
    current clients that connect to the database instance.

  - :data:`~serverStatus.connections.available` the total number of
    unused connections available for new clients.
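
For example, you can read these counters from the :program:`mongo` shell.
This is a minimal sketch of the relevant ``serverStatus`` fields:

.. code-block:: javascript

   var status = db.serverStatus()

   status.globalLock.activeClients.total   // clients with active or queued operations
   status.connections.current              // open connections
   status.connections.available            // remaining available connections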

If there are numerous concurrent application requests, the database may have
trouble keeping up with demand. If this is the case, then you will need to
increase the capacity of your deployment.

For read-heavy applications, increase the size of your :term:`replica set` and
distribute read operations to :term:`secondary` members.

For write-heavy applications, deploy :term:`sharding` and add one or more
:term:`shards <shard>` to a :term:`sharded cluster` to distribute load among
:program:`mongod` instances.

Spikes in the number of connections can also be the result of
application or driver errors. All of the officially supported MongoDB
drivers implement connection pooling, which allows clients to use and
reuse connections more efficiently. Extremely high numbers of
connections, particularly without a corresponding workload, are often
indicative of a driver or other configuration error.

Unless constrained by system-wide limits, MongoDB has no limit on
incoming connections. On Unix-based systems, you can modify system limits
using the ``ulimit`` command, or by editing your system's
``/etc/sysctl`` file. See :doc:`/reference/ulimit` for more
information.

.. _database-profiling:

Database Profiling
~~~~~~~~~~~~~~~~~~

MongoDB's "Profiler" is a database profiling system that can help identify
inefficient queries and operations.

The following profiling levels are available:

.. list-table::
   :header-rows: 1

   * - **Level**
     - **Setting**

   * - 0
     - Off. No profiling

   * - 1
     - On. Only includes *"slow"* operations

   * - 2
     - On. Includes *all* operations

Enable the profiler by setting the
:dbcommand:`profile` value using the following command in the
:program:`mongo` shell:

.. code-block:: javascript

   db.setProfilingLevel(1)
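
You can confirm the profiler's current state at any time with
:method:`db.getProfilingStatus()`:

.. code-block:: javascript

   // returns the current profiling level and the slow-operation threshold
   db.getProfilingStatus()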

The :setting:`~operationProfiling.slowOpThresholdMs` setting defines what constitutes a "slow"
operation. To set the threshold above which the profiler considers
operations "slow" (and thus included in the level ``1`` profiling
data), you can configure :setting:`~operationProfiling.slowOpThresholdMs` at runtime as an argument to
the :method:`db.setProfilingLevel()` operation.

.. see:: The documentation of :method:`db.setProfilingLevel()` for more
   information.
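
For example, to enable level ``1`` profiling and treat operations that take
longer than 100 milliseconds as "slow" (the threshold value here is
illustrative):

.. code-block:: javascript

   // the second argument sets the slow-operation threshold, in milliseconds
   db.setProfilingLevel(1, 100)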

By default, :program:`mongod` records all "slow" queries to its
:setting:`log <logpath>`, as defined by :setting:`~operationProfiling.slowOpThresholdMs`.

.. note::

   Because the database profiler can negatively impact
   performance, only enable profiling for strategic intervals and as
   minimally as possible on production systems.

   You may enable profiling on a per-:program:`mongod` basis. This
   setting will not propagate across a :term:`replica set` or
   :term:`sharded cluster`.

You can view the output of the profiler in the ``system.profile``
collection of your database by issuing the ``show profile`` command in
the :program:`mongo` shell, or with the following operation:

.. code-block:: javascript

   db.system.profile.find( { millis : { $gt : 100 } } )

This returns all operations that lasted longer than 100 milliseconds.
Ensure that the value specified here (``100``, in this example) is above the
:setting:`~operationProfiling.slowOpThresholdMs` threshold.
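
To focus on the most recent slow operations, you can also sort on the
profile documents' ``ts`` (timestamp) field; this sketch assumes the default
``system.profile`` document shape:

.. code-block:: javascript

   // the five most recent operations slower than 100 milliseconds
   db.system.profile.find( { millis : { $gt : 100 } } ).sort( { ts : -1 } ).limit(5)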

You must use the :operator:`$query` operator to access the ``query``
field of documents within ``system.profile``.

.. seealso:: :doc:`/administration/optimization` addresses strategies
   that may improve the performance of your database queries and
   operations.
