DOCS-814 journaling #470

Merged
merged 8 commits into from
Dec 19, 2012
Changes from 2 commits
327 changes: 327 additions & 0 deletions source/administration/journaling.txt
@@ -0,0 +1,327 @@
==========
Journaling
==========

.. default-domain:: mongodb

:term:`Journaling <journal>` ensures the durability of data by storing
:doc:`write operations </core/write-operations>` in an on-disk
journal before applying them to the data files. The journal ensures
that write operations can be re-applied in the event of a crash.

Contributor:

journal need not be literalized.

journaling also ensures that mongodb is crash resistent: without a journal, if mongodb exits unexpectedly, then operators must assume that the data are in an inconsistent state and should resync from a clean secondary.

If we don't make this clear, it's possible that people won't respect or value the importance of journaling.

Author:

Crash resistent or crash resilient?

What operators?

Are you saying that if journaling is enabled and the primary in a replica set crashes, that the secondaries don't need to resync from a clean secondary?

Contributor:

resilient.

operators = administrators/users (this is, admittedly, a somewhat arcane use of the term, sorry for the confusion)

the longer story is:

  • without journaling, if you shut down uncleanly (i.e. by sending kill -9 to mongod, or if it encounters an error and bails out, or there's power loss) then the data is almost certainly corrupt in some way. So you either have to run repair (which just throws away invalid BSON in the database), or you have to resync from a clean member of the set (copy the data or just use initial sync) to ensure that the data is coherent.
  • with journaling, if mongod stops, it can recover everything that it wrote to the journal (which is everything except, at most, the last 100ms of data, by default) and the data files will be in a consistent state after it finishes playing back the journal, without need for resync (unless, of course, the secondary has fallen off the back edge of the oplog, which is an unrelated issue that doesn't need to be documented here).

Journaling ensures that :program:`mongodb` is crash resistant. Without a
journal, if :program:`mongodb` exits unexpectedly, operators must assume
the data are in an inconsistent state and should resync from a clean
secondary.

Contributor: c/resistent/resistant

.. versionchanged:: 2.0
   Journaling is enabled by default for 64-bit platforms.

How Journaling Works
--------------------

When running with journaling, MongoDB stores and applies :doc:`write
operations </core/write-operations>` in memory and in the journal before
the changes are written to the data files.

Contributor: c/are in/are written to/

This section explains this process in detail.

.. _journaling-configuring-storage:

Storage Locations Used in Journaling
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Journaling adds three storage locations to MongoDB.
Contributor:

This is internal implementation information. We should burry it as much as possible.

And I think "locations," is the wrong metaphor "views" is probably enough.


The ``shared view`` stores modified data for upload to the MongoDB
data files. The ``shared view`` is the only location with direct access
to the MongoDB data files. When running with journaling, :program:`mongod`
asks the operating system to map your
existing on-disk data files to the ``shared view`` memory location. The
operating system maps the files but does not load them. MongoDB later
loads data files to ``shared view`` as needed.
Contributor:

c/to shared view/to thisshared view``


The ``private view`` stores data for use in :doc:`read operations
</core/read-operations>`. The ``private view`` is mapped to the ``shared view``
and is the first place MongoDB applies new :doc:`write operations
</core/write-operations>`, meaning read operations get the most up-to-date
data. Keep in mind that because the ``private view`` is a second mapping
of data files, journaling often doubles the amount of virtual memory
:program:`mongod` uses.
Contributor:

No edit to make here but this is an important point which may deserve a callout.
MongoDB with journalling will likely double the amount of VM required

Contributor:

this is the crux of the 32 bit problem.

I worry that calling it out might be unclear and sort of information overload because it's not operationally relevant to force people to contemplate virtual memory space limitations, when it's not a realistic problem, and one that is easily and commonly handled with other administrative solutions?)


The journal is an on-disk location that stores new write operations
after they have been applied to the ``private view`` but before they
have been applied to the data files. The journal provides durability.
If the :program:`mongod` instance were to crash without having applied
the writes to the data files, the journal could replay the writes to
the ``shared view`` for eventual upload to the data files.

.. _journaling-record-write-operation:

How Journaling Records Write Operations
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

As applications perform :doc:`write operations </core/write-operations>`,
Contributor:

nit: users do not perform write operations, users make updates to the database, mongod performs the operations. Note that a delete may cause a write operation to occur, as well as non-user initiated activities. All of these would be journalled AFAIK.

Contributor:

in these cases I think it's fine/advisable to say "applications" rather than users.

MongoDB writes the data to the ``private view`` in RAM, making it
immediately available for :doc:`read operations
</core/read-operations>`.

MongoDB then copies the write operations in batches from the ``private
view`` to the journal, which stores the operations on disk to ensure
durability. When writing to the journal, MongoDB adds a write operation as
an entry on the journal's forward pointer. Each entry on the pointer
describes which bytes the write operation changed in the data files.
The journal also has a behind pointer, discussed later in this
section.
Contributor:

omit or rework parenthicals


MongoDB copies the write operations to the journal in batches
called group commits. By default, MongoDB performs a group commit every
100 milliseconds, which means a series of operations over 100
Contributor:

lead buried. this is probably the only thing that users actually care about.

Contributor:

Yes, I just copy edited @tychoish ’ note.

milliseconds are committed as a single batch. Batching improves
performance.
Contributor:

Is it done to achieve "high performance" or to reduce the window of exposure to catastrophic loss?
Journalling almost by definition is not a high performance activity.

Contributor:

s/high/improve?/


MongoDB next applies the journal's write operations to the ``shared
view``. At this point, the ``shared view`` becomes inconsistent with the data files.

At default intervals of 60 seconds, MongoDB asks the operating system to
flush the ``shared view`` to disk. This brings the data files up-to-date
with the latest write operations.

When write operations are flushed to the data files, MongoDB removes the
write operations from the journal's behind pointer. The behind pointer
always trails the forward pointer.

As part of journaling, MongoDB routinely asks the operating system to
remap the ``shared view`` to the ``private view``, for consistency.

.. note:: The interaction between the ``shared view`` and the on-disk
   data files is similar to how MongoDB works *without*
   journaling: MongoDB asks the operating system to flush
   in-memory changes back to the data files every 60 seconds.

Contributor: is simmilar

What Journaling Stores
~~~~~~~~~~~~~~~~~~~~~~

Journaling stores the raw operations that allow MongoDB to reconstruct:

- document insertions and updates
- index modifications
- metadata changes to collections and databases
Contributor:

journaling stores raw operations that allow MongoDB to reconstruct the following operations: document insertion/updates and index modifications, and changes to the namespace files: we need to find away that describes the data as the user experiences them and also what it actually reflects (it took me a long time to figure out that metadata = ns files.)

journaling stores journals?


.. _journaling-journal-files:

Journal Files
~~~~~~~~~~~~~

With journaling enabled, MongoDB creates a journal directory within
your database directory. The journal directory holds journal files,
which contain write-ahead redo logs. The directory also holds a
last-sequence-number file. A clean shutdown removes all the files in the
journal directory.

Contributor: do we describe the lsn file anywhere (not in current corpus)?

Contributor: elaborate?

Journal files are append-only files and are named with the ``j._``
prefix. When a journal file reaches 1 gigabyte, a new file is created.
Files that are no longer needed are automatically deleted. Unless your
write-bytes-per-second rate is extremely high, the directory should
contain only two or three journal files.

Contributor: unless you write many bytes of data per-second, the journal directory.

To limit the size of journal files to 128 megabytes per file, use the
:option:`--smallfiles <mongod --smallfiles>` command line option when
starting :program:`mongod`.

Contributor: xref won't work

To speed the frequent sequential writes that occur to the current
journal file, you can symbolically link the journal directory to a
dedicated hard drive before starting :program:`mongod`.
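
A minimal sketch of that relocation, with hypothetical ``/tmp`` paths
standing in for your real dbpath and the dedicated drive's mount point
(stop :program:`mongod` before moving the directory):

```shell
# Hypothetical paths; substitute your real dbpath and dedicated-drive mount.
mkdir -p /tmp/journal-disk/journal   # journal directory on the dedicated drive
mkdir -p /tmp/data/db                # stands in for the dbpath
# Point the dbpath's journal directory at the dedicated drive.
ln -sfn /tmp/journal-disk/journal /tmp/data/db/journal
ls -l /tmp/data/db/journal           # shows the symbolic link and its target
```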
Contributor:

I would just say "ensure that the journal directory is on a different system."

Contributor:

Which leads me to ask: "Can I specify where the journal files will be written?"
The answer is presently: no.

Contributor:

it's in it's own directory so you can symlink or use the dir as a mount point for a distinct device, which is our solution for this kind of problem (i.e. directoryperdb)


Contributor: file system/operating system dependent.

In some cases, you might experience a preallocation lag the first time
you start a :program:`mongod` instance with journaling enabled. MongoDB
may determine that it is faster to preallocate journal files than to
create them as needed. This would be the case if it is faster on your
file system to write to files of predefined sizes than to append files.
If MongoDB preallocates the files, you might experience a delay of
several minutes on first startup of :program:`mongod`. You will not be
able to connect to the database until the preallocation completes. This
is a one-time preallocation and does not occur with future invocations.
Check the logs to see if MongoDB is preallocating. The logs will display
the standard "waiting for connections on port" message when complete.
Contributor:

elimite complex conditionals?

potentially move an expanded and simplified version of this to the storage faq?


To avoid this lag, see :ref:`journaling-avoid-preallocation-lag`.

Configuration and Setup
-----------------------

Enable Journaling
~~~~~~~~~~~~~~~~~

Beginning with version 2.0, journaling is enabled by default for 64-bit
platforms.

To enable journaling, start :program:`mongod` with the
:option:`--journal` command line option.

If :program:`mongod` preallocates the journal files, it will delay
listening on port 27017 until the preallocation completes, which may
take a few minutes. This means that your applications and the shell will
not be able to connect to the database immediately on initial startup.
Check the logs to see if MongoDB is busy preallocating.

Contributor: c/it will not start/it will delay

Contributor: it's also not really deciding?

Contributor: c/which can take/which may take
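
A startup sketch; the paths are assumed examples, and journaling is
already the default on 64-bit 2.0+ builds:

```shell
# Explicitly enable journaling (assumed example paths).
mongod --dbpath /data/db --logpath /var/log/mongod.log --journal
# While preallocating, the log reports "preallocating a journal file";
# the instance is ready once "waiting for connections" appears.
```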

Disable Journaling
~~~~~~~~~~~~~~~~~~

Contributor:

need appropriate warning.

Author:

Sam, What warning is needed here?

Contributor:

On Monday, December 17 2012, 15:57:50, Bob Grabar wrote:

+Beginning with version 2.0, journaling is enabled by default for 64-bit
+platforms.
+
+To enable journaling, start :program:mongod with the
+:option:--journal command line option.
+
+If :program:mongod decides to preallocate the files, it will not start
+listening on port 27017 until this process completes, which can take a
+few minutes. This means that your applications and the shell will not be
+able to connect to the database immediately on initial startup. Check
+the logs to see if MongoDB is busy preallocating.
+
+Disable Journaling
+~~~~~~~~~~~~~~~~~~
+

Sam, What warning is needed here?

"Do not disable journaling on production systems. If your MongoDB system
stops unexpectedly, as the result of a system error, power failure, or
other condition and you are not running with journaling; you must
recover from backups or re-sync from an unaffected replica set member."
(Link: recovering from unexpected shutdown.)

Author:

Done.

To disable journaling, start :program:`mongod` with the
:option:`--nojournal <mongod --nojournal>` command line option.

Contributor: OK = not sufficently formal

You can safely disable journaling after running with it: shut down
:program:`mongod` cleanly and restart with
:option:`--nojournal <mongod --nojournal>`.
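
Sketched with an assumed dbpath:

```shell
# After a clean shutdown, restart without journaling.
mongod --dbpath /data/db --nojournal
```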

Get Commit Acknowledgement
~~~~~~~~~~~~~~~~~~~~~~~~~~

You can wait for group commit acknowledgement with the ``getLastError``
command. In versions before 1.9.0, using ``getLastError`` with the
``fsync`` option would do this; in newer versions the ``j`` option has
been created specifically for this purpose.

Contributor: your application, as part of write concern. link to write concern section

Contributor: terminology q: "Get Commit Acknowledgement" vs "group commit acknowledgement"?

In version 1.9.2+ the group commit delay is shortened when a commit
acknowledgement (``getLastError`` + ``j``) is pending; this can be as
little as 1/3 of the normal group commit interval.
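
From the :program:`mongo` shell, the pattern looks like this sketch (the
collection name is hypothetical, and a running 1.9.2+ server is assumed):

```javascript
// Perform a write, then block until it is committed to the journal.
db.things.insert({ x: 1 });
db.runCommand({ getLastError: 1, j: true });
```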
Contributor:

references to 1.9 should be references to 2.0, but even those may not be neccessary.

use write concern language.


.. _journaling-avoid-preallocation-lag:

Avoid Preallocation Lag
~~~~~~~~~~~~~~~~~~~~~~~

To avoid preallocation lag, you can preallocate files in the journal
directory by copying them from another instance of :program:`mongod`.
(For details on preallocation lag, see :ref:`journaling-journal-files`.)

.. example:: The following sequence of commands preallocates journal
   files for an instance of :program:`mongod` running on port ``27017``
   with a database path of ``/data/db``.

   .. code-block:: sh

      $ mkdir ~/tmpDbpath
      $ mongod --port 10000 --dbpath ~/tmpDbpath --journal
      # startup messages
      # .
      # .
      # .
      # wait for prealloc to finish
      Thu Mar 17 10:02:52 [initandlisten] preallocating a journal file
      ~/tmpDbpath/journal/prealloc.0
      Thu Mar 17 10:03:03 [initandlisten] preallocating a journal file
      ~/tmpDbpath/journal/prealloc.1
      Thu Mar 17 10:03:14 [initandlisten] preallocating a journal file
      ~/tmpDbpath/journal/prealloc.2
      Thu Mar 17 10:03:25 [initandlisten] flushing directory
      ~/tmpDbpath/journal
      Thu Mar 17 10:03:25 [initandlisten] flushing directory
      ~/tmpDbpath/journal
      Thu Mar 17 10:03:25 [initandlisten] waiting for connections on port
      10000
      Thu Mar 17 10:03:25 [websvr] web admin interface listening on port 11000
      # then Ctrl-C to kill this instance
      ^C
      $ mv ~/tmpDbpath/journal /data/db/
      $ # restart mongod on port 27017 with --journal
Contributor:

we should break this example in to sections and annotate what's happening?

issue the following commands to create a directory start mongod.

foo

Wait until you see a the following content in the log, and then use C-c to stop this instance:

bar

Move the directory:

baz

Celebrate!


Preallocated files do not contain data, and it is safe to remove them.
However, if you restart :program:`mongod` with journaling,
:program:`mongod` will create them again.

Change the Group Commit Interval
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Beginning with version 1.9.2, you can set the group commit interval
using the :option:`--journalCommitInterval <mongod --journalCommitInterval>`
command line option. The allowed range is ``2`` to ``300`` milliseconds.

Contributor: line breaks in between prgram (i.e. mongod) and option (i.e. --journalCommitInterval)

Contributor: explain tradeoffs?
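
For example (assumed dbpath); a lower interval narrows the window of
possible loss at some cost in write throughput:

```shell
# Commit to the journal every 50 milliseconds instead of the default 100.
mongod --dbpath /data/db --journal --journalCommitInterval 50
```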

Monitor Journal Status
~~~~~~~~~~~~~~~~~~~~~~

The ``serverStatus`` command includes statistics regarding journaling.

You can use the ``journalLatencyTest`` command to measure how long it
takes on your volume to write to the disk, including fsyncing the data,
in an append-only fashion.

.. code-block:: javascript

   > use admin
   > db.runCommand("journalLatencyTest")

You can run this command on an idle system to get a baseline sync time
for journaling. It is also safe to run on a busy system, where the sync
time may be higher if the journal directory is on the same volume as the
data files.
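
A shell sketch for both checks; the ``dur`` field name in
``serverStatus`` output is stated as an assumption from 2.0-era builds:

```javascript
// Journaling ("durability") statistics from serverStatus.
db.serverStatus().dur;
// Measure append-only write + fsync latency on the journal volume.
db.getSiblingDB("admin").runCommand("journalLatencyTest");
```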

In version 1.9.2+ you can set the group commit interval, using the
--journalCommitInterval command-line option, to between 2 and 300
milliseconds (default is 100ms). The actual interval is the maximum
of this setting and your disk latency as measured above.

``journalLatencyTest`` is also a good way to check if your disk drive is
buffering writes in its local cache. If the number is very low (e.g.,
less than 2ms) and the drive is non-SSD, the drive is probably buffering
writes. In that case, enable cache write-through for the device in your
operating system, unless you have a disk controller card with
battery-backed RAM, in which case such buffering is safe.
Contributor:

cross referencing and general cleanup,


Command-line Options
--------------------

- `--master`: The :term:`master` mode.

- :option:`--oplogSize`: This takes an argument and specifies the size
limit in MB for the oplog.

- `--slave`: The :term:`slave` mode.

- `--source`: This takes an argument and specifies the master as
<server:port>.

- `--only`: This takes an argument and specifies a single database to
replicate.
Contributor:

remove this section? this is about master/slave?


Recovery
--------

On a restart after a crash, journal files in the journal directory are
replayed before the server goes online. This is indicated in the log
output. You do not need to run a repair.

With journaling, if you want a dataset to reside entirely in RAM, you
need twice as much RAM as the dataset size, to be able to store both the
``shared view`` and the ``private view``.

Recommendations
~~~~~~~~~~~~~~~

Set (or at least check for) a low read-ahead value for the data disks,
say 40 blocks, and 0 for non-spinning disks.
Contributor:

omit.


Use a separate disk for the journal entries, with a slightly higher
read-ahead, say 100 blocks.
Contributor:

omit read ahead.


- Writes are always at the end of the journal.
- Deletes are always at the beginning of the journal.
- Include checking the read-ahead values in onboarding interviews.
- Set the read-ahead values in the templates we distribute.
- Be aware of the issue for sudden performance-breakdown tickets.
- Beware of resident-memory estimates when diagnosing RAM usage.
61 changes: 61 additions & 0 deletions source/faq/journaling.txt
@@ -0,0 +1,61 @@
===============
FAQ: Journaling
===============

.. default-domain:: mongodb

This document addresses common questions regarding MongoDB journaling.
Contributor:

link to journlaing document


If you don't find the answer you're looking for, check
the :doc:`complete list of FAQs </faq>` or post your question to the
`MongoDB User Mailing List <https://groups.google.com/forum/?fromgroups#!forum/mongodb-user>`_.
Contributor:

we call it the user group?

Contributor:

And the canonical link is surprisingly https://groups.google.com/group/mongodb-user.

@tychoish can we make the link a macro somewhere?

Contributor:

yes. but it's hacky:

the solution is to add something to the rst epiolgue variable (which is a string in the conf.) make a jira?


.. contents:: Frequently Asked Questions:
   :backlinks: none
   :local:

If I am using replication, can some members use journaling and others not?
--------------------------------------------------------------------------

Yes. You can use journaling on some replica set members and not others.

Can I use the journaling feature to perform safe hot backups?
-------------------------------------------------------------

Yes, see Backups with Journaling Enabled.

32 bit nuances?
---------------

Contributor: This isn't a question frequent or otherwise?

There is extra memory-mapped file activity with journaling, which
further constrains the limited database size of 32-bit builds. Thus, for
now, journaling is disabled by default on 32-bit systems.

When did the --journal option change from --dur?
------------------------------------------------

In 1.8 the option was renamed to --journal, but the old name is still
accepted for backwards compatibility; please change to --journal if you
are using the old option.

Will the journal replay have problems if entries are incomplete (like the failure happened in the middle of one)?
-----------------------------------------------------------------------------------------------------------------

Contributor: abhor parenthesis

Each journal (group) write is consistent and won't be replayed during
recovery unless it is complete.

How many times is data written to disk when replication and journaling are both on?
-----------------------------------------------------------------------------------

Contributor: we need to restate the question in the answer to make it a bit more clear?

In v1.8, for an insert, four times: the object is written to the main
collection and also to the oplog collection (that is twice). Both of
those writes are journaled as a single mini-transaction in the journal
file (the files in /data/db/journal), for four writes in total.

There is an open item to reduce this by having the journal be
compressed, which would reduce the total from 4x to probably ~2.5x.
Contributor:

kill aspirational documentation

Contributor:

(which is to say, don't document things that don't exist yet.)

Contributor:

there's a funny story about the term "aspirational documentation."


The above applies to collection data and inserts, which is the
worst-case scenario. Index updates are written to the index and the
journal, but not the oplog, so they should be 2x today, not 4x.
Likewise, updates with operators like $set, $addToSet, and $inc are
compactly logged all around, so those are generally small.

Contributor: cross reference?