
Replace KB unit with KiB #4293


Merged (1 commit, Nov 8, 2017)
2 changes: 1 addition & 1 deletion Doc/c-api/memory.rst
@@ -450,7 +450,7 @@ The pymalloc allocator

Python has a *pymalloc* allocator optimized for small objects (smaller or equal
to 512 bytes) with a short lifetime. It uses memory mappings called "arenas"
-with a fixed size of 256 KB. It falls back to :c:func:`PyMem_RawMalloc` and
+with a fixed size of 256 KiB. It falls back to :c:func:`PyMem_RawMalloc` and
:c:func:`PyMem_RawRealloc` for allocations larger than 512 bytes.

*pymalloc* is the default allocator of the :c:data:`PYMEM_DOMAIN_MEM` (ex:
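As an aside (not part of the diff), a quick Python sketch of the distinction this PR standardizes on — "KB" is ambiguous, the IEC binary units are not:

```python
# 1 KiB = 1024 bytes (IEC binary unit); SI's 1 kB = 1000 bytes.
KiB = 1024
MiB = 1024 ** 2

ARENA_KIB = 256
assert ARENA_KIB * KiB == 262_144    # what "256 KiB" means in this hunk
assert ARENA_KIB * 1000 == 256_000   # what a strict SI reading of "256 kB" gives
```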
2 changes: 1 addition & 1 deletion Doc/library/hashlib.rst
@@ -267,7 +267,7 @@ include a `salt <https://en.wikipedia.org/wiki/Salt_%28cryptography%29>`_.
should be about 16 or more bytes from a proper source, e.g. :func:`os.urandom`.

*n* is the CPU/Memory cost factor, *r* the block size, *p* parallelization
-factor and *maxmem* limits memory (OpenSSL 1.1.0 defaults to 32 MB).
+factor and *maxmem* limits memory (OpenSSL 1.1.0 defaults to 32 MiB).
*dklen* is the length of the derived key.

Availability: OpenSSL 1.1+
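A sketch of the parameters the hunk describes (available only when Python is linked against OpenSSL 1.1+; the values are illustrative, not security guidance):

```python
import hashlib

# n is the CPU/memory cost, r the block size, p the parallelization
# factor; scrypt needs roughly 128 * r * n bytes of memory, so these
# small illustrative values fit well under the 32 MiB maxmem used here
# (the OpenSSL 1.1.0 default mentioned in the doc).
key = hashlib.scrypt(b"correct horse", salt=b"0123456789abcdef",
                     n=1024, r=8, p=1,
                     maxmem=32 * 1024 * 1024, dklen=32)
assert len(key) == 32
```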
2 changes: 1 addition & 1 deletion Doc/library/locale.rst
@@ -373,7 +373,7 @@ The :mod:`locale` module defines the following exception and functions:

Please note that this function works like :meth:`format_string` but will
only work for exactly one ``%char`` specifier. For example, ``'%f'`` and
-``'%.0f'`` are both valid specifiers, but ``'%f kB'`` is not.
+``'%.0f'`` are both valid specifiers, but ``'%f KiB'`` is not.

For whole format strings, use :func:`format_string`.
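A minimal sketch of the distinction (not part of the diff): the whole-format-string variant happily accepts literal text like ``KiB``:

```python
import locale

# Use the portable "C" locale so the output is predictable.
locale.setlocale(locale.LC_ALL, "C")

# format_string() takes a whole format string, literal text included:
assert locale.format_string("%.0f KiB", 256.0) == "256 KiB"
```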

4 changes: 2 additions & 2 deletions Doc/library/multiprocessing.rst
@@ -1034,7 +1034,7 @@ Connection objects are usually created using :func:`Pipe` -- see also
Send an object to the other end of the connection which should be read
using :meth:`recv`.

-The object must be picklable. Very large pickles (approximately 32 MB+,
+The object must be picklable. Very large pickles (approximately 32 MiB+,
though it depends on the OS) may raise a :exc:`ValueError` exception.

.. method:: recv()
@@ -1071,7 +1071,7 @@ Connection objects are usually created using :func:`Pipe` -- see also

If *offset* is given then data is read from that position in *buffer*. If
*size* is given then that many bytes will be read from buffer. Very large
-buffers (approximately 32 MB+, though it depends on the OS) may raise a
+buffers (approximately 32 MiB+, though it depends on the OS) may raise a
:exc:`ValueError` exception
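For context, a minimal sketch of the :func:`Pipe` API these hunks document — small payloads pass freely; only very large ones hit the OS-dependent ~32 MiB limit:

```python
from multiprocessing import Pipe

# Both ends of a duplex pipe in one process; any small picklable
# object round-trips through send()/recv().
parent, child = Pipe()
parent.send({"answer": 42})
assert child.recv() == {"answer": 42}
parent.close()
child.close()
```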

.. method:: recv_bytes([maxlength])
18 changes: 9 additions & 9 deletions Doc/tools/templates/download.html
@@ -18,23 +18,23 @@ <h1>Download Python {{ release }} Documentation</h1>
<table class="docutils">
<tr><th>Format</th><th>Packed as .zip</th><th>Packed as .tar.bz2</th></tr>
<tr><td>PDF (US-Letter paper size)</td>
-<td><a href="{{ dlbase }}/python-{{ release }}-docs-pdf-letter.zip">Download</a> (ca. 13 MB)</td>
-<td><a href="{{ dlbase }}/python-{{ release }}-docs-pdf-letter.tar.bz2">Download</a> (ca. 13 MB)</td>
+<td><a href="{{ dlbase }}/python-{{ release }}-docs-pdf-letter.zip">Download</a> (ca. 13 MiB)</td>
+<td><a href="{{ dlbase }}/python-{{ release }}-docs-pdf-letter.tar.bz2">Download</a> (ca. 13 MiB)</td>
</tr>
<tr><td>PDF (A4 paper size)</td>
-<td><a href="{{ dlbase }}/python-{{ release }}-docs-pdf-a4.zip">Download</a> (ca. 13 MB)</td>
-<td><a href="{{ dlbase }}/python-{{ release }}-docs-pdf-a4.tar.bz2">Download</a> (ca. 13 MB)</td>
+<td><a href="{{ dlbase }}/python-{{ release }}-docs-pdf-a4.zip">Download</a> (ca. 13 MiB)</td>
+<td><a href="{{ dlbase }}/python-{{ release }}-docs-pdf-a4.tar.bz2">Download</a> (ca. 13 MiB)</td>
</tr>
<tr><td>HTML</td>
-<td><a href="{{ dlbase }}/python-{{ release }}-docs-html.zip">Download</a> (ca. 9 MB)</td>
-<td><a href="{{ dlbase }}/python-{{ release }}-docs-html.tar.bz2">Download</a> (ca. 6 MB)</td>
+<td><a href="{{ dlbase }}/python-{{ release }}-docs-html.zip">Download</a> (ca. 9 MiB)</td>
+<td><a href="{{ dlbase }}/python-{{ release }}-docs-html.tar.bz2">Download</a> (ca. 6 MiB)</td>
</tr>
<tr><td>Plain Text</td>
-<td><a href="{{ dlbase }}/python-{{ release }}-docs-text.zip">Download</a> (ca. 3 MB)</td>
-<td><a href="{{ dlbase }}/python-{{ release }}-docs-text.tar.bz2">Download</a> (ca. 2 MB)</td>
+<td><a href="{{ dlbase }}/python-{{ release }}-docs-text.zip">Download</a> (ca. 3 MiB)</td>
+<td><a href="{{ dlbase }}/python-{{ release }}-docs-text.tar.bz2">Download</a> (ca. 2 MiB)</td>
</tr>
<tr><td>EPUB</td>
-<td><a href="{{ dlbase }}/python-{{ release }}-docs.epub">Download</a> (ca. 5.5 MB)</td>
+<td><a href="{{ dlbase }}/python-{{ release }}-docs.epub">Download</a> (ca. 5 MiB)</td>
<td></td>
</tr>
</table>
6 changes: 3 additions & 3 deletions Include/internal/pymalloc.h
@@ -160,7 +160,7 @@
*/
#ifdef WITH_MEMORY_LIMITS
#ifndef SMALL_MEMORY_LIMIT
-#define SMALL_MEMORY_LIMIT (64 * 1024 * 1024) /* 64 MB -- more? */
+#define SMALL_MEMORY_LIMIT (64 * 1024 * 1024) /* 64 MiB -- more? */
#endif
#endif

@@ -177,7 +177,7 @@
* Arenas are allocated with mmap() on systems supporting anonymous memory
* mappings to reduce heap fragmentation.
*/
-#define ARENA_SIZE (256 << 10) /* 256KB */
+#define ARENA_SIZE (256 << 10) /* 256 KiB */

#ifdef WITH_MEMORY_LIMITS
#define MAX_ARENAS (SMALL_MEMORY_LIMIT / ARENA_SIZE)
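The arithmetic behind these macros, sketched in Python for readers (not part of the diff):

```python
KiB = 1024
MiB = 1024 * 1024

SMALL_MEMORY_LIMIT = 64 * MiB        # the WITH_MEMORY_LIMITS cap above
ARENA_SIZE = 256 << 10               # shifting by 10 bits multiplies by 1024

assert ARENA_SIZE == 256 * KiB == 262_144
assert SMALL_MEMORY_LIMIT // ARENA_SIZE == 256   # the resulting MAX_ARENAS
```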
@@ -435,7 +435,7 @@ currently in use isn't on either list.
*/

/* How many arena_objects do we initially allocate?
-* 16 = can allocate 16 arenas = 16 * ARENA_SIZE = 4MB before growing the
+* 16 = can allocate 16 arenas = 16 * ARENA_SIZE = 4 MiB before growing the
* `arenas` vector.
*/
#define INITIAL_ARENA_OBJECTS 16
4 changes: 2 additions & 2 deletions Lib/distutils/cygwinccompiler.py
@@ -234,8 +234,8 @@ def link(self, target_desc, objects, output_filename, output_dir=None,
# who wants symbols and a many times larger output file
# should explicitly switch the debug mode on
# otherwise we let dllwrap/ld strip the output file
-# (On my machine: 10KB < stripped_file < ??100KB
-#   unstripped_file = stripped_file + XXX KB
+# (On my machine: 10KiB < stripped_file < ??100KiB
[Review comment, Member]: I'm not sure the difference between KB and KiB is important here.
+# unstripped_file = stripped_file + XXX KiB
# ( XXX=254 for a typical python extension))
if not debug:
extra_preargs.append("-s")
2 changes: 1 addition & 1 deletion Lib/gzip.py
@@ -308,7 +308,7 @@ def close(self):
if self.mode == WRITE:
fileobj.write(self.compress.flush())
write32u(fileobj, self.crc)
-# self.size may exceed 2GB, or even 4GB
+# self.size may exceed 2 GiB, or even 4 GiB
write32u(fileobj, self.size & 0xffffffff)
elif self.mode == READ:
self._buffer.close()
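Why the mask matters, sketched outside the diff: RFC 1952 stores the uncompressed size modulo 2**32 in the gzip trailer, so streams over 4 GiB must be truncated to 32 bits.

```python
# A hypothetical stream just over 4 GiB: only the low 32 bits survive
# in the gzip trailer, exactly as "self.size & 0xffffffff" computes.
size = 4 * 1024 ** 3 + 123
assert size & 0xffffffff == 123
```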
2 changes: 1 addition & 1 deletion Lib/test/_test_multiprocessing.py
@@ -4221,7 +4221,7 @@ def handler(signum, frame):
conn.send('ready')
x = conn.recv()
conn.send(x)
-conn.send_bytes(b'x'*(1024*1024)) # sending 1 MB should block
+conn.send_bytes(b'x' * (1024 * 1024)) # sending 1 MiB should block

@unittest.skipUnless(hasattr(signal, 'SIGUSR1'), 'requires SIGUSR1')
def test_ignore(self):
2 changes: 1 addition & 1 deletion Lib/test/libregrtest/cmdline.py
@@ -96,7 +96,7 @@

largefile - It is okay to run some test that may create huge
files. These tests can take a long time and may
-consume >2GB of disk space temporarily.
+consume >2 GiB of disk space temporarily.

network - It is okay to run tests that use external network
resource, e.g. testing SSL support for sockets.
4 changes: 2 additions & 2 deletions Lib/test/pickletester.py
@@ -2276,7 +2276,7 @@ def f():

class BigmemPickleTests(unittest.TestCase):

-# Binary protocols can serialize longs of up to 2GB-1
+# Binary protocols can serialize longs of up to 2 GiB-1

@bigmemtest(size=_2G, memuse=3.6, dry_run=False)
def test_huge_long_32b(self, size):
@@ -2291,7 +2291,7 @@ def test_huge_long_32b(self, size):
finally:
data = None

-# Protocol 3 can serialize up to 4GB-1 as a bytes object
+# Protocol 3 can serialize up to 4 GiB-1 as a bytes object
# (older protocols don't have a dedicated opcode for bytes and are
# too inefficient)

2 changes: 1 addition & 1 deletion Lib/test/test_bigaddrspace.py
@@ -3,7 +3,7 @@
than what the address space allows are properly met with an OverflowError
(rather than crash weirdly).

-Primarily, this means 32-bit builds with at least 2 GB of available memory.
+Primarily, this means 32-bit builds with at least 2 GiB of available memory.
You need to pass the -M option to regrtest (e.g. "-M 2.1G") for tests to
be enabled.
"""
2 changes: 1 addition & 1 deletion Lib/test/test_bz2.py
@@ -62,7 +62,7 @@ class BaseTest(unittest.TestCase):
BAD_DATA = b'this is not a valid bzip2 file'

# Some tests need more than one block of uncompressed data. Since one block
-# is at least 100 kB, we gather some data dynamically and compress it.
+# is at least 100,000 bytes, we gather some data dynamically and compress it.
# Note that this assumes that compression works correctly, so we cannot
# simply use the bigger test data for all tests.
test_size = 0
10 changes: 5 additions & 5 deletions Lib/test/test_io.py
@@ -564,7 +564,7 @@ def test_raw_bytes_io(self):

def test_large_file_ops(self):
# On Windows and Mac OSX this test comsumes large resources; It takes
-# a long time to build the >2GB file and takes >2GB of disk space
+# a long time to build the >2 GiB file and takes >2 GiB of disk space
# therefore the resource must be enabled to run this test.
if sys.platform[:3] == 'win' or sys.platform == 'darwin':
support.requires(
@@ -736,7 +736,7 @@ def test_unbounded_file(self):
if sys.maxsize > 0x7FFFFFFF:
self.skipTest("test can only run in a 32-bit address space")
if support.real_max_memuse < support._2G:
-self.skipTest("test requires at least 2GB of memory")
+self.skipTest("test requires at least 2 GiB of memory")
with self.open(zero, "rb", buffering=0) as f:
self.assertRaises(OverflowError, f.read)
with self.open(zero, "rb") as f:
@@ -1421,7 +1421,7 @@ class CBufferedReaderTest(BufferedReaderTest, SizeofTest):
def test_constructor(self):
BufferedReaderTest.test_constructor(self)
# The allocation can succeed on 32-bit builds, e.g. with more
-# than 2GB RAM and a 64-bit kernel.
+# than 2 GiB RAM and a 64-bit kernel.
if sys.maxsize > 0x7FFFFFFF:
rawio = self.MockRawIO()
bufio = self.tp(rawio)
@@ -1733,7 +1733,7 @@ class CBufferedWriterTest(BufferedWriterTest, SizeofTest):
def test_constructor(self):
BufferedWriterTest.test_constructor(self)
# The allocation can succeed on 32-bit builds, e.g. with more
-# than 2GB RAM and a 64-bit kernel.
+# than 2 GiB RAM and a 64-bit kernel.
if sys.maxsize > 0x7FFFFFFF:
rawio = self.MockRawIO()
bufio = self.tp(rawio)
@@ -2206,7 +2206,7 @@ class CBufferedRandomTest(BufferedRandomTest, SizeofTest):
def test_constructor(self):
BufferedRandomTest.test_constructor(self)
# The allocation can succeed on 32-bit builds, e.g. with more
-# than 2GB RAM and a 64-bit kernel.
+# than 2 GiB RAM and a 64-bit kernel.
if sys.maxsize > 0x7FFFFFFF:
rawio = self.MockRawIO()
bufio = self.tp(rawio)
6 changes: 3 additions & 3 deletions Lib/test/test_largefile.py
@@ -9,12 +9,12 @@
import io # C implementation of io
import _pyio as pyio # Python implementation of io

-# size of file to create (>2GB; 2GB == 2147483648 bytes)
+# size of file to create (>2 GiB; 2 GiB == 2,147,483,648 bytes)
size = 2500000000

class LargeFileTest:
"""Test that each file function works as expected for large
-(i.e. > 2GB) files.
+(i.e. > 2 GiB) files.
"""

def setUp(self):
@@ -142,7 +142,7 @@ def setUpModule():
pass

# On Windows and Mac OSX this test comsumes large resources; It
-# takes a long time to build the >2GB file and takes >2GB of disk
+# takes a long time to build the >2 GiB file and takes >2 GiB of disk
# space therefore the resource must be enabled to run this test.
# If not, nothing after this line stanza will be executed.
if sys.platform[:3] == 'win' or sys.platform == 'darwin':
2 changes: 1 addition & 1 deletion Lib/test/test_mmap.py
@@ -777,7 +777,7 @@ def test_large_filesize(self):
with mmap.mmap(f.fileno(), 0x10000, access=mmap.ACCESS_READ) as m:
self.assertEqual(m.size(), 0x180000000)

-# Issue 11277: mmap() with large (~4GB) sparse files crashes on OS X.
+# Issue 11277: mmap() with large (~4 GiB) sparse files crashes on OS X.

def _test_around_boundary(self, boundary):
tail = b' DEARdear '
4 changes: 2 additions & 2 deletions Lib/test/test_os.py
@@ -171,7 +171,7 @@ def test_large_read(self, size):
with open(support.TESTFN, "rb") as fp:
data = os.read(fp.fileno(), size)

-# The test does not try to read more than 2 GB at once because the
+# The test does not try to read more than 2 GiB at once because the
# operating system is free to return less bytes than requested.
self.assertEqual(data, b'test')

@@ -2573,7 +2573,7 @@ def handle_error(self):
@unittest.skipUnless(hasattr(os, 'sendfile'), "test needs os.sendfile()")
class TestSendfile(unittest.TestCase):

-DATA = b"12345abcde" * 16 * 1024 # 160 KB
+DATA = b"12345abcde" * 16 * 1024 # 160 KiB
SUPPORT_HEADERS_TRAILERS = not sys.platform.startswith("linux") and \
not sys.platform.startswith("solaris") and \
not sys.platform.startswith("sunos")
2 changes: 1 addition & 1 deletion Lib/test/test_socket.py
@@ -5299,7 +5299,7 @@ class SendfileUsingSendTest(ThreadedTCPSocketTest):
Test the send() implementation of socket.sendfile().
"""

-FILESIZE = (10 * 1024 * 1024) # 10MB
+FILESIZE = (10 * 1024 * 1024) # 10 MiB
BUFSIZE = 8192
FILEDATA = b""
TIMEOUT = 2
4 changes: 2 additions & 2 deletions Lib/test/test_tarfile.py
@@ -779,12 +779,12 @@ class Bz2DetectReadTest(Bz2Test, DetectReadTest):
def test_detect_stream_bz2(self):
# Originally, tarfile's stream detection looked for the string
# "BZh91" at the start of the file. This is incorrect because
-# the '9' represents the blocksize (900kB). If the file was
+# the '9' represents the blocksize (900,000 bytes). If the file was
# compressed using another blocksize autodetection fails.
with open(tarname, "rb") as fobj:
data = fobj.read()

-# Compress with blocksize 100kB, the file starts with "BZh11".
+# Compress with blocksize 100,000 bytes, the file starts with "BZh11".
with bz2.BZ2File(tmpname, "wb", compresslevel=1) as fobj:
fobj.write(data)
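A small sketch (not part of the diff) of the header behavior the test relies on — the digit after the ``BZh`` magic is the compresslevel, each level using level × 100,000 bytes per block:

```python
import bz2

# compresslevel=1 produces a "BZh1..." stream, compresslevel=9 "BZh9...",
# which is why detecting only "BZh91" was incorrect.
assert bz2.compress(b"data", compresslevel=1)[:4] == b"BZh1"
assert bz2.compress(b"data", compresslevel=9)[:4] == b"BZh9"
```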

8 changes: 4 additions & 4 deletions Lib/test/test_threading.py
@@ -132,10 +132,10 @@ def f():
# Kill the "immortal" _DummyThread
del threading._active[ident[0]]

-# run with a small(ish) thread stack size (256kB)
+# run with a small(ish) thread stack size (256 KiB)
def test_various_ops_small_stack(self):
if verbose:
-print('with 256kB thread stack size...')
+print('with 256 KiB thread stack size...')
try:
threading.stack_size(262144)
except _thread.error:
@@ -144,10 +144,10 @@ def test_various_ops_small_stack(self):
self.test_various_ops()
threading.stack_size(0)

-# run with a large thread stack size (1MB)
+# run with a large thread stack size (1 MiB)
def test_various_ops_large_stack(self):
if verbose:
-print('with 1MB thread stack size...')
+print('with 1 MiB thread stack size...')
try:
threading.stack_size(0x100000)
except _thread.error:
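A sketch of the :func:`threading.stack_size` calls these tests make (not part of the diff; some platforms reject particular sizes with a :exc:`ThreadError`, hence the ``try`` blocks above):

```python
import threading

# stack_size(n) applies to threads created afterwards and returns the
# previous value; 0 restores the platform default.
threading.stack_size(256 * 1024)            # 256 KiB, as in the small-stack test
assert threading.stack_size() == 256 * 1024
threading.stack_size(0)                     # back to the default
```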
2 changes: 1 addition & 1 deletion Lib/test/test_zipfile64.py
@@ -39,7 +39,7 @@ def zipTest(self, f, compression):
# Create the ZIP archive.
zipfp = zipfile.ZipFile(f, "w", compression)

-# It will contain enough copies of self.data to reach about 6GB of
+# It will contain enough copies of self.data to reach about 6 GiB of
# raw data to store.
filecount = 6*1024**3 // len(self.data)

4 changes: 2 additions & 2 deletions Lib/test/test_zlib.py
@@ -72,7 +72,7 @@ def test_same_as_binascii_crc32(self):
self.assertEqual(binascii.crc32(b'spam'), zlib.crc32(b'spam'))


-# Issue #10276 - check that inputs >=4GB are handled correctly.
+# Issue #10276 - check that inputs >=4 GiB are handled correctly.
class ChecksumBigBufferTestCase(unittest.TestCase):

@bigmemtest(size=_4G + 4, memuse=1, dry_run=False)
@@ -130,7 +130,7 @@ def test_overflow(self):
class BaseCompressTestCase(object):
def check_big_compress_buffer(self, size, compress_func):
_1M = 1024 * 1024
-# Generate 10MB worth of random, and expand it by repeating it.
+# Generate 10 MiB worth of random, and expand it by repeating it.
# The assumption is that zlib's memory is not big enough to exploit
# such spread out redundancy.
data = b''.join([random.getrandbits(8 * _1M).to_bytes(_1M, 'little')
2 changes: 1 addition & 1 deletion Lib/xmlrpc/client.py
@@ -1046,7 +1046,7 @@ def gzip_encode(data):
# in the HTTP header, as described in RFC 1952
#
# @param data The encoded data
-# @keyparam max_decode Maximum bytes to decode (20MB default), use negative
+# @keyparam max_decode Maximum bytes to decode (20 MiB default), use negative
# values for unlimited decoding
# @return the unencoded data
# @raises ValueError if data is not correctly coded.
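A usage sketch of the decoder this docstring describes (not part of the diff):

```python
import gzip
from xmlrpc.client import gzip_decode

payload = gzip.compress(b"hello " * 10)

# Decoding with the default ~20 MiB cap, and with the cap disabled:
assert gzip_decode(payload) == b"hello " * 10
assert gzip_decode(payload, max_decode=-1) == b"hello " * 10
```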
4 changes: 2 additions & 2 deletions Misc/NEWS.d/3.5.0a1.rst
@@ -3035,7 +3035,7 @@ by Phil Elson.
.. section: Library

os.read() now uses a :c:func:`Py_ssize_t` type instead of :c:type:`int` for
-the size to support reading more than 2 GB at once. On Windows, the size is
+the size to support reading more than 2 GiB at once. On Windows, the size is
truncted to INT_MAX. As any call to os.read(), the OS may read less bytes
than the number of requested bytes.

@@ -3144,7 +3144,7 @@ by Pablo Torres Navarrete and SilentGhost.
.. nonce: u_oiv9
.. section: Library

-ssl.RAND_add() now supports strings longer than 2 GB.
+ssl.RAND_add() now supports strings longer than 2 GiB.

..
