diff --git a/doc/jemalloc.xml.in b/doc/jemalloc.xml.in
index 8111fc1d..fc01ad1b 100644
--- a/doc/jemalloc.xml.in
+++ b/doc/jemalloc.xml.in
@@ -501,13 +501,11 @@ for (i = 0; i < nbins; i++) {
possible to find metadata for user objects very quickly.
User objects are broken into three categories according to size:
- small, large, and huge. Small objects are smaller than one page. Large
- objects are smaller than the chunk size. Huge objects are a multiple of
- the chunk size. Small and large objects are managed entirely by arenas;
- huge objects are additionally aggregated in a single data structure that is
- shared by all threads. Huge objects are typically used by applications
- infrequently enough that this single data structure is not a scalability
- issue.
+ small, large, and huge. Small and large objects are managed entirely by
+ arenas; huge objects are additionally aggregated in a single data structure
+ that is shared by all threads. Huge objects are typically used by
+ applications infrequently enough that this single data structure is not a
+ scalability issue.
Each chunk that is managed by an arena tracks its contents as runs of
contiguous pages (unused, backing a set of small objects, or backing one
@@ -516,18 +514,18 @@ for (i = 0; i < nbins; i++) {
allocations in constant time.
Small objects are managed in groups by page runs. Each run maintains
- a frontier and free list to track which regions are in use. Allocation
- requests that are no more than half the quantum (8 or 16, depending on
- architecture) are rounded up to the nearest power of two that is at least
- sizeof(double). All other small
- object size classes are multiples of the quantum, spaced such that internal
- fragmentation is limited to approximately 25% for all but the smallest size
- classes. Allocation requests that are larger than the maximum small size
- class, but small enough to fit in an arena-managed chunk (see the opt.lg_chunk option), are
- rounded up to the nearest run size. Allocation requests that are too large
- to fit in an arena-managed chunk are rounded up to the nearest multiple of
- the chunk size.
+ a bitmap to track which regions are in use. Allocation requests that are no
+ more than half the quantum (8 or 16, depending on architecture) are rounded
+ up to the nearest power of two that is at least sizeof(double). All other object size
+ classes are multiples of the quantum, spaced such that there are four size
+ classes for each doubling in size, which limits internal fragmentation to
+ approximately 20% for all but the smallest size classes. Small size classes
+ are smaller than four times the page size, large size classes are smaller
+ than the chunk size (see the opt.lg_chunk option), and
+ huge size classes extend from the chunk size up to one size class less than
+ the full address space size.
Allocations are packed tightly together, which can be an issue for
multi-threaded applications. If you need to assure that allocations do not
@@ -554,13 +552,13 @@ for (i = 0; i < nbins; i++) {
- Small
- [16, 32, 48, ..., 128]
+ Small
+ lg
+ [8]
+
+
+ 16
+ [16, 32, 48, 64, 80, 96, 112, 128]
+
+
+ 32
@@ -580,17 +578,77 @@ for (i = 0; i < nbins; i++) {
512
- [2560, 3072, 3584]
+ [2560, 3072, 3584, 4096]
+
+
+ 1 KiB
+ [5 KiB, 6 KiB, 7 KiB, 8 KiB]
+
+
+ 2 KiB
+ [10 KiB, 12 KiB, 14 KiB]
+
+
+ Large
+ 2 KiB
+ [16 KiB]
- Large
- 4 KiB
- [4 KiB, 8 KiB, 12 KiB, ..., 4072 KiB]
+ 4 KiB
+ [20 KiB, 24 KiB, 28 KiB, 32 KiB]
+
+
+ 8 KiB
+ [40 KiB, 48 KiB, 56 KiB, 64 KiB]
+
+
+ 16 KiB
+ [80 KiB, 96 KiB, 112 KiB, 128 KiB]
+
+
+ 32 KiB
+ [160 KiB, 192 KiB, 224 KiB, 256 KiB]
+
+
+ 64 KiB
+ [320 KiB, 384 KiB, 448 KiB, 512 KiB]
+
+
+ 128 KiB
+ [640 KiB, 768 KiB, 896 KiB, 1024 KiB]
+
+
+ 256 KiB
+ [1280 KiB, 1536 KiB, 1792 KiB, 2048 KiB]
+
+
+ 512 KiB
+ [2560 KiB, 3072 KiB, 3584 KiB]
+
+
+ Huge
+ 512 KiB
+ [4 MiB]
+
+
+ 1 MiB
+ [5 MiB, 6 MiB, 7 MiB, 8 MiB]
+
+
+ 2 MiB
+ [10 MiB, 12 MiB, 14 MiB, 16 MiB]
- Huge
- 4 MiB
- [4 MiB, 8 MiB, 12 MiB, ...]
+ 4 MiB
+ [20 MiB, 24 MiB, 28 MiB, 32 MiB]
+
+
+ 8 MiB
+ [40 MiB, 48 MiB, 56 MiB, 64 MiB]
+
+
+ ...
+ ...