bin: enforce bin->lock ownership in bin_slab_reg_alloc()
bitmap_set() performs a plain (non-atomic) read-modify-write on every
level of the bitmap tree:

    g  = *gp;           /* READ                            */
    g ^= ZU(1) << bit;  /* MODIFY (thread-local copy)      */
    *gp = g;            /* WRITE BACK: no barrier, no CAS  */

Two threads that reach bitmap_sfu() -> bitmap_set() concurrently on the
same slab bitmap — even for different bits that share a group word —
will clobber each other's write.  The clobbered bit still looks free on
the next allocation; bitmap_sfu() selects it again; the second call to
bitmap_set() aborts on:

    assert(!bitmap_get(bitmap, binfo, bit));   /* bitmap.h:220 */

or, once tree propagation begins for a newly-full group:

    assert(g & (ZU(1) << (bit & BITMAP_GROUP_NBITS_MASK)));  /* bitmap.h:237 */

Either assert calls abort() and produces the coredump reported in
issues #2875 and #2772.

The immediate callers (bin_malloc_with_fresh_slab,
bin_malloc_no_fresh_slab) already assert lock ownership, but
bin_slab_reg_alloc() itself had no such check, making it easy for new
call sites to silently bypass the requirement.

Fix:
- Thread tsdn_t *tsdn and bin_t *bin through bin_slab_reg_alloc() and
  call malloc_mutex_assert_owner() as the first statement.
- Update both internal callers (bin_malloc_with_fresh_slab,
  bin_malloc_no_fresh_slab) to pass the context they already hold.
- Document the locking contract in bin.h and the thread-safety
  constraint in bitmap.h directly above bitmap_set().

Note: bin_slab_reg_alloc_batch() is left unchanged because it has one
legitimate unlocked caller (arena_fill_small_fresh) which operates on
freshly allocated slabs that are not yet visible to any other thread.
Its locking contract is now documented in bin.h.

Fixes #2875
2026-04-10 20:45:51 +03:00