Disable munmap() if it causes VM map holes.

Add a configure test to determine whether common mmap()/munmap()
patterns cause VM map holes, and only use munmap() to discard unused
chunks if the problem does not exist.

Unify the chunk caching for mmap and dss.

Fix options processing to limit lg_chunk to be large enough that
redzones will always fit.
This commit is contained in:
Jason Evans 2012-04-12 20:20:58 -07:00
parent d6abcbb14b
commit 7ca0fdfb85
11 changed files with 277 additions and 244 deletions


@@ -501,11 +501,14 @@ malloc_conf_init(void)
 			CONF_HANDLE_BOOL(opt_abort, abort)
 			/*
-			 * Chunks always require at least one header page,
-			 * plus one data page.
+			 * Chunks always require at least one header page, plus
+			 * one data page in the absence of redzones, or three
+			 * pages in the presence of redzones.  In order to
+			 * simplify options processing, fix the limit based on
+			 * config_fill.
 			 */
-			CONF_HANDLE_SIZE_T(opt_lg_chunk, lg_chunk, LG_PAGE+1,
-			    (sizeof(size_t) << 3) - 1)
+			CONF_HANDLE_SIZE_T(opt_lg_chunk, lg_chunk, LG_PAGE +
+			    (config_fill ? 2 : 1), (sizeof(size_t) << 3) - 1)
 			CONF_HANDLE_SIZE_T(opt_narenas, narenas, 1, SIZE_T_MAX)
 			CONF_HANDLE_SSIZE_T(opt_lg_dirty_mult, lg_dirty_mult,
 			    -1, (sizeof(size_t) << 3) - 1)