mirror of
https://github.com/jemalloc/jemalloc.git
synced 2026-05-16 01:46:23 +03:00
Allocate tcache stack from base allocator
When using metadata_thp, allocate tcache bin stacks from base0, which places them on huge pages along with other metadata instead of mixing them with regular allocations. To support this, the base allocator was modified to allow limited reuse: tcache stacks freed at thread termination are returned to base0 and made available for reuse, but no merging is attempted, since they were bump-allocated out of base blocks. These reused base extents are managed through separately allocated base edata_t structures, which are cached in base->edata_avail while the extent is fully allocated. One tricky part: stats updates must be skipped for such reused extents, since they were already accounted for and base never purges. This requires tracking the "is reused" state explicitly and bypassing the stats updates when allocating from such extents.
This commit is contained in:
parent a442d9b895
commit 72cfdce718
7 changed files with 202 additions and 27 deletions
@@ -14,6 +14,17 @@ cache_bin_info_init(cache_bin_info_t *info,
 	info->ncached_max = (cache_bin_sz_t)ncached_max;
 }
 
+bool
+cache_bin_stack_use_thp(void) {
+	/*
+	 * If metadata_thp is enabled, allocating tcache stack from the base
+	 * allocator for efficiency gains.  The downside, however, is that base
+	 * allocator never purges freed memory, and may cache a fair amount of
+	 * memory after many threads are terminated and not reused.
+	 */
+	return metadata_thp_enabled();
+}
+
 void
 cache_bin_info_compute_alloc(cache_bin_info_t *infos, szind_t ninfos,
     size_t *size, size_t *alignment) {
@@ -31,10 +42,11 @@ cache_bin_info_compute_alloc(cache_bin_info_t *infos, szind_t ninfos,
 	}
 
 	/*
-	 * Align to at least PAGE, to minimize the # of TLBs needed by the
-	 * smaller sizes; also helps if the larger sizes don't get used at all.
+	 * When not using THP, align to at least PAGE, to minimize the # of TLBs
+	 * needed by the smaller sizes; also helps if the larger sizes don't get
+	 * used at all.
 	 */
-	*alignment = PAGE;
+	*alignment = cache_bin_stack_use_thp() ? QUANTUM : PAGE;
 }
 
 void