mirror of https://github.com/jemalloc/jemalloc.git (synced 2026-04-28 13:52:14 +03:00)
Hugepages are really hard to get. Currently, we wait until a memory region is filled with data to at least `hpa_hugification_threshold`, and then wait another `hpa_hugify_delay_ms` before we hugify the pageslab. Given that cost, it seems wasteful to treat hugified pageslabs the same way as non-hugified ones. Two ideas follow from that observation.

First, we should prioritize placing allocations on hugified pageslabs, to get the performance benefit of hugepage usage immediately. There may be a better (in terms of fragmentation) pageslab available at the moment, but empty space on a hugepage otherwise just sits there waiting for a better-fitting allocation that may never appear. That unused memory on the hugepage is counted towards our usage anyway, so we might as well put it to good use.

The same reasoning applies to purging prioritization. If we purge a hugepage (`madvise(..., MADV_DONTNEED)`), we have to start over to assemble it again: filling it up and waiting. Worse, we might never assemble a hugepage again, because the kernel may no longer have contiguous 2 MiB regions available. Instead, we should purge non-huge pageslabs for as long as we can: they are much cheaper to purge and provide no performance benefit.
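The two prioritization rules above can be sketched as comparison functions. This is a minimal illustration, not jemalloc's actual implementation: the `pageslab_t` struct and both function names are hypothetical stand-ins for the real HPA data structures.

```c
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical, simplified view of a pageslab; the real HPA
 * structure tracks much more state. */
typedef struct pageslab_s {
    bool huge;    /* backed by a hugepage after hugification */
    size_t nfree; /* free pages remaining in the slab */
} pageslab_t;

/* Allocation: prefer a hugified slab even when a non-huge slab is a
 * tighter fit -- unused space on the hugepage is charged to us
 * regardless, so we should consume it first. */
static bool
alloc_prefer(const pageslab_t *a, const pageslab_t *b) {
    if (a->huge != b->huge) {
        return a->huge;         /* hugified slab wins */
    }
    return a->nfree < b->nfree; /* otherwise, tighter fit wins */
}

/* Purging: prefer a non-huge slab -- purging a hugepage with
 * madvise(MADV_DONTNEED) destroys it, and the kernel may never be
 * able to hand back a contiguous 2 MiB region again. */
static bool
purge_prefer(const pageslab_t *a, const pageslab_t *b) {
    if (a->huge != b->huge) {
        return !a->huge;        /* non-huge slab wins */
    }
    return a->nfree > b->nfree; /* otherwise, emptier slab wins */
}
```

The key design point is that hugeness dominates both comparisons, and the usual fit/emptiness heuristics only break ties within each class.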