These contain the following groups of patches:

1. Slab allocator code consolidation and fixing of inconsistencies

   This makes ZERO_SIZE_PTR generic so that it works in all slab allocators. It adds __GFP_ZERO support to all slab allocators, cleans up the zeroing in the slab allocators, and removes the explicit zeroing that followed kmalloc_node and kmem_cache_alloc_node calls.

2. SLUB improvements

   Inline some small functions to reduce code size. Some more memory optimizations using CONFIG_SLUB_DEBUG. Changes to the handling of slub_lock, and an optimization of the runtime determination of kmalloc slabs (replaces the ilog2 patch that failed with gcc 3.3 on powerpc).

3. Slab defragmentation

   This is V3 of the patchset, with one fix for the locking problem that showed up during testing.

4. Performance optimizations

   These patches have a long history, going back to the early drafts of SLUB. The problem with them is that they require touching additional cachelines (only for reads), whereas SLUB was designed for minimal cacheline touching. In exchange, we may be able to remove cacheline bouncing, in particular for remote alloc/free situations, where I have had reports of issues that I was not able to confirm for lack of specificity. The tradeoffs here are not clear: the larger cacheline footprint will hurt the casual slab user somewhat, but it will benefit processes that perform these local/remote alloc/free operations. I'd appreciate it if someone could evaluate these.

The complete patchset against 2.6.22-rc4-mm2 is available at
http://ftp.kernel.org/pub/linux/kernel/people/christoph/slub/2.6.22-rc4-mm2

Tested on:
- x86_64 SMP
- x86_64 NUMA emulation
- IA64 emulator
- Altix 64p/128G NUMA system
- Altix 8p/6G asymmetric NUMA system
- Follow-Ups (patches 01-26 all from [email protected]):
  - Re: [patch 00/26] Current slab allocator / SLUB patch queue (From: Michal Piotrowski <[email protected]>)
  - [patch 01/26] SLUB Debug: Fix initial object debug state of NUMA bootstrap objects
  - [patch 02/26] Slab allocators: Consolidate code for krealloc in mm/util.c
  - [patch 03/26] Slab allocators: Consistent ZERO_SIZE_PTR support and NULL result semantics
  - [patch 04/26] Slab allocators: Support __GFP_ZERO in all allocators.
  - [patch 05/26] Slab allocators: Cleanup zeroing allocations
  - [patch 06/26] Slab allocators: Replace explicit zeroing with __GFP_ZERO
  - [patch 07/26] SLUB: Add some more inlines and #ifdef CONFIG_SLUB_DEBUG
  - [patch 08/26] SLUB: Extract dma_kmalloc_cache from get_cache.
  - [patch 09/26] SLUB: Do proper locking during dma slab creation
  - [patch 10/26] SLUB: Faster more efficient slab determination for __kmalloc.
  - [patch 11/26] SLUB: Add support for kmem_cache_ops
  - [patch 12/26] SLUB: Slab defragmentation core
  - [patch 13/26] SLUB: Extend slabinfo to support -D and -C options
  - [patch 14/26] SLUB: Logic to trigger slab defragmentation from memory reclaim
  - [patch 15/26] Slab defrag: Support generic defragmentation for inode slab caches
  - [patch 16/26] Slab defragmentation: Support defragmentation for extX filesystem inodes
  - [patch 17/26] Slab defragmentation: Support inode defragmentation for xfs
  - [patch 18/26] Slab defragmentation: Support procfs inode defragmentation
  - [patch 19/26] Slab defragmentation: Support reiserfs inode defragmentation
  - [patch 20/26] Slab defragmentation: Support inode defragmentation for sockets
  - [patch 21/26] Slab defragmentation: support dentry defragmentation
  - [patch 22/26] SLUB: kmem_cache_vacate to support page allocator memory defragmentation
  - [patch 23/26] SLUB: Move sysfs operations outside of slub_lock
  - [patch 24/26] SLUB: Avoid page struct cacheline bouncing due to remote frees to cpu slab
  - [patch 25/26] SLUB: Add an object counter to the kmem_cache_cpu structure
  - [patch 26/26] SLUB: Place kmem_cache_cpu structures in a NUMA aware way.