Re: [PATCH] lazy freeing of memory through MADV_FREE

Rik van Riel wrote:
> Use TLB batching for MADV_FREE.  Adds another 10-15% extra performance
> to the MySQL sysbench results on my quad core system.
>
> Signed-off-by: Rik van Riel <[email protected]>
> ---
> Rik van Riel wrote:
>
>> I've added a 5th column, with just your mmap_sem patch and
>> without my madv_free patch.  It is run with the glibc patch,
>> which should make it fall back to MADV_DONTNEED after the
>> first MADV_FREE call fails.
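
For reference, here is a minimal userspace sketch of that fallback.  It is
not the actual glibc patch; it assumes a <sys/mman.h> that defines
MADV_FREE, and the helper name is made up:

#include <stddef.h>
#include <sys/mman.h>

static int madv_free_unsupported;

/* Lazily release [addr, addr + len), preferring MADV_FREE. */
static int lazy_release(void *addr, size_t len)
{
	if (!madv_free_unsupported) {
		if (madvise(addr, len, MADV_FREE) == 0)
			return 0;
		/*
		 * The first MADV_FREE call failed (e.g. EINVAL on a kernel
		 * without it): remember that and use MADV_DONTNEED from
		 * now on.
		 */
		madv_free_unsupported = 1;
	}
	return madvise(addr, len, MADV_DONTNEED);
}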


> With the attached patch to make MADV_FREE use tlb batching, not
> only do we gain an additional 10-15% performance but Nick's
> mmap_sem patch also shows the performance increase that we
> expected to see.
>
> It looks like the tlb flushes (and IPIs) from zap_pte_range()
> could have been the problem.  They're gone now.

I guess it is a good idea to batch these things. But can you
do that on all architectures? What happens if your tlb flush
only happens after another thread has already accessed the page
again, or after the page has subsequently been removed from the
address space by another CPU?


> The second column from the right has Nick's patch and my own
> two patches.  Performance with 16 threads is almost triple what
> it used to be...

> threads   vanilla     glibc     glibc     glibc     glibc     glibc     glibc
>                             madv_free madv_free            madv_free madv_free
>                                        mmap_sem  mmap_sem  mmap_sem
>                                                            tlb_batch tlb_batch
>
>       1       610       609       596       545       534       547       537
>       2      1032      1136      1196      1200      1180      1293      1194
>       4      1070      1128      2014      2024      2027      2248      2040
>       8      1000      1088      1665      2087      2089      2314      1869
>      16       779      1073      1310      1999      2012      2214      1557


> Now that I think about it - this is all with the rawhide kernel
> configuration, which has an ungodly number of debug config
> options enabled.
>
> I should try this with a more normal kernel, on various different
> systems.


This is for another day. :)

First some ebizzy runs...


------------------------------------------------------------------------

--- linux-2.6.20.x86_64/mm/memory.c.orig	2007-04-23 02:48:36.000000000 -0400
+++ linux-2.6.20.x86_64/mm/memory.c	2007-04-23 02:54:42.000000000 -0400
@@ -677,11 +677,15 @@ static unsigned long zap_pte_range(struc
 						remove_exclusive_swap_page(page);
 						unlock_page(page);
 					}
-					ptep_clear_flush_dirty(vma, addr, pte);
-					ptep_clear_flush_young(vma, addr, pte);
 					SetPageLazyFree(page);
 					if (PageActive(page))
 						deactivate_tail_page(page);
+					ptent = *pte;
+					set_pte_at(mm, addr, pte,
+						pte_mkclean(pte_mkold(ptent)));
+					/* tlb_remove_page frees it again */
+					get_page(page);
+					tlb_remove_page(tlb, page);
 					continue;
 				}
 			}
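
For context, here is a rough sketch of the calling pattern the hunk relies
on, written against the 2.6.20-era mmu_gather interface (the function name
is made up): pages handed to tlb_remove_page() are flushed and released in
one batch by tlb_finish_mmu(), which is why the per-page
ptep_clear_flush_dirty()/ptep_clear_flush_young() calls above can go away.

#include <linux/mm.h>
#include <asm/tlb.h>

static void madv_free_range(struct vm_area_struct *vma,
			    unsigned long start, unsigned long end)
{
	struct mmu_gather *tlb;

	/* Start a batch; 0 means this is not a full-mm teardown. */
	tlb = tlb_gather_mmu(vma->vm_mm, 0);

	/*
	 * A zap_pte_range()-style walk goes here: for each lazily freed
	 * page it clears young/dirty in the pte with set_pte_at() (no
	 * per-page flush), takes a reference, and calls
	 * tlb_remove_page(tlb, page).
	 */

	/* One batched tlb flush, then the gathered pages are released. */
	tlb_finish_mmu(tlb, start, end);
}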


--
SUSE Labs, Novell Inc.
