Attached is the 2.6.19 patch.
Attachment: pagemaps.patch
On Dec 7, 2006, at 2:36 PM, Andrew Morton wrote:
On Thu, 07 Dec 2006 14:09:40 -0800 David Singleton <[email protected]> wrote:

> Andrew,
>    this implements a feature for memory analysis tools to go along with
> smaps.  It shows reference counts for individual pages instead of
> aggregate totals for a given VMA.  It helps memory analysis tools
> determine how well pages are being shared, or not, in shared libraries,
> etc.  The per-page information is presented in /proc/<pid>/pagemaps.

I think the concept is not a bad one, frankly - this requirement arises
frequently.  What bugs me is that it only displays the mapcount and
dirtiness.  Perhaps there are other things which people want to know.
I'm not sure what they would be though.

I wonder if it would be insane to display the info via a filesystem:

	cat /mnt/pagemaps/$(pidof crond)/pgd0/pmd1/pte45

Probably it would.

> Index: linux-2.6.18/Documentation/filesystems/proc.txt

Against 2.6.18?  I didn't know you could still buy copies of that ;)

This patch's changelog should include sample output.

Your email client wordwraps patches, and it replaces tabs with spaces.

> ...
> +static void pagemaps_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
> +			unsigned long addr, unsigned long end,
> +			struct seq_file *m)
> +{
> +	pte_t *pte, ptent;
> +	spinlock_t *ptl;
> +	struct page *page;
> +	int mapcount = 0;
> +
> +	pte = pte_offset_map_lock(vma->vm_mm, pmd, addr, &ptl);
> +	do {
> +		ptent = *pte;
> +		if (pte_present(ptent)) {
> +			page = vm_normal_page(vma, addr, ptent);
> +			if (page) {
> +				if (pte_dirty(ptent))
> +					mapcount = -page_mapcount(page);
> +				else
> +					mapcount = page_mapcount(page);
> +			} else {
> +				mapcount = 1;
> +			}
> +		}
> +		seq_printf(m, " %d", mapcount);
> +
> +	} while (pte++, addr += PAGE_SIZE, addr != end);

Well that's cute.  As long as both seq_file and pte pages are of size
PAGE_SIZE, and as long as ptes are more than three bytes, this will not
overflow the seq_file output buffer.

hm.  Unless the pages are all dirty and the mapcounts are all 10000.
I think it will overflow then?

> +
> +static inline void pagemaps_pmd_range(struct vm_area_struct *vma, pud_t *pud,
> +			unsigned long addr, unsigned long end,
> +			struct seq_file *m)
> +{
> +	pmd_t *pmd;
> +	unsigned long next;
> +
> +	pmd = pmd_offset(pud, addr);
> +	do {
> +		next = pmd_addr_end(addr, end);
> +		if (pmd_none_or_clear_bad(pmd))
> +			continue;
> +		pagemaps_pte_range(vma, pmd, addr, next, m);
> +	} while (pmd++, addr = next, addr != end);
> +}
> +
> +static inline void pagemaps_pud_range(struct vm_area_struct *vma, pgd_t *pgd,
> +			unsigned long addr, unsigned long end,
> +			struct seq_file *m)
> +{
> +	pud_t *pud;
> +	unsigned long next;
> +
> +	pud = pud_offset(pgd, addr);
> +	do {
> +		next = pud_addr_end(addr, end);
> +		if (pud_none_or_clear_bad(pud))
> +			continue;
> +		pagemaps_pmd_range(vma, pud, addr, next, m);
> +	} while (pud++, addr = next, addr != end);
> +}
> +
> +static inline void pagemaps_pgd_range(struct vm_area_struct *vma,
> +			unsigned long addr, unsigned long end,
> +			struct seq_file *m)
> +{
> +	pgd_t *pgd;
> +	unsigned long next;
> +
> +	pgd = pgd_offset(vma->vm_mm, addr);
> +	do {
> +		next = pgd_addr_end(addr, end);
> +		if (pgd_none_or_clear_bad(pgd))
> +			continue;
> +		pagemaps_pud_range(vma, pgd, addr, next, m);
> +	} while (pgd++, addr = next, addr != end);
> +}

I think that's our eighth open-coded pagetable walker.  Apparently they
are all slightly different.  Perhaps we should do something about that
one day.