On Mon, 17 Apr 2006 18:57:40 -0700 (PDT)
Christoph Lameter <[email protected]> wrote:
> On Tue, 18 Apr 2006, KAMEZAWA Hiroyuki wrote:
>
> > BTW, when copying an mm, mm->mmap_sem is held. Is mm->mmap_sem not held
> > during page migration now? I'm sorry, I haven't caught up with all the changes.
> > Or is this needed for lazy migration (migration-on-fault)?
>
> mmap_sem must be held during page migration due to the way we retrieve the
> anonymous vma.
>
> I think you would want to get rid of that requirement for hotplug memory
> removal.
yes.
> But how do we reliably get to the anon_vma of the page without mmap_sem?
>
>
I think the following patch will help, but it increases complexity...
-Kame
=
Hold anon_vma->lock during migration.

During migration, page_mapcount(page) drops to 0 while page->mapping remains
valid. This breaks the usual assumptions about page_mapcount() and
page->mapping (see page_remove_rmap() in rmap.c).
If mmap_sem is held during migration, there is no problem. But if mmap_sem is
not held, the anon_vma can be freed underneath us: this is a race.
This patch takes anon_vma->lock during migration.
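For reference, recovering the anon_vma without mmap_sem relies on the pointer
being embedded in page->mapping with the low PAGE_MAPPING_ANON bit set as a
type tag; both the migrate.c and rmap.c hunks below open-code this decode.
A minimal sketch (the helper name page_anon_vma() is hypothetical, not part
of this patch):

	/*
	 * Sketch only: decode the anon_vma hidden in page->mapping.
	 * For anonymous pages, page->mapping stores the anon_vma pointer
	 * with the PAGE_MAPPING_ANON bit set.
	 */
	static struct anon_vma *page_anon_vma(struct page *page)
	{
		unsigned long mapping = (unsigned long)page->mapping;

		if (!(mapping & PAGE_MAPPING_ANON))
			return NULL;	/* not an anonymous page */
		return (struct anon_vma *)(mapping - PAGE_MAPPING_ANON);
	}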
Signed-off-by: KAMEZAWA Hiroyuki <[email protected]>
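Note: the anon_vma_unlink() hunk below tests anon_vma->async_reference, but
this diff does not add the field itself. Presumably struct anon_vma in
include/linux/rmap.h would need to grow something like the following (a guess
at the intended semantics, not part of the posted patch):

	struct anon_vma {
		spinlock_t lock;	/* serializes access to the vma list */
		struct list_head head;	/* list of anonymous vmas */
		/*
		 * Hypothetical: nonzero while a migration references this
		 * anon_vma without holding mmap_sem, so anon_vma_unlink()
		 * must not free it even when the vma list becomes empty.
		 */
		int async_reference;
	};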
Index: Christoph-NewMigrationV2/mm/migrate.c
===================================================================
--- Christoph-NewMigrationV2.orig/mm/migrate.c
+++ Christoph-NewMigrationV2/mm/migrate.c
@@ -178,6 +178,16 @@ out:
}

/*
+ * When mmap_sem is not held, we have to guarantee the anon_vma is not freed.
+ */
+static void migrate_lock_anon_vma(struct page *page)
+{
+	if (PageAnon(page))
+		page_lock_anon_vma(page);
+	/* remove_migration_ptes() will unlock */
+}
+
+/*
* Get rid of all migration entries and replace them by
* references to the indicated page.
*
@@ -196,10 +206,9 @@ static void remove_migration_ptes(struct
return;
/*
- * We hold the mmap_sem lock. So no need to call page_lock_anon_vma.
+ * anon_vma is preserved and locked during migration.
*/
anon_vma = (struct anon_vma *) (mapping - PAGE_MAPPING_ANON);
- spin_lock(&anon_vma->lock);
list_for_each_entry(vma, &anon_vma->head, anon_vma_node)
remove_migration_pte(vma, page_address_in_vma(new, vma),
@@ -371,6 +380,7 @@ int migrate_page(struct page *newpage, s
BUG_ON(PageWriteback(page)); /* Writeback must be complete */
+ migrate_lock_anon_vma(page);
rc = migrate_page_remove_references(newpage, page,
page_mapping(page) ? 2 : 1);
@@ -378,7 +388,6 @@ int migrate_page(struct page *newpage, s
remove_migration_ptes(page, page);
return rc;
}
-
migrate_page_copy(newpage, page);
remove_migration_ptes(page, newpage);
return 0;
Index: Christoph-NewMigrationV2/mm/rmap.c
===================================================================
--- Christoph-NewMigrationV2.orig/mm/rmap.c
+++ Christoph-NewMigrationV2/mm/rmap.c
@@ -160,7 +160,7 @@ void anon_vma_unlink(struct vm_area_stru
empty = list_empty(&anon_vma->head);
spin_unlock(&anon_vma->lock);
- if (empty)
+ if (empty && !anon_vma->async_reference)
anon_vma_free(anon_vma);
}
@@ -717,7 +717,13 @@ static int try_to_unmap_anon(struct page
struct vm_area_struct *vma;
int ret = SWAP_AGAIN;
- anon_vma = page_lock_anon_vma(page);
+ if (migration) { /* anon_vma->lock is already held during migration */
+	unsigned long mapping;
+	mapping = (unsigned long)page->mapping - PAGE_MAPPING_ANON;
+	anon_vma = (struct anon_vma *)mapping;
+ } else {
+	anon_vma = page_lock_anon_vma(page);
+ }
if (!anon_vma)
return ret;
@@ -726,7 +732,8 @@ static int try_to_unmap_anon(struct page
if (ret == SWAP_FAIL || !page_mapped(page))
break;
}
- spin_unlock(&anon_vma->lock);
+ if (!migration)
+	spin_unlock(&anon_vma->lock);
return ret;
}
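Taken together, the locking lifetime in migrate_page() would read roughly as
below. This is a reconstruction for illustration, not code from the patch;
the spin_unlock(&anon_vma->lock) that pairs with migrate_lock_anon_vma()
lives at the end of remove_migration_ptes(), outside the visible hunk
context.

	/* Sketch of migrate_page() with the hunks above applied. */
	int migrate_page(struct page *newpage, struct page *page)
	{
		int rc;

		BUG_ON(PageWriteback(page));	/* Writeback must be complete */

		/* Take anon_vma->lock (if PageAnon) before unmapping. */
		migrate_lock_anon_vma(page);

		/*
		 * Replaces the ptes with migration entries;
		 * try_to_unmap_anon() sees migration != 0 and does not
		 * retake anon_vma->lock.
		 */
		rc = migrate_page_remove_references(newpage, page,
						page_mapping(page) ? 2 : 1);
		if (rc) {
			remove_migration_ptes(page, page); /* drops the lock */
			return rc;
		}

		migrate_page_copy(newpage, page);
		remove_migration_ptes(page, newpage);	/* drops the lock */
		return 0;
	}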