Recover one more bit of file offset by additionally using the _PAGE_NEWPROT
bit. Since I wasn't sure this would work, I've split it out as a separate
patch. We rely on the fact that pte_newprot() always checks first whether the
PTE is marked present.
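
For clarity, here is a minimal user-space sketch (not part of the patch) of
the round trip the new macros are meant to preserve: the low bit of the
27-bit offset is parked in the _PAGE_NEWPROT position (bit 2) and the
remaining 26 bits go into bits 6 and up. The flag values below are
assumptions for illustration only; the real definitions live in
include/asm-um/pgtable.h.

#include <assert.h>
#include <stdio.h>

/* Assumed flag values, for illustration only.  What matters is that
 * _PAGE_NEWPROT sits at bit 2 and that bits 6 and up are free. */
#define _PAGE_NEWPROT	0x004UL
#define _PAGE_FILE	0x008UL

/* Encode: offset bit 0 goes into bit 2 (_PAGE_NEWPROT), bits 1..26 into
 * bits 6..31, mirroring the new pgoff_prot_to_pte(). */
static unsigned long pgoff_to_pteval(unsigned long off)
{
	return ((off >> 1) << 6) | ((off & 0x1) << 2) | _PAGE_FILE;
}

/* Decode: reassemble the 27-bit offset, mirroring the new pte_to_pgoff(). */
static unsigned long pteval_to_pgoff(unsigned long pteval)
{
	return ((pteval >> 6) << 1) | ((pteval >> 2) & 0x1);
}

int main(void)
{
	unsigned long off;

	for (off = 0; off < (1UL << 27); off += 4097)
		assert(pteval_to_pgoff(pgoff_to_pteval(off)) == off);

	printf("27-bit offsets survive the round trip\n");
	return 0;
}
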
Signed-off-by: Paolo 'Blaisorblade' Giarrusso <[email protected]>
---
linux-2.6.git-paolo/include/asm-um/pgtable-2level.h | 10 +++++-----
1 files changed, 5 insertions(+), 5 deletions(-)
diff -puN include/asm-um/pgtable-2level.h~rfp-arch-uml-improv include/asm-um/pgtable-2level.h
--- linux-2.6.git/include/asm-um/pgtable-2level.h~rfp-arch-uml-improv 2005-08-07 19:09:34.000000000 +0200
+++ linux-2.6.git-paolo/include/asm-um/pgtable-2level.h 2005-08-07 19:09:34.000000000 +0200
@@ -72,19 +72,19 @@ static inline void set_pte(pte_t *pteptr
((unsigned long) __va(pmd_val(pmd) & PAGE_MASK))
/*
- * Bits 0 to 5 are taken, split up the 26 bits of offset
+ * Bits 0, 1, 3 to 5 are taken, split up the 27 bits of offset
* into this range:
*/
-#define PTE_FILE_MAX_BITS 26
+#define PTE_FILE_MAX_BITS 27
-#define pte_to_pgoff(pte) (pte_val(pte) >> 6)
+#define pte_to_pgoff(pte) (((pte_val(pte) >> 6) << 1) | ((pte_val(pte) >> 2) & 0x1))
#define pte_to_pgprot(pte) \
__pgprot((pte_val(pte) & (_PAGE_RW | _PAGE_PROTNONE)) \
| ((pte_val(pte) & _PAGE_PROTNONE) ? 0 : \
(_PAGE_USER | _PAGE_PRESENT)) | _PAGE_ACCESSED)
#define pgoff_prot_to_pte(off, prot) \
- ((pte_t) { ((off) << 6) + \
- (pgprot_val(prot) & (_PAGE_RW | _PAGE_PROTNONE)) + _PAGE_FILE })
+ __pte((((off) >> 1) << 6) + (((off) & 0x1) << 2) + \
+ (pgprot_val(prot) & (_PAGE_RW | _PAGE_PROTNONE)) + _PAGE_FILE)
#endif
_