On Mon, 3 Sep 2007, Jiri Kosina wrote:
> the problem I am seeing with __weak functions is that as far as I can
> see, gcc 4.1.0 optimizes the empty __weak function away with -O2, so it
> is not later properly overridden by the other non-weak function, as the
> call site no longer contains the corresponding call. (when I stick a
> printk() into the __weak function, everything works fine - it is not
> optimized away and the non-weak version of the function gets called). I
> presume this is a bug in gcc (4.1.1 does not seem to exhibit this
> behavior). I will look into it a little bit more.
OK, the problem was the crappy gcc 4.1.0 on my side (this is a known bug
that has already been fixed in a later update). Updated patch below; thanks
for your suggestions, Franck.
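Just for reference, the mechanism we are relying on looks roughly like the
sketch below (user-space illustration only; do_arch_hook() is a made-up
name, and __weak simply expands to the gcc weak attribute in the kernel).
The important detail is that the empty weak default and its caller live in
the same translation unit, exactly like in binfmt_elf.c, which is what the
miscompiling gcc tripped over:

/* weak_default.c: caller and empty weak default in one translation unit */
#include <stdio.h>

/* empty default; a strong definition elsewhere overrides it at link time */
void __attribute__((weak)) do_arch_hook(void)
{
}

int main(void)
{
	/*
	 * The call must survive compilation even though the local body is
	 * empty, because the linker may bind it to a strong override.
	 */
	do_arch_hook();
	printf("done\n");
	return 0;
}

/* arch_override.c: strong definition, linked in only on some configurations */
#include <stdio.h>

void do_arch_hook(void)
{
	printf("arch-specific hook called\n");
}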
From: Jiri Kosina <[email protected]>
i386 and x86_64: randomize brk()
This patch randomizes the location of the heap (brk) for i386 and x86_64.
The new brk is picked at random from the range starting at the current brk
location and extending up to a 0x02000000 offset above it, on both
architectures. Together with pie-executable-randomization.patch and
pie-executable-randomization-fix.patch, this should make the address space
randomization on i386 and x86_64 complete.
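The effect can be verified from user space by printing the initial program
break across several runs; a quick sketch (not part of the patch, just a
trivial test program):

#include <stdio.h>
#include <unistd.h>

int main(void)
{
	/* with randomization enabled, this address should differ between runs */
	printf("initial brk: %p\n", sbrk(0));
	return 0;
}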
Signed-off-by: Jiri Kosina <[email protected]>
diff --git a/arch/i386/kernel/process.c b/arch/i386/kernel/process.c
index 8466471..fb3d407 100644
--- a/arch/i386/kernel/process.c
+++ b/arch/i386/kernel/process.c
@@ -949,3 +949,17 @@ unsigned long arch_align_stack(unsigned long sp)
sp -= get_random_int() % 8192;
return sp & ~0xf;
}
+
+void arch_randomize_brk(struct mm_struct *mm)
+{
+	unsigned long new_brk;
+	unsigned long range_start;
+	unsigned long range_end;
+
+	range_start = mm->brk;
+	range_end = range_start + 0x02000000;
+	new_brk = randomize_range(range_start, range_end, 0);
+	if (new_brk)
+		mm->brk = mm->start_brk = new_brk;
+}
+
diff --git a/arch/x86_64/kernel/process.c b/arch/x86_64/kernel/process.c
index 2842f50..de40057 100644
--- a/arch/x86_64/kernel/process.c
+++ b/arch/x86_64/kernel/process.c
@@ -902,3 +902,17 @@ unsigned long arch_align_stack(unsigned long sp)
sp -= get_random_int() % 8192;
return sp & ~0xf;
}
+
+void arch_randomize_brk(struct mm_struct *mm)
+{
+	unsigned long new_brk;
+	unsigned long range_start;
+	unsigned long range_end;
+
+	range_start = mm->brk;
+	range_end = range_start + 0x02000000;
+	new_brk = randomize_range(range_start, range_end, 0);
+	if (new_brk)
+		mm->brk = mm->start_brk = new_brk;
+}
+
diff --git a/fs/binfmt_elf.c b/fs/binfmt_elf.c
index d65f1d9..4bf0ca1 100644
--- a/fs/binfmt_elf.c
+++ b/fs/binfmt_elf.c
@@ -47,6 +47,9 @@ static int load_elf_binary(struct linux_binprm *bprm, struct pt_regs *regs);
static int load_elf_library(struct file *);
static unsigned long elf_map (struct file *, unsigned long, struct elf_phdr *, int, int, unsigned long);
+/* overridden by architectures supporting brk randomization */
+void __weak arch_randomize_brk(struct mm_struct *mm) { }
+
/*
* If we don't support core dumping, then supply a NULL so we
* don't even try.
@@ -1073,6 +1076,9 @@ static int load_elf_binary(struct linux_binprm *bprm, struct pt_regs *regs)
current->mm->end_data = end_data;
current->mm->start_stack = bprm->p;
+	if (current->flags & PF_RANDOMIZE)
+		arch_randomize_brk(current->mm);
+
if (current->personality & MMAP_PAGE_ZERO) {
/* Why this, you ask??? Well SVr4 maps page 0 as read-only,
and some applications "depend" upon this behavior.
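A note on the helper used above: randomize_range(start, end, 0) picks, as far
as I remember, a page-aligned address inside the given range and returns 0
when it cannot, hence the if (new_brk) check before the assignment. A rough
user-space approximation of the picking logic, just to illustrate the idea
(PAGE_SIZE and the starting brk value below are made up for the example):

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define PAGE_SIZE	4096UL
#define PAGE_ALIGN(x)	(((x) + PAGE_SIZE - 1) & ~(PAGE_SIZE - 1))

/* pick a page-aligned offset of up to 0x02000000 above the current brk */
static unsigned long pick_new_brk(unsigned long start_brk)
{
	unsigned long range = 0x02000000UL;

	return PAGE_ALIGN(start_brk + ((unsigned long)rand() % range));
}

int main(void)
{
	unsigned long old_brk = 0x08050000UL;	/* made-up example value */

	srand((unsigned)time(NULL));
	printf("old brk 0x%lx -> randomized brk 0x%lx\n",
	       old_brk, pick_new_brk(old_brk));
	return 0;
}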