Re: How to handle a hugepage with bad physical memory?

On Thu, 17 Nov 2005, Robin Holt wrote:

> > > With the new hugepages concept, would it be possible to only mark
> > > the default pagesize portion of a hugepage as bad and then return the
> > > remainder of the hugepage for normal use?  What would we basically need
> > > to do to accomplish this?  Are there patches in the community which we
> > > should wait to see how they progress before we do any work on this front?
> > 
> > On IA64 we have one PTE for a huge page in a different region, so we 
> > cannot unmap a page-sized section. Other architectures may have PTEs for 
> > each page-sized section of a huge page. For those it may make sense 
> > (but then the management of the page is done via the first struct page, 
> > which likely results in some challenging VM issues).
> 
> I think you misunderstood me.  I was talking about killing the process.
> All the mappings get destroyed.  I want to reclaim as much of that huge
> page as possible.

You are right. If you can reclaim the whole huge page (as you can when 
killing the process), then you can isolate the bad base page and free the 
rest for normal use. There would have to be special code for that in the 
hugetlb layer.
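
Very roughly, and only as a sketch of what that special code might do (the
helper name is made up, and it assumes the huge page is already fully
unmapped and its compound-page state torn down, leaving HPAGE_SIZE/PAGE_SIZE
independent order-0 pages):

/*
 * Hypothetical sketch, not existing hugetlb code: quarantine the bad
 * PAGE_SIZE piece and hand the good pieces to the buddy allocator
 * instead of returning the huge page to the hugetlb free pool.
 */
#include <linux/mm.h>
#include <linux/hugetlb.h>

static void hugetlb_retire_bad_page(struct page *hpage, struct page *bad)
{
	unsigned long i, nr = HPAGE_SIZE / PAGE_SIZE;

	for (i = 0; i < nr; i++) {
		struct page *p = hpage + i;

		if (p == bad) {
			/* Never hand the faulty base page out again. */
			SetPageReserved(p);
			continue;
		}
		/* Return the good base pages for normal (non-huge) use. */
		set_page_count(p, 1);
		__free_page(p);
	}
}

The hugetlb free-page accounting would have to be adjusted as well, since
these base pages go back to the buddy allocator rather than to the huge
page pool, and on IA64 this only helps once the whole huge mapping is gone.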
