Re: [PATCH] rd: Use a private inode for backing storage

On Monday 22 October 2007 04:39, Eric W. Biederman wrote:
> Nick Piggin <[email protected]> writes:
> > On Sunday 21 October 2007 18:23, Eric W. Biederman wrote:
> >> Christian Borntraeger <[email protected]> writes:
> >>
> >> Let me put it another way.  Looking at /proc/slabinfo I can get
> >> 37 buffer_heads per page.  I can allocate 10% of memory in
> >> buffer_heads before we start to reclaim them.  So it requires just
> >> over 3.7 buffer_heads on every page of low memory to even trigger
> >> this case.  That is a large 1k filesystem, or a weird-sized partition
> >> that we have written to directly.
> >
> > On a highmem machine it could be relatively common.
>
> Possibly.  But the same proportions still hold.  1k filesystems
> are not the default these days and ramdisks are relatively uncommon.
> The memory quantities involved are all low mem.

You don't need a 1K filesystem to have buffers attached, though, of
course. I'd say you can hit the limit with a 4K filesystem and less
than 8GB in pagecache.
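
Back-of-envelope, using the numbers above and assuming the usual
~896MB of lowmem on a 32-bit highmem box (the lowmem figure is just
the common 3G/1G split, not anything from this thread):

    lowmem:                    ~896MB            = ~224k 4K pages
    buffer_head limit (10%):   ~3.7 bh per lowmem page
                               3.7 * 224k        = ~830k buffer_heads
    4K filesystem:             1 buffer_head per buffer-backed page
    pagecache to hit the cap:  ~830k pages * 4K  = ~3.2GB

So a bit over 3GB of buffer-backed pagecache, all of which can happily
sit in highmem, is already enough to start reclaiming buffer_heads.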



> > You don't want to change that for a stable patch, however.
> > It fixes the bug.
>
> No, it avoids the bug, which is something slightly different.
> Further I contend that it is not obviously correct that there
> are no other side effects (because it doesn't actually fix the
> bug), and that makes it of dubious value for a backport.

The bug in question isn't exactly that rd uses the buffercache for its
backing store (although that's obviously rather hairy as well); it's
this specific problem sequence. And the patch does fix that problem.


> If I had to slap a patch on there at this point just implementing
> an empty try_to_release_page (which disables try_to_free_buffers)
> would be my choice.

How is that better? Now you're making the exact same change for
all filesystems that you didn't think was obviously correct for
rd.c.
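
For reference, the mechanics of that suggestion are simple enough:
try_to_release_page() only falls back to try_to_free_buffers() when
the mapping has no ->releasepage, so a stub hook is all it takes to
disable it. A minimal sketch against the 2.6.23-era API (where exactly
it would be wired up -- rd's own aops or something more generic -- is
precisely what's in question, and the names here are mine, not from
any posted patch):

#include <linux/fs.h>
#include <linux/mm.h>

/*
 * Sketch only: a ->releasepage that always refuses.  With this hook
 * installed, try_to_release_page() never reaches try_to_free_buffers()
 * for pages in that mapping, so the buffers (and the data they hold)
 * stay put.
 */
static int noop_releasepage(struct page *page, gfp_t gfp_mask)
{
	return 0;
}

/* and then, in whichever address_space_operations is meant: */
/*	.releasepage	= noop_releasepage,	*/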


> > I just don't think what you have is the proper fix. Calling
> > into the core vfs and vm because right now it does something
> > that works for you but is completely unrelated to what you
> > are conceptually doing is not the right fix.
>
> I think there is a strong conceptual relation and other code
> doing largely the same thing is already in the kernel (ramfs).  Plus
> my gut feel says shared code will make maintenance easier.

ramfs is rather a different case. Filesystems intimately know
about the pagecache.
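ramfs gets away with a handful of libfs helpers precisely because of
that. From memory, its address_space_operations in this era look
roughly like the following (not checked against the tree, and the
set_page_dirty hook is elided):

/* fs/ramfs/inode.c, roughly -- from memory, not exact: */
const struct address_space_operations ramfs_aops = {
	.readpage	= simple_readpage,
	.prepare_write	= simple_prepare_write,
	.commit_write	= simple_commit_write,
	/* plus a set_page_dirty hook that skips writeback accounting */
};

No buffer_heads in sight, but that works because ramfs owns those
inodes and pages as a filesystem; rd is a block driver reaching into
them from the outside.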


> You do have a point that the reuse may not be perfect and if that
> is the case we need to watch out for the potential to mess things
> up.
>
> So far I don't see any problems with the reuse.

It's just wrong. I guess if you don't see that by now, then we
have to just disagree.
