Jason, thank you _so_ much for finding the underlying cause of this.
On Sun, 2007-09-02 at 06:20 +0200, Nick Piggin wrote:
> Hmm, thanks for that. It does sound like it is deadlocking via
> commit_write(). OTOH, it seems like it could be using the page
> before it is uptodate -- it _may_ only be dealing with uptodate
> data at that point... but if so, why even read_cache_page at
> all?
jffs2_readpage() is synchronous -- there's no chance that the page won't
be up to date. We're doing this for garbage collection -- if there are
many log entries covering a single page of data, we want to write out a
single replacement which covers the whole page, obsoleting the previous
suboptimal representation of the same data.
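For context, the fetch looks roughly like this. A minimal sketch, not the code from fs/jffs2/fs.c verbatim; gc_fetch_page() is an invented name, and it assumes the 2.6-era read_cache_page() signature with JFFS2's synchronous jffs2_do_readpage_unlock() as the filler:

#include <linux/fs.h>
#include <linux/pagemap.h>
#include <linux/err.h>

/* JFFS2's synchronous readpage filler; it unlocks the page itself. */
extern int jffs2_do_readpage_unlock(struct inode *inode, struct page *pg);

static struct page *gc_fetch_page(struct inode *inode, unsigned long offset)
{
	struct page *pg;

	/* The filler runs synchronously, so on success the page is
	 * always uptodate by the time GC sees it. */
	pg = read_cache_page(inode->i_mapping,
			     offset >> PAGE_CACHE_SHIFT,
			     (void *)jffs2_do_readpage_unlock, inode);
	if (IS_ERR(pg))
		return pg;

	/* GC can now emit one node covering the whole page, obsoleting
	 * the many small nodes that previously covered parts of it. */
	return pg;
}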
> However, it is a regression. So unless David can come up with a
> more satisfactory approach, I guess we'd have to go with your
> patch.
I think Jason's patch is the best answer for the moment. At some point
in the very near future I want to improve the RAM usage and compression
ratio by dropping the rule that data nodes may not cross page boundaries
-- in which case garbage collection will need to do something other than
reading the page using read_cache_page() and then writing it out again;
it'll probably end up using its own internal buffer. But for
now, Jason's patch looks good.
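To make that concrete, here's a purely hypothetical sketch of the internal-buffer variant; gc_read_range() is an invented name, and the only existing helper assumed is jffs2_read_inode_range(), which reads a byte range of an inode into a caller-supplied buffer without going through the page cache:

#include <linux/types.h>
#include <linux/slab.h>
#include <linux/errno.h>

extern int jffs2_read_inode_range(struct jffs2_sb_info *c,
				  struct jffs2_inode_info *f,
				  unsigned char *buf,
				  uint32_t offset, uint32_t len);

static int gc_read_range(struct jffs2_sb_info *c, struct jffs2_inode_info *f,
			 uint32_t offset, uint32_t len, unsigned char **bufp)
{
	unsigned char *buf;
	int ret;

	/* A private buffer means the range needn't be page-aligned, and
	 * the page cache (and its locking) stays out of the GC path. */
	buf = kmalloc(len, GFP_KERNEL);
	if (!buf)
		return -ENOMEM;

	ret = jffs2_read_inode_range(c, f, buf, offset, len);
	if (ret) {
		kfree(buf);
		return ret;
	}

	/* Caller writes a single replacement node from *bufp, then kfree()s. */
	*bufp = buf;
	return 0;
}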
Thanks.
--
dwmw2