Re: /proc dcache deadlock in do_exit

Andrew Morton <[email protected]> writes:

> On Tue, 27 Nov 2007 14:20:22 +0100
> Andrea Arcangeli <[email protected]> wrote:
>
>> Hi,
>> 
>> This patch fixes a sles9 system hang in start_this_handle from a
>> customer with a heavy workload where all tasks are waiting on
>> kjournald to commit the transaction, but kjournald waits on t_updates
>> to go down to zero (it never does). This was reported as a lowmem
>> shortage deadlock, but when checking the debug data I noticed the VM
>> wasn't under pressure at all (well, it was under VM pressure in the
>> sense that lots of tasks hung in the VM prune_dcache method trying to
>> flush dirty inodes, but no task was hanging in GFP_NOFS mode, which
>> the holder of the journal handle would have been if this were a VM
>> issue in the first place). No task was apparently holding the
>> leftover handle in the committing transaction, so I deduced t_updates
>> was stuck at 1 because a journal_stop was never run by some path
>> (this turned out to be correct). With a debug patch deployed in
>> production that added proper reverse links and stack-trace logging to
>> ext3, I found journal_stop is never run because mark_inode_dirty_sync
>> is called inside release_task, which is called by do_exit. (That was
>> quite fun, because I would never have thought of this subtlety; I
>> assumed a regular path in ext3 had a bug and forgot to call
>> journal_stop.)
>> 
>> do_exit->release_task->mark_inode_dirty_sync->schedule() (will never
>> come back to run journal_stop)
>
> I don't see why the schedule() will not return?  Because the task has
> PF_EXITING set?  Doesn't TASK_DEAD do that?

Yes, why do we not come back from schedule?

If we are not allowed to schedule after setting PF_EXITING but before
we set TASK_DEAD, that entire code path sounds brittle and
error-prone.
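
For reference, the tail of do_exit() looks roughly like this (a
simplified sketch from memory of kernel/exit.c of that era, not the
exact source); it is the one schedule() that is expected to never
return:

	preempt_disable();
	/* TASK_DEAD tells finish_task_switch() to do the final
	 * put_task_struct(); the scheduler never picks this task up
	 * again, so this schedule() must not return. */
	tsk->state = TASK_DEAD;
	schedule();
	BUG();	/* not reached */

A schedule() taken from release_task() runs before that point, so it
should come back; the open question is whether PF_EXITING alone is
enough to prevent that.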


> What are the implications of not running shrink_dcache_parent() on the exit
> path sometimes?  We'll leave procfs stuff behind?  Will they be reaped by
> memory pressure later on?

It should.  I think the reaping is just an optimization: we know we
will never need those dentries again, but they can still be pinned by
open directories or open files.  What I don't know off the top of my
head is whether there is a d_drop equivalent going on that might be a
problem if we don't address it.
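
To make the concern concrete, here is a hypothetical sketch
(illustration only; this is not the actual fs/proc exit code) of what
a d_drop-style flush of a task's /proc dentry could look like.  If no
such unhashing happens, stale dentries stay visible to lookup until
memory pressure reclaims them:

	/* flush_task_dentry() is a made-up name, for illustration. */
	static void flush_task_dentry(struct dentry *dentry)
	{
		d_drop(dentry);			/* unhash: lookups miss from now on */
		shrink_dcache_parent(dentry);	/* eagerly free unused children */
		dput(dentry);			/* anything still pinned by open
						 * files is left to prune_dcache()
						 * under VM pressure */
	}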

Eric
