* Ingo Molnar <[email protected]> wrote:
> That callback will be queued on CPU#2 - while the task still keeps
> current->rcu_data of CPU#1. It also means that CPU#2's read counter
> did _not_ get increased - and a too short grace period may occur.
>
> it seems to me that the only safe method is to pick an 'RCU CPU' when
> first entering the read section, and then sticking to it, no matter
> where the task gets migrated to. Or to 'migrate' the +1 read count
> from one CPU to the other, within the scheduler.
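for concreteness, here is a minimal sketch of the 'migrate read-count'
variant - the names (rcu_read_lock_nesting, rcu_data.active_readers,
rcu_migrate_read_count) are made up for illustration, not actual kernel
APIs:

	/*
	 * hypothetical hook, called by the scheduler when a task
	 * moves between CPUs while inside an RCU read section:
	 */
	void rcu_migrate_read_count(struct task_struct *p,
				    int src_cpu, int dst_cpu)
	{
		if (p->rcu_read_lock_nesting) {
			/* move the +1 read count to the destination CPU: */
			per_cpu(rcu_data, src_cpu).active_readers--;
			per_cpu(rcu_data, dst_cpu).active_readers++;
		}
	}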
i think the 'migrate read-count' method is not adequate either, because
all callbacks queued within an RCU read section must be called after the
lock has been dropped - while with the migration method CPU#1 would be
free to process callbacks queued in the RCU read section still active on
CPU#2.
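roughly, assuming the same made-up per-CPU active_readers count as in
the sketch above (rcu_process_callbacks standing in for whatever
invokes the queued callbacks), CPU#1's grace period logic would do
something like:

	/* on CPU#1, after the task (and its read count) moved to CPU#2: */
	if (per_cpu(rcu_data, smp_processor_id()).active_readers == 0) {
		/*
		 * the grace period looks complete on this CPU, so a
		 * callback queued here by a read section that is still
		 * running on CPU#2 gets invoked - freeing an object
		 * that section may still dereference.
		 */
		rcu_process_callbacks();
	}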
i'm wondering how much of a problem this is, though. Can there be stale
pointers at that point? Yes, in theory, because code like:
	rcu_read_lock();
	call_rcu(&dentry->d_rcu, d_callback);	/* queue dentry for freeing */
	func(dentry->whatever);			/* dentry still dereferenced */
	rcu_read_unlock();
would be unsafe, because the pointer is still accessed within the RCU
read section: if we get migrated from CPU#1 to CPU#2 after the call_rcu
but before the dentry->whatever dereference, the callback may be
processed early by CPU#1, making the dentry->whatever read unsafe.
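the snippet would remain safe even with such migration schemes if the
last dereference preceded the call_rcu:

	rcu_read_lock();
	func(dentry->whatever);			/* last use first ...	*/
	call_rcu(&dentry->d_rcu, d_callback);	/* ... then queue it	*/
	rcu_read_unlock();

but existing code cannot be assumed to be ordered this way.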
the question is, does this occur in practice? Does existing RCU-using
code use pointers it has queued for freeing, relying on the fact that
the callback won't be processed until we drop the RCU read lock?
Ingo