On 6/4/07, Jeff Dike <[email protected]> wrote:
> First, there are signals. If the app has an interval timer enabled,
> every thread will inherit it and you will have 32 threads getting
> alarms, which seems surprising and wasteful.
Not only that. IIRC the current code does nothing about blocking
signals while the thread is not yet a real thread. If these worker
threads were to receive a signal sent to the process (the thread
group, in kernel-speak), this could severely irritate the application.
The signal mask of a new thread is inherited from the parent thread:
if the parent does not block a signal, then by default the child does
not either. Until and unless an aio-created thread is used to deliver
a notification (because SIGEV_THREAD was selected), it must not appear
to be doing any work for the process.
This should be easy to fix: when creating a thread for aio, the kernel
should block all signals (probably including SIGSTOP etc.). If and
when the thread is seen at userlevel, the glibc code can then set a
reasonable signal mask if the thread is to be reused.
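Roughly what I mean, as a userlevel analogue (the function names and the
policy in the helper are only illustrative, not what glibc actually does
internally):

#include <pthread.h>
#include <signal.h>

static void *helper(void *arg)
{
        /* Inherits the all-blocked mask from its creator, so it stays
           invisible to process-directed signals.  Only if it were later
           reused to deliver a SIGEV_THREAD notification would a
           reasonable mask be installed here. */
        (void) arg;
        return NULL;
}

int create_quiet_helper(pthread_t *tid)
{
        sigset_t all, old;
        int err;

        sigfillset(&all);
        /* Block everything in the creator; SIGKILL/SIGSTOP are silently
           left alone by the kernel. */
        pthread_sigmask(SIG_SETMASK, &all, &old);
        err = pthread_create(tid, NULL, helper, NULL);
        /* Restore the creator's own mask. */
        pthread_sigmask(SIG_SETMASK, &old, NULL);
        return err;
}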
> I can imagine other
> signals (like SIGWINCH and SIGHUP) which might be surprising to
> receive multiple times in an ostensibly single-threaded process.
Normal signals are sent to only one thread in the process. If a
thread cannot handle a signal, it has to block it; the kernel will
then choose a different thread or keep the signal pending. Special
signals, SIGSTOP for instance, are delivered to all threads. There is
a lot of special code in the kernel to handle these cases, and they
should be fine.
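A plain userlevel illustration of that delivery rule (nothing
syslet-specific, just ordinary pthreads):

#include <pthread.h>
#include <signal.h>
#include <unistd.h>

static void *worker(void *arg)
{
        sigset_t ss;

        (void) arg;
        /* Workers block everything they can, so process-directed
           signals are never delivered to them. */
        sigfillset(&ss);
        pthread_sigmask(SIG_SETMASK, &ss, NULL);
        pause();
        return NULL;
}

static void handler(int sig)
{
        (void) sig;
        write(1, "main thread got it\n", 19);
}

int main(void)
{
        pthread_t t[4];
        struct sigaction sa = { .sa_handler = handler };

        sigaction(SIGTERM, &sa, NULL);
        for (int i = 0; i < 4; i++)
                pthread_create(&t[i], NULL, worker, NULL);
        sleep(1);
        /* Process-directed signal: the kernel picks a thread that has
           not blocked it, i.e. the main thread here.  If every thread
           blocked it, it would simply stay pending. */
        kill(getpid(), SIGTERM);
        sleep(1);
        return 0;
}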
> Second, security. What happens if a well-written server starts life
> as root, does some (async) I/O, and setuids to a non-root uid? There
> will be a bunch of async threads still running as root, with the
> result that async operations (and the main thread) will more than
> occasionally regain root privs.
This is indeed a problem, and the reason is that we are forced to
implement setuid etc. at userlevel. The code currently works by
sending a special signal to all threads in the process; the signal
handler then issues the appropriate setuid etc. call.
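A much-simplified sketch of that broadcast; the signal number, the
thread list, and the synchronization below are made up for illustration
and are not glibc's actual internals:

#include <pthread.h>
#include <signal.h>
#include <semaphore.h>
#include <sys/syscall.h>
#include <unistd.h>

#define XID_SIGNAL SIGRTMIN        /* assumption: a free RT signal */

static uid_t target_uid;
static sem_t done;

static void xid_handler(int sig)
{
        (void) sig;
        /* The raw syscall changes only the calling thread's
           credentials, which is why every thread has to issue it. */
        syscall(SYS_setuid, target_uid);
        sem_post(&done);
}

/* Called by the thread executing setuid(); 'threads' is assumed to be
   the library's list of all other live threads. */
int setuid_all_threads(pthread_t *threads, int nthreads, uid_t uid)
{
        struct sigaction sa = { .sa_handler = xid_handler };

        target_uid = uid;
        sem_init(&done, 0, 0);
        sigaction(XID_SIGNAL, &sa, NULL);

        for (int i = 0; i < nthreads; i++)
                pthread_kill(threads[i], XID_SIGNAL);

        /* Wait until every thread has switched before returning; this
           is exactly the step an in-kernel AIO thread could stall. */
        for (int i = 0; i < nthreads; i++)
                sem_wait(&done);

        return syscall(SYS_setuid, uid);   /* finally change ourselves */
}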
We would need to extend this implementation to all threads created via
AIO. This requires that we can send a signal to the syslet before it
has left the kernel, and maybe before it is done with the request. The
reason is that the thread calling setuid etc. has to wait until all
threads have been successfully changed, and we cannot indefinitely
delay that thread just because of outstanding AIO requests.
My preferred solution would still be to implement setuid etc. in the
kernel, i.e., have a new syscall such as setuid_pgrp.
> You can fix this by going around to all the threads in the thread
> group and changing their ->uid, but that seems somewhat kludgy.
That's what we do already and I couldn't agree more with you.
> You could also fix this by killing off the async threads on a setuid,
> and make the app recreate the async threads under its new uid.
This cannot work reliably; some of the AIO might already have happened.
> There's also ptrace, which (as I think Zach already mentioned) gets
> along badly with syslets.
syslets should be invisible to ptrace until they enter userlevel.