On Sun, 11 Jun 2006, Neil Brown wrote:
>
> I'm wondering what all this has to say about general principles of
> sub-project development with the Linux kernel.
Yes. That's an interesting and relevant tangent.
> Due to the (quite appropriate) lack of a stable API for kernel
> modules, it isn't really practical (and definitely isn't encouraged)
> to distribute kernel-modules separately. This seems to suggest that
> if we want a 'stable' and a 'devel' branch of a project, both branches
> need to be distributed as part of the same kernel tree.
>
> Apart from ext2/3 - and maybe reiserfs - there doesn't seem to be much
> evidence of this happening. Why is that?
I think part of it is "expense". It's pretty expensive to maintain on a
bigger scale. For example, you mention "-mm", and there's no question that
it's _very_ expensive to do that (ie you basically need a very respected
person who must be spending a fair amount of effort and time on it).
Even in this case, I think a large argument has been that ext3 itself
isn't getting a lot of active development outside of the suggested ext4
effort, so the "expense" there is literally just the copying of the files.
That works ok for a filesystem every once in a while, but it wouldn't
scale to _everybody_ doing it often.
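To make that concrete: a duplicated filesystem is literally just another
file_system_type registered under its own name, so it can live next to the
original in the same tree. Very rough sketch (not real code from anybody's
patch - the "ext4dev" name and the ext4_* functions are made up for
illustration, and the exact get_sb signature has moved around during 2.6.x):

#include <linux/module.h>
#include <linux/fs.h>
#include <linux/errno.h>

/* In a real fork this would be the copied-and-renamed ext3 superblock code. */
static int ext4_fill_super(struct super_block *sb, void *data, int silent)
{
	return -ENOSYS;
}

static int ext4_get_sb(struct file_system_type *fs_type, int flags,
		       const char *dev_name, void *data, struct vfsmount *mnt)
{
	return get_sb_bdev(fs_type, flags, dev_name, data,
			   ext4_fill_super, mnt);
}

static struct file_system_type ext4dev_fs_type = {
	.owner		= THIS_MODULE,
	.name		= "ext4dev",		/* new name: ext3 stays untouched */
	.get_sb		= ext4_get_sb,
	.kill_sb	= kill_block_super,
	.fs_flags	= FS_REQUIRES_DEV,
};

static int __init init_ext4dev_fs(void)
{
	return register_filesystem(&ext4dev_fs_type);
}

static void __exit exit_ext4dev_fs(void)
{
	unregister_filesystem(&ext4dev_fs_type);
}

module_init(init_ext4dev_fs);
module_exit(exit_ext4dev_fs);
MODULE_LICENSE("GPL");

So a "mount -t ext4dev" picks up the duplicated code, and existing ext3 users
never see a single changed line - which is why the expense really is just the
copying.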
Also, in order for it to work at all, it obviously needs to be a part of
the kernel that -can- be duplicated. That pretty much means "filesystem"
or "device driver". Other parts aren't as amenable to having multiple
concurrent versions going on at the same time (although it clearly does
happen: look at the IO schedulers, where a large reason for the pluggable
IO scheduler model was to allow multiple independent schedulers exactly so
that people _could_ do different ones in parallel).
People have obviously suggested pluggable CPU schedulers too, and, even more
radically, pluggable VM modules (not that long ago).
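The IO scheduler case is worth looking at, because the plumbing there is
trivial: each scheduler just registers an elevator_type under its own name,
and the block layer keeps them all around so you can pick one per queue.
Roughly like this (a trimmed-down sketch along the lines of noop-iosched.c -
the "mine"/myiosched_* names are made up, and I'm leaving out the callbacks a
real scheduler would actually need):

#include <linux/module.h>
#include <linux/blkdev.h>
#include <linux/elevator.h>

/* A real scheduler would pick a queued request and elv_dispatch_sort() it. */
static int myiosched_dispatch(request_queue_t *q, int force)
{
	return 0;
}

/* A real scheduler would file rq in its own data structures here. */
static void myiosched_add_request(request_queue_t *q, struct request *rq)
{
}

static struct elevator_type iosched_mine = {
	.ops = {
		.elevator_dispatch_fn	= myiosched_dispatch,
		.elevator_add_req_fn	= myiosched_add_request,
	},
	.elevator_name	= "mine",
	.elevator_owner	= THIS_MODULE,
};

static int __init myiosched_init(void)
{
	return elv_register(&iosched_mine);
}

static void __exit myiosched_exit(void)
{
	elv_unregister(&iosched_mine);
}

module_init(myiosched_init);
module_exit(myiosched_exit);
MODULE_LICENSE("GPL");

Then "elevator=mine" on the command line (or echoing the name into
/sys/block/<dev>/queue/scheduler) selects it at run-time, and the other
schedulers never even notice. That kind of infrastructure is what makes the
parallel development cheap.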
> It seems a bit rough to insist that the ext-fs fork every so-often,
> but not impose similar requirements on other sections of code.
Well, as mentioned, it's actually quite common in drivers. It's clearly
not the _main_ development model, but it's happened several times in
almost every single driver subsystem (ie SCSI drivers, video drivers,
network drivers, USB, and IDE have _all_ seen "duplicated" drivers where
somebody just decided to do things differently and, rather than extend an
existing driver, did an alternate one).
So it's not like this is _exceptional_. It happens all the time. It
obviously happens less than normal development (we couldn't fork things
every time something changes), but it's not unheard of, or even rare.
> So: what would you (collectively) suggest should be the policy for
> managing substantial innovation within Linux subsystems? And how
> broadly should it be applied?
I think the interesting point is how we're moving away from the "global
development" model (ie everything breaks at the same time between 2.4.x
and 2.6.x), and how the fact that we're trying to maintain a more stable
situation may well mean that we'll see more of the "local development"
model where a specific subsystem goes through a development series, but
where stability requirements mean that we must not allow it to disturb
existing users.
And even more interestingly (at least to me), the question might become
one of "how does that affect the tools and build and configuration
infrastructure", and just the general flow of development.
I don't think one or two filesystems (and a few drivers) splitting is
anything new, but if this ends up becoming _more_ common, maybe that
implies a new model entirely..
Linus