>>>>> "Linus" == Linus Torvalds <[email protected]> writes:
Linus> On Sun, 6 Nov 2005, Linus Torvalds wrote:
>>
>> And no standard hardware allows you to do that in hw, so we'd end up doing
>> a software page table walk for it (or, more likely, we'd have to make
>> "struct page" bigger).
>>
>> You could do it today, although at a pretty high cost. And you'd have to
>> forget about supporting any hardware that really wants contiguous memory
>> for DMA (sound cards etc). It just isn't worth it.
Linus> Btw, in case it wasn't clear: the cost of these kinds of things
Linus> in the kernel is usually not so much the actual "lookup"
Linus> (whether with hw assist or with another field in the "struct
Linus> page").
Linus> The biggest cost of almost everything in the kernel these days
Linus> is the extra code-footprint of yet another abstraction, and the
Linus> locking cost.
Linus> For example, the real cost of the highmem mapping seems to be
Linus> almost _all_ in the locking. It also makes some code-paths more
Linus> complex, so it's yet another I$ fill for the kernel.
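For anyone following along, my understanding is that the highmem path he's talking about looks roughly like the sketch below. The point is that kmap() has to hand out slots from a small shared kernel-virtual-address pool, and the serialization on that pool (plus the occasional TLB flush when it wraps) is where the cost goes, not the lookup itself. A simplified illustration, not the real mm/highmem.c code:

#include <linux/highmem.h>	/* kmap(), kunmap() */
#include <linux/mm.h>		/* struct page */
#include <linux/string.h>	/* memcpy() */

/* Copy data out of a page that may or may not have a permanent kernel
 * mapping.  For a lowmem page kmap() just returns its direct address;
 * for a highmem page it has to grab a slot in the shared pkmap area,
 * which means taking a global lock. */
static void copy_from_page(struct page *page, void *dst, size_t len)
{
	void *src = kmap(page);

	memcpy(dst, src, len);
	kunmap(page);		/* release the slot for other users */
}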
This raises an interesting question for me: what features do Linux
developers most want from CPUs and their chipsets? I know there are
different problem spaces, from embedded, where power and cost are
king, to user desktops to big, big clusters.
Has any vendor come close to the ideal CPU architecture for an OS? I
would assume that you'd want:
1. large address space, 64 bits
2. large IO space, 64 bits
3. high memory/IO bandwidth
4. efficient locking primitives? (see the sketch after this list)
   - keep some registers for locking only?
5. efficient memory bandwidth?
6. simple setup where you don't need so much legacy cruft?
7. clean CPU design? RISC? Is CISC king again?
8. variable page sizes?
   - how does this affect the TLB?
   - how do you change sizes in a program?
9. SMP or hyper-threading or multi-cores?
10. PCI (and its flavors) addressing/DMA support?
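On point 4, what I really mean by "efficient locking primitives" is how cheap the hardware makes something like the test-and-set loop below (an atomic read-modify-write plus the right memory ordering). This is just a user-space sketch using C11 atomics, with made-up names like spin_lock_ish; the kernel's real spinlocks also deal with preemption, IRQs and debugging, but this is the part the CPU has to make fast:

#include <stdatomic.h>

struct spinlock_ish {
	atomic_flag locked;	/* ATOMIC_FLAG_INIT when unlocked */
};

static void spin_lock_ish(struct spinlock_ish *l)
{
	/* The test-and-set maps to something like XCHG on x86 or an
	 * LL/SC pair on most RISC CPUs; acquire ordering keeps the
	 * critical section from leaking out ahead of the lock. */
	while (atomic_flag_test_and_set_explicit(&l->locked,
						 memory_order_acquire))
		;	/* spin; a real lock would back off / pause here */
}

static void spin_unlock_ish(struct spinlock_ish *l)
{
	atomic_flag_clear_explicit(&l->locked, memory_order_release);
}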
With the growth in data versus instructions these days, does it make
sense to have memory split into D/I sections? Or is it better to just
have a completely flat memory model and let the OS do any splitting it
wants?
Heck, I don't know. I'm just interested in where
Linus/Alan/Andrew/et al. think the low-level system design should be
heading, since that is what will make things simpler and faster at
the OS level. I'm completely ignoring the application level since,
ideally, it isn't going to change much... really.
To me, it seems that efficient low-level locking primitives that work
well in any of UP/SMP/NUMA environments would be key. Just look at
all the fine-grained locking people have been adding to the kernel
over the years to work around the problems of the BKL.
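To make the BKL point concrete, the conversion people keep doing looks roughly like this: replace the one global lock_kernel()/unlock_kernel() around everything with a lock that only covers the data actually being touched. (A simplified sketch; my_device and its fields are made up.)

#include <linux/smp_lock.h>	/* lock_kernel()/unlock_kernel(), the BKL */
#include <linux/spinlock.h>

struct my_device {		/* hypothetical per-device state */
	spinlock_t lock;	/* protects just this device's counter */
	unsigned long events;
};

/* Old style: every caller, for every device, serializes on one global lock. */
static void count_event_bkl(struct my_device *dev)
{
	lock_kernel();
	dev->events++;
	unlock_kernel();
}

/* Fine-grained style: contention only between users of the same device. */
static void count_event(struct my_device *dev)
{
	spin_lock(&dev->lock);
	dev->events++;
	spin_unlock(&dev->lock);
}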
Of course making memory faster would be nice too...
I know, it's all out of left field, but it would be interesting to
see what people think. I honestly wonder whether Intel, AMD, PowerPC,
and Sun really try to work from the top down when designing their
chips, or more from a "this is where we are, how can we speed up what
we've got?" point of view.
Thanks,
John