BTW, unless you have a patch or something to propose, let's take this
off-list; it's getting kind of OT now.
On Jan 31, 2006, at 10:56, Al Boldi wrote:
> Kyle Moffett wrote:
>> Is it necessarily faulty? It seems to me that the current way
>> works pretty well so far, and unless you can prove a really strong
>> point the other way, there's no point in changing. You have to
>> remember that change introduces bugs which then have to be located
>> and removed again, so change is not necessarily cheap.
> Faulty, because we are currently running a legacy solution to work
> around an 8-, 16-, (32-)bit address-space limitation, which does not
> exist on 64-bit+ archs for most purposes.
There are a lot of reasons for paging; only _one_ of them is/was to
deal with too-small address spaces. Other reasons are that sometimes
you really _do_ want a nonlinear mapping of data/files/libs/etc. It
also allows easy remapping of IO space or video RAM into application
address spaces, etc. If you have a direct linear mapping from
storage into RAM, common non-linear mappings become _extremely_
complex and CPU-intensive.
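
To make that concrete, here's a rough userspace sketch of the kind of
nonlinear mapping the page tables give you essentially for free (the
file name and the 4MB offset are made up, and error handling is
trimmed): two pages that end up adjacent in the virtual address space
but come from file offsets nowhere near each other. With a direct
linear mapping from storage you'd have to physically shuffle or copy
the data to get the same layout.

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        long pg = sysconf(_SC_PAGESIZE);
        int fd = open("some-large-file", O_RDONLY);  /* placeholder */

        /* Reserve two adjacent pages of virtual address space... */
        char *base = mmap(NULL, 2 * pg, PROT_NONE,
                          MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

        /* ...back the first page with file offset 0... */
        mmap(base, pg, PROT_READ, MAP_PRIVATE | MAP_FIXED, fd, 0);

        /* ...and the second page with an offset far into the file.
         * Adjacent virtual pages, non-adjacent file offsets: a
         * nonlinear mapping, done entirely by the MMU/page tables. */
        mmap(base + pg, pg, PROT_READ, MAP_PRIVATE | MAP_FIXED,
             fd, 1024 * pg);

        printf("%p -> file offset 0\n", (void *)base);
        printf("%p -> file offset %ld\n", (void *)(base + pg), 1024 * pg);

        munmap(base, 2 * pg);
        close(fd);
        return 0;
    }

The same mechanism is what lets you mmap() a framebuffer or other IO
region straight into a process; none of that falls out naturally from
a flat storage-equals-address-space model.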
Besides, you never did address the issue of large changes causing
large bugs. Any large change needs to have advantages proportional
to the bugs it will cause, and you have not yet proven this case.
> Trying to defend the current way would be similar to rejecting the
> move from 16bit to 32bit. Do you remember that time? One of the
> arguments used was: the current way works pretty well so far.
Arbitrary analogies do not prove things. Can you cite examples that
clearly indicate how paged-memory is to direct-linear-mapping as 16-
bit processors are to 32-bit processors?
>>> There is a lot to gain, for one there is no more swapping w/ all
>>> its related side-effects.
>> This is *NOT* true. When you have more data than RAM, you have to
>> put data on disk, which means swapping, regardless of the method in
>> which it is done.
> You're dealing with memory only. You can also run your fs inside
> memory, like tmpfs, which is definitely faster.
Not on Linux. We have a whole unique dcache system precisely so that
a frequently accessed filesystem _is_ as fast as tmpfs (Unless you're
writing and syncing a lot, in which case you still need to wait for
disk hardware to commit data).
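
If you want to convince yourself, here's a quick sketch ("bigfile" is
a placeholder; use a large file that hasn't been touched recently so
the first pass is actually cold). The second pass is served straight
out of the page cache and is typically far faster, no tmpfs required:

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/time.h>
    #include <unistd.h>

    /* Read the whole file and return elapsed wall-clock seconds. */
    static double read_all(const char *path)
    {
        char buf[1 << 16];
        struct timeval t0, t1;
        ssize_t n;
        int fd = open(path, O_RDONLY);

        gettimeofday(&t0, NULL);
        while ((n = read(fd, buf, sizeof(buf))) > 0)
            ;   /* data fills (or is served from) the page cache */
        gettimeofday(&t1, NULL);
        close(fd);
        return (t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) / 1e6;
    }

    int main(void)
    {
        printf("first read:          %.3fs\n", read_all("bigfile"));
        printf("second read (cache): %.3fs\n", read_all("bigfile"));
        return 0;
    }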
>>>>> And there may be lots of other advantages, due to the simplified
>>>>> architecture applied.
>>>> Can you describe in detail your "simplified architecture"?? I can't
>>>> see any significant complexity advantages over the standard paging
>>>> model that Linux has.
>>> Why would you think that the shortest path between two points is
>>> complicated, when you have the ability to fly?
>> Bad analogy.
> If you didn't understand its meaning: the shortest path means
> accessing hw w/o running workarounds; using 64bits+ to fly over
> past limitations.
This makes *NO* technical sense and is uselessly vague. Applying
vague indirect analogies to technical topics is a fruitless
endeavor. Please provide technical points and reasons why it _is_
indeed shorter/better/faster, and then you can still leave out the
analogy because the technical argument is sufficient.
>>>> But unless the stumbling block since 1980 has been that it was too
>>>> hard to get/make a CPU with a 64 bit address space, I don't see
>>>> what's different today.
>>> You are hitting the nail right on its head here. Nothing moves the
>>> masses like mass-production.
>> Uhh, no, you misread his argument: If there were other reasons that
>> this was not done in the past than lack of 64-bit CPUs, then this is
>> probably still not practical/feasible/desirable.
> Uhh?
> The point here is: Even if there were 64bit archs available in the
> past, this did not mean that moving into native 64bits would be
> commercially viable, due to its unavailability on the mass-market.
Are you even reading these messages?
1) IF the ONLY reason this was not done before is that 64-bit archs
were hard to get, then you are right.
2) IF there were OTHER reasons, then you are not correct.
This is the argument. You keep discussing how 64-bit archs were not
easily available before and are now, and I AGREE, but that is NOT
RELEVANT to the point he made. Can you prove that there are no other
disadvantages to a linear-mapped model?
Cheers,
Kyle Moffett
--
Q: Why do programmers confuse Halloween and Christmas?
A: Because OCT 31 == DEC 25.