Chris Schumann wrote:
Late reply; sorry.
Date: Thu, 24 May 2007 08:43:26 -0700
From: Les <hlhowell@xxxxxxxxxxx>
Embedded applications today are mostly 8 bit, but many, many designers have already begun the transition to 16 bit, and soon will be moving to 32 bit. The reasons are much the same as the reasons that general computing has moved from 8 to 16 to 32 and now to 64, with the cutting edge already looking at 128 bit and parallel processing, along with dedicated processors running 32 or 64 bit floating point math. Also, the length of the integer used in C, which is a virtual machine, is independent of the word length of the processor, except that the C language designers (originally Kernighan and Ritchie) made the language somewhat flexible to simplify migration. That is why there were some undefined situations in the original specification. Remember that C is a virtual machine language, whose processor only has 24 instructions (I think the ANSI committee added a couple, but they have specific uses that were not foreseen in the original usage of the language). It can be ported to any machine currently extant by writing only about 1K of machine code, and even that can be done in another available higher level language if you so desire, as long as it is compiled for efficiency.
Having used C since the original K&R version, I have to ask WHAT?!?
Since when is C a virtual machine language?
I believe that the reference is to this language from the Standard:
5.1.2.3 Program execution
[#1] The semantic descriptions in this International Standard describe the behavior of an abstract machine in which issues of optimization are irrelevant.
[snip]
(and because it is a virtual machine language...)
That is why even the 8 bit implementations of C used a 16 bit integer.
No it's not. They used 16 bit integers because you can't do much of anything useful with only 8 bit integers. The compiler designers for those systems (like the Apple II) had to work around the 8 bit registers. Looking at the assembly-language source for some of the libraries was not pleasant.

They used 16 bit integers because the original compiler was for the PDP series of machines. The architecture of that machine influenced several aspects of the language. As the compiler evolved, some of those architectural aspects were removed, but not all by any means. The argument goes the wrong way: the reason the PDP series used 16 bit integers is that not much can be done with 8 bit integers. This is what influenced the compiler.
Mike
--
p="p=%c%s%c;main(){printf(p,34,p,34);}";main(){printf(p,34,p,34);}
Oppose globalization and One World Governments like the UN.
This message made from 100% recycled bits.
You have found the bank of Larn.
I can explain it for you, but I can't understand it for you.
I speak only for myself, and I am unanimous in that!