Re: OT: Requesting C advice

On Thu, 2007-05-24 at 05:06 -0500, Mike McCarty wrote:
> Les wrote:
> > 
> > Hi, George, 
> >     First integers are now 32 bit in most compilers, and long integers
> > are still 32 bit in most as well.  Thus what you saw is probably a
> 
> [snip]
> 
> Define "most compilers". "Most" machines are 16 bit or less and sport
> compilers to match. How many microwave ovens have computers in them?
> How many automobile ignition systems? How many use C?
> 
> Mike
It looks like you are trolling, Mike.  

    We were discussing C, so I saw no reason to specify anything else.
Most compilers therefore means most C compilers. We are also not
discussing embedded applications. I don't know what Microsoft is doing
these days, and I care less. Most general computing machines today are
PCs using Intel 80xxx or AMD processors, and the other architectures
available are generally similar, with 32- or 64-bit internal registers
and C compilers designed to the ANSI standard. Embedded applications
today are mostly 8-bit, but many, many designers have already begun the
transition to 16-bit and will soon move to 32-bit. The reasons are much
the same as the reasons general computing has moved from 8 to 16 to 32
and now to 64 bits, with the cutting edge already looking at 128-bit and
parallel processing, along with dedicated processors running 32- or
64-bit floating point math. Also, the length of the integer used in C,
which is a virtual machine, is independent of the word length of the
processor, except that the C language designers (originally Kernighan
and Ritchie) made the language somewhat flexible to simplify migration.
That is why there were some undefined situations in the original
specification. Remember that C is a virtual machine language whose
processor has only 24 instructions (I think the ANSI committee added a
couple, but they have specific uses that were not foreseen in the
original design of the language). It can be ported to any machine
currently extant by writing only about 1K of machine code, and even that
can be done in another available higher-level language if you so desire,
as long as it is compiled for efficiency.
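
    To illustrate that C's integer widths are implementation-defined
rather than fixed by the processor word, here is a minimal sketch in
standard C (nothing assumed beyond <stdio.h>); the sizes it prints vary
by compiler and platform:

        #include <stdio.h>

        int main(void)
        {
            /* Typical results: int is 2 bytes on old 16-bit compilers,
             * 4 bytes on common 32/64-bit ANSI C compilers; long is
             * 4 or 8 bytes depending on the platform. */
            printf("sizeof(short) = %u\n", (unsigned)sizeof(short));
            printf("sizeof(int)   = %u\n", (unsigned)sizeof(int));
            printf("sizeof(long)  = %u\n", (unsigned)sizeof(long));
            return 0;
        }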

    That is why even the 8-bit implementations of C used a 16-bit
integer. As to most machines, I think Intel and AMD together pretty much
consume that adjective, and if you add Sony, I am sure you have the
majority of the general computers running C. Again, we are not
discussing embedded applications here (most of which actually run BASIC
or machine language anyway).
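
    That 16-bit minimum is actually written into the standard:
<limits.h> guarantees INT_MAX is at least 32767. A quick way to see what
your own compiler provides:

        #include <stdio.h>
        #include <limits.h>

        int main(void)
        {
            /* The standard guarantees INT_MAX >= 32767 (a 16-bit int),
             * which is why even the 8-bit C implementations used a
             * 16-bit integer. */
            printf("INT_MAX  = %d\n", INT_MAX);
            printf("LONG_MAX = %ld\n", LONG_MAX);
            return 0;
        }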

    As to your reference to calculators in response to Matthew, hardly
any of them run anything but 32-bit integer calculations and 32-bit
floating point. Most are manufactured in the Pacific Rim and use C as
the design language. A few use HP's RPN, but most use the stack-based
operations common with C, along with C's order-of-evaluation algorithm,
although recently I have been noticing anomalies in the cheaper
versions, where they do not take precedence into consideration. I
believe these are based in China, where the use of the abacus had no
requirement for precedence and calculations took place as entered. I
could be wrong, but that is my assumption for the differences I see in
the evaluation formulas.
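
    To make the precedence difference concrete: C evaluates 2 + 3 * 4
as 14, because multiplication binds tighter; a calculator that applies
operations strictly as entered gives (2 + 3) * 4 = 20. A tiny C sketch
showing both results:

        #include <stdio.h>

        int main(void)
        {
            /* C applies operator precedence: * binds tighter than +. */
            printf("2 + 3 * 4   = %d\n", 2 + 3 * 4);   /* prints 14 */
            /* A strictly left-to-right calculator effectively does: */
            printf("(2 + 3) * 4 = %d\n", (2 + 3) * 4); /* prints 20 */
            return 0;
        }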

    Years and years ago, Burroughs had a calculator library based upon
hexadecimal digits, where a decimal digit was stored in a nibble, and
Intel chips still contain an instruction (DAA) to reformat the
accumulator to one digit per nibble to conform to that standard. (I
think they will keep that instruction forever, because there is an
algorithm out there that uses it to produce decimal digits in ASCII for
printing.) However, I believe that format has fallen into disuse because
of speed and storage issues.
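
    For readers who have not run into that format, here is a minimal
sketch in plain C of the packed-BCD idea, one decimal digit per four-bit
nibble (the function name is mine, purely for illustration):

        #include <stdio.h>

        /* Pack the two decimal digits of a value 0..99 into one byte,
         * one digit per nibble (the layout DAA was designed to
         * support). */
        unsigned char to_packed_bcd(unsigned value)
        {
            return (unsigned char)(((value / 10) << 4) | (value % 10));
        }

        int main(void)
        {
            unsigned char bcd = to_packed_bcd(42);
            /* Prints 0x42: the hex digits read as decimal 4 and 2. */
            printf("42 decimal -> 0x%02X packed BCD\n", (unsigned)bcd);
            return 0;
        }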

    As to the example for big-endian,
        printf ("%d\n",x); /* x is long */
    the format is wrong: "%d" tells printf to expect an int, but x is a
long, and the standard leaves the result of that mismatch undefined.
What actually gets printed (the -1 you saw, garbage, or even the right
value on machines where int and long happen to be the same size) varies
from one implementation to another. The correct syntax to print a long
integer is:
        printf ("%ld\n",x); /* x is long */
    and that will print the same on all machines.
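
    A complete, self-contained version of the corrected call (standard
C only):

        #include <stdio.h>

        int main(void)
        {
            long x = 123456789L;
            /* "%ld" matches the long argument, so this is portable. */
            printf("%ld\n", x);
            return 0;
        }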

    I think Matthew gave a means of examining memory that will tell you
whether a machine is big-endian or little-endian. Generally the only
time a user will notice any difference is in code that examines
multi-byte values a byte at a time, for example when parsing raw memory
or the stack, since the bytes are stored in the opposite order on a
big-endian machine. There are other issues, but this was the one that
really fanned this argument into flames in the 1970s.
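
    For completeness, here is a minimal sketch of the kind of memory
check I believe Matthew described, assuming only standard C: inspect the
first byte of a known multi-byte value.

        #include <stdio.h>

        int main(void)
        {
            unsigned int probe = 1;
            unsigned char *first_byte = (unsigned char *)&probe;

            /* Little-endian machines store the least significant byte
             * first in memory, so *first_byte is 1; on big-endian
             * machines it is 0. */
            if (*first_byte == 1)
                printf("little-endian\n");
            else
                printf("big-endian\n");
            return 0;
        }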


Regards,
Les H

