On Wed, 23 May 2007, Les wrote:
On Wed, 2007-05-23 at 18:45 -0500, Michael Hennebry wrote:
On Wed, 23 May 2007, Mike McCarty wrote:
Michael Hennebry wrote:
On Wed, 23 May 2007, George Arseneault wrote:
Now the bad news... C, C++, gnu, several variations on
the ISO; not to mention all the libraries, etc. And,
to top it off, some of the stuff in the book just
doesn't work. (A program to demonstrate the various
types of integer variables and how to display them
with printf() failed to show any difference with any
arguments I could find.)
Should they have produced different results?
On big-endian machines, they can. For example, with two's complement
arithmetic on a big-endian machine,
printf("%d\n",-2);
does not result in
-2
It should.
printf, declared or not, will look for an int and get it.
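A minimal test of that, for what it's worth (plain standard C, nothing
implementation-specific assumed):

#include <stdio.h>

int main(void)
{
    /* -2 is passed by value as an int and %d expects an int, so this
       is well-defined and prints -2 on any conforming implementation,
       big- or little-endian. */
    printf("%d\n", -2);
    return 0;
}
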
printf("%u\n", -2);
is more interesting.
We might be in the domain of nasal demons.
printf("%u\n", (unsigned)-2);
Is legal, but rather obviously will not print "-2\n".
It will probably print something even regardless of endianness.
It will definitely print *something*. The question is, can you guarantee
what it will print?
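For a concrete data point, a sketch like the following (the figure in
the comment assumes a 32-bit int; other widths give UINT_MAX - 1 for
that width):

#include <stdio.h>

int main(void)
{
    /* (unsigned)-2 is well-defined; with a 32-bit int this prints
       4294967294, never "-2". */
    printf("%u\n", (unsigned)-2);
    return 0;
}
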
It's not addressed directly in the FAQ, but I believe it's possible to
prove that (unsigned) -2 must be the two's complement representation of -2
in however many bits make up an int. I know there was some controversy
about that when the standard was being developed. In any case, I don't
know of any modern machine that doesn't represent negative integers in
two's complement.
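If memory serves, the guarantee comes from the conversion rule rather
than from the representation: conversion to unsigned is defined
arithmetically, modulo UINT_MAX + 1. A small check of that reading
(assuming only <limits.h> and <assert.h>):

#include <assert.h>
#include <limits.h>
#include <stdio.h>

int main(void)
{
    /* (unsigned)-2 == UINT_MAX + 1 - 2 == UINT_MAX - 1, whatever bit
       pattern the machine uses for -2.  On a two's complement machine
       that value happens to coincide with the bit pattern. */
    assert((unsigned)-2 == UINT_MAX - 1);
    printf("(unsigned)-2 = %u\n", (unsigned)-2);
    return 0;
}
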
If the print specifier and the value are different sizes, we are in the
realm of http://www.c-faq.com/expr/unswarn.html and
http://www.c-faq.com/expr/preservingrules.html.
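The sort of surprise those pages warn about looks something like this
(standard C; the outcome does not depend on the implementation):

#include <stdio.h>

int main(void)
{
    unsigned int u = 1;

    /* Under the usual arithmetic conversions, -1 is converted to
       unsigned int (becoming UINT_MAX) before the comparison, so the
       "surprise" branch is taken on every implementation. */
    if (-1 > u)
        printf("surprise: -1 > 1u\n");
    else
        printf("no surprise\n");
    return 0;
}
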
Printing (int)sizeof(typename) will distinguish some types.
Note that short, int and long usually only have two distinct sizes.
It's allowed, but rare, for all the arithmetic types to have size 1.
Or for them all to have different sizes.
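A quick sketch of that approach (the sizes in the comment are what a
typical 32-bit implementation reports; other systems will differ):

#include <stdio.h>

int main(void)
{
    /* sizeof yields a size_t, so cast to int to keep %d honest.
       A typical 32-bit system prints 1, 2, 4, 4 -- only two distinct
       sizes among short, int and long. */
    printf("char  %d\n", (int)sizeof(char));
    printf("short %d\n", (int)sizeof(short));
    printf("int   %d\n", (int)sizeof(int));
    printf("long  %d\n", (int)sizeof(long));
    return 0;
}
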
Note that what you suggest works because sizeof(.) for integer
types is going to be a small number. The only portable means
of displaying an unsigned integer of unknown size is
printf("Thing = %lu\n", (unsigned long int)Thing);
For "small" read <= 16.
For "rare" read "no known implementation". Since long int
is required to be at least 32 bits, that would require
that char be at least 32 bits.
And double has to be more.
How do you get that? (Not saying you're wrong...)
My recollection is that there was a
Cray compiler that had 64-bit chars.
Anyone know for sure?
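The arithmetic behind the "char would need at least 32 bits" point can
be seen from <limits.h> (the output is only what a common 32-bit
system would give):

#include <limits.h>
#include <stdio.h>

int main(void)
{
    /* CHAR_BIT must be >= 8 and LONG_MAX >= 2147483647, so a
       one-"byte" long is only possible if CHAR_BIT >= 32. */
    printf("CHAR_BIT      = %d\n", CHAR_BIT);
    printf("sizeof(long)  = %d\n", (int)sizeof(long));
    printf("bits in long  = %d\n", (int)(CHAR_BIT * sizeof(long)));
    return 0;
}
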
sizeof generally returns size in bytes or words (depending on how the
implementer read the spec).
I have never seen it return words.
sizeof(char) == 1 is guaranteed by the standard. There's no reference to
"bytes", but it is commonly accepted that the char type is a byte. It's
possible to have chars that are not eight bits, but I can't think of a
modern machine that does that. There were some old machines (Honeywells?)
that had six-bit bytes and 36-bit words.
All this is based on my recollection of discussions in comp.lang.c and
comp.std.c when the standard was under development.
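A small illustration of the "units of char, not words" reading of
sizeof (nothing assumed beyond standard C):

#include <stdio.h>

int main(void)
{
    char buf[10];

    /* sizeof counts in units of char; sizeof(char) is 1 by
       definition, so buf measures 10 regardless of the machine's word
       size or how many bits a char actually holds. */
    printf("sizeof(char) = %d\n", (int)sizeof(char));
    printf("sizeof(buf)  = %d\n", (int)sizeof(buf));
    return 0;
}
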
And the Cray stored 8 characters in 64 bits using ASCII coding a
LONG time ago. I had forgotten about that. I think that was the model
where you sat on the processing unit when you were at the console.
Regards,
Les H
--
Matthew Saltzman
Clemson University Math Sciences
mjs AT clemson DOT edu
http://www.math.clemson.edu/~mjs