Re: man 3 switch

On Mon, 2009-11-16 at 14:06 -0800, Rick Stevens wrote:
> On 11/16/2009 01:06 PM, Steven W. Orr wrote:
> > On 11/16/09 13:54, quoth Rick Stevens:
> >> On 11/14/2009 01:55 PM, Frank Cox wrote:
> >>> On Sat, 14 Nov 2009 14:50:57 -0500
> >>> Steven W. Orr wrote:
> >>>
> >>>> There's nothing wrong with perl having all kinds of perldoc pages.
> >>>> But perl
> >>>> comes from one place. C, OTOH could come from lots of places besides
> >>>> FSF and
> >>>> the switch statement in gcc may not be exactly the same as the switch
> >>>> statement in some other dialect.
> >>>
> >>> As C is an ISO standard, I sincerely doubt there would be any
> >>> difference in the
> >>> syntax and behaviour of the keywords between C compilers on any Unix-like
> >>> operating system.
> >>
> >> Incorrect.  C, for example, does not guarantee the order of evaluation
> >> of arithmetic operators of equal precedence in the same statement (in
> >> other words, is something like "a + b + c" evaluated left to right, or
> >> right to left?).  This can have significant effects if some of the
> >> operands have "side effects".
> >>
> >> Another example is that a null pointer (the value "NULL") is not
> >> necessarily zero; it is only guaranteed not to point at any valid
> >> datum.
> >>
> >> C allows quite a bit of leeway to the compiler implementation.
> >
> >
> > I think I disagree on this one. We jumped from standardization of keywords to
> > how operators perform. I quote from page 53 of K&R: Table of Precedence and
> > Associativity of Operators: a + b + c *always* goes from left to right. K&R is
> > not the standard, but does the standard say otherwise? Lots of things are up
> > to the compiler writer, but I'd be surprised if this was one of them. Sometime
> > people worry about things like
> >
> > a++ + b++ + c++
> >
> > but even there, the precedence and the associativity are defined. In this case
> >
> > a++ + b++ + c++
> >
> > becomes (in pseudo stack code):
> >
> > a++
> > b++
> > c++
> > a b +
> > c +
> >
> > because binary + is lower precedence than ++.
> >
> > No?
> 
> I probably should have added "...subject to standard rules of 
> arithmetic".  AFAIK, there's no guarantee to the order that any given
> operand to an arithmetic operator will be evaluated if they're at the
> same level of precedence.  In the examples you give, the binding of "++"
> is higher than addition so that's unambiguous. But if a, b or c are, 
> say, pointers that depend on the values of one of the others, then side
> effects can be dangerous (the classic example of that is "getc(3)").
> 
> What I was trying to get across is, while the library is pretty 
> standardized (with the exceptions usually called out in the man pages),
> there are some things that different compilers will do differently and
> one should take reasonable care to use order-enforcing syntax to make
> things work properly.
> 
> However even that is off the point of this thread.  The C _library_
> is documented, but that's because it is the underpinnings of many of
> the high-level languages used (C, C++, Fortran and most of the others).
> The languages themselves normally aren't.  The perl, python and several
> other groups do create man pages for their languages (which can be
> massive) and that's a nice thing, but IMHO it's somewhat inappropriate.
> If you need to know C, get a book.  Ditto C++ or python or other high-
> level language.  Don't rely on man pages.
> 
> I'm not even sure I like all the stuff you get with "man bash", but you 
> all know that I'm a curmudgeon.  :-)
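
	The evaluation-order point above is easy to show concretely.
A minimal sketch (the helper next() and its tags are mine, purely for
illustration):

	#include <stdio.h>

	/* reports which operand is being evaluated, then increments */
	static int next(int *p, const char *tag)
	{
		printf("evaluating %s operand\n", tag);
		return (*p)++;
	}

	int main(void)
	{
		int i = 0;
		/* C leaves the order of evaluation of the operands of +
		   unspecified: a conforming compiler may call next() for
		   either operand first, so the two trace lines may print
		   in either order */
		int sum = next(&i, "left") + next(&i, "right");
		printf("sum = %d\n", sum);
		return 0;
	}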

I have seen all kinds of effects in C code, but generally from
compilers that are not "ISO standard", whatever that really means (ISO
is not cast in stone by any means either when it comes to almost all
things).

	But some things have to work for programs to compile and run across
platforms.

	The basic copy expression found in most text books is one example:

		*a++ = *b++;

	I have seen compilers break this bit of code.  The effects run from:

		*++a = *b++
		*++a = *++b
		*a++ = *++b
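
	For context, that idiom normally lives inside the text-book string
copy.  A minimal sketch (the function name copy() is mine, just for
illustration):

	/* copies characters from src to dst, up to and including the
	   terminating '\0' */
	void copy(char *dst, const char *src)
	{
		while ((*dst++ = *src++) != '\0')
			;	/* all the work happens in the condition */
	}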
	
	I have seen some folks attempt to fix it with parentheses:

		*(a++) = *(b++);

That is probably predictable, but it would mean that the pointers would
have to be set artificially, prior to the call, to point one location
earlier, and the exit condition would leave them pointing past the last
character.
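
	The genuinely defensive fix, in the spirit of Rick's "order
enforcing syntax", is to give each side effect its own statement
instead of leaning on parentheses.  A sketch:

		/* one side effect per statement: the compiler has no
		   latitude left in how to sequence them */
		while (*b != '\0') {
			*a = *b;
			a++;
			b++;
		}
		*a = '\0';	/* copy the terminator too */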

	Another interesting artifact I have come across involves macro
substitution between debug code and release code, where macros may be
substituted for ease of tracing during the debug phase, but the code in
the macro is subtly different from the code in the routine used for
tracing execution.  It works in debug but not release, or vice versa,
depending on the conditions of the two implementations.
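
	A contrived sketch of the kind of mismatch I mean (the names
SQUARE and square_fn are mine, purely for illustration):

	#include <stdio.h>

	#ifdef DEBUG
	/* debug build: traced macro -- note (x) is evaluated twice */
	#define SQUARE(x) (fprintf(stderr, "SQUARE\n"), (x) * (x))
	#else
	/* release build: a real function; the argument is evaluated once */
	static int square_fn(int x) { return x * x; }
	#define SQUARE(x) square_fn(x)
	#endif

	int main(void)
	{
		int i = 2;
		/* release: i ends up 3.  debug: i is modified twice in
		   one expression, which is undefined behaviour, so
		   anything can happen.  Works in one build, not the
		   other. */
		int s = SQUARE(i++);
		printf("s = %d, i = %d\n", s, i);
		return 0;
	}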

	In short, I agree with Rick that you need a book, not just on C, but on
the version you have.  

	Floats are another source of errors, because some implementations
use hexadecimal notation, others binary, and still others some form of
excess notation.  Between them, round-off and truncation errors can
occur at intervals of 38, 72 and 134 if I remember correctly, or at
least something like those intervals, where the rounding function will
produce one number in one implementation and a different number in
another.  I have had people argue with me about my code being wrong,
even after I wrote a test case and showed them that the problem wasn't
the code, but rather the choice of compiler and its resulting choice of
floating point implementation.
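
	A small test case in that spirit (the exact output depends on the
floating point implementation, which is the point):

	#include <stdio.h>

	int main(void)
	{
		float sum = 0.0f;
		int i;

		/* 0.01 has no exact binary representation, so each
		   addition rounds; how the error accumulates varies with
		   the implementation's precision and rounding */
		for (i = 0; i < 100; i++)
			sum += 0.01f;

		printf("sum = %.9f, equal to 1.0? %s\n",
		       sum, sum == 1.0f ? "yes" : "no");
		return 0;
	}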

	It gets more arcane with chained calculations.  Each time you
multiply two numbers together, there is an associated rounding error.
It is generally quite small, and in most processors it is pushed out a
few bits beyond the stated resolution.  However, a write to memory
forces truncation at the stated accuracy, and then it begins to affect
really complex algorithms, such as the FFT, or arc calculations that
use squares and square roots, or graphics calculations where the
round-off and truncation errors lead to additional distortion in the
periphery of the visual field, from the combination of those errors
and the planarity error of presenting a spherical result on a flat
plane, i.e. a lens product on a monitor.  Maybe the photons don't
perform that task well either.
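
	The register-versus-memory truncation can be seen directly on
hardware with wide internal registers, classically the x87.  A sketch,
assuming such a platform:

	#include <stdio.h>

	int main(void)
	{
		double a = 1.0e16, b = 1.0;
		volatile double spill;	/* volatile forces a real store */
		double kept, stored;

		/* held in a register, the intermediate a + b may carry
		   extended precision, where 1.0e16 + 1.0 is exact */
		kept = (a + b) - a;

		/* spilling the intermediate to memory truncates it to a
		   64-bit double, where 1.0e16 + 1.0 rounds back to
		   1.0e16 */
		spill = a + b;
		stored = spill - a;

		/* x87 extended precision: kept = 1, stored = 0
		   strict IEEE double (e.g. SSE2): kept = 0, stored = 0 */
		printf("kept = %g, stored = %g\n", kept, stored);
		return 0;
	}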

But how would you categorize all these different effects, provide
examples, show the results in the several cases, and also, hopefully,
add some guidance (as man pages do) for the combinatorial effects?


Regards,
Les H

-- 
fedora-list mailing list
fedora-list@xxxxxxxxxx
To unsubscribe: https://www.redhat.com/mailman/listinfo/fedora-list
Guidelines: http://fedoraproject.org/wiki/Communicate/MailingListGuidelines
