Re: Linux "NULL pointer dereference" in the News...

Tom Horsley wrote:
> On Sun, 19 Jul 2009 23:11:52 +0530
> Rahul Sundaram wrote:
>
>> It is not so simple. This is not a compiler bug. I suggest you read
>> through http://lwn.net/Articles/341773/rss to understand why.
>
> I did. It is a compiler bug no matter what a bunch of language lawyer
> holier than thou compiler developers say :-).
>
> They claim it is undefined by the standard and therefore they can
> do whatever they want.

Speaking as a compiler developer and sometime language lawyer, you
seem to have gotten the wrong impression at several different levels.

Compilers determine what modifications they can make to the code based
on inferences drawn from the data flow through the program.  They
don't say "well, this is undefined, we can muck it up however we like".

In the case of this *kernel* bug, the compiler determined that the pointer
must be valid, because it had been previously dereferenced.  This allowed
the compiler to eliminate a test which would always be false, *as long
as the behavior of the previous code was defined*.
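
To make the pattern concrete, here is a minimal sketch of the kind of
code being discussed.  The names are hypothetical, not the actual
kernel source: the point is only that the pointer is dereferenced
before it is tested, so the compiler may legitimately conclude it
cannot be null and remove the later test.

    /* Minimal sketch of the pattern under discussion.  Hypothetical
     * names, not the actual kernel code.  The parameter is dereferenced
     * before it is tested, so an optimizing compiler is entitled to
     * assume it is non-null and delete the test. */
    struct device {
        int flags;
        int status;
    };

    int device_status(struct device *dev)
    {
        int flags = dev->flags;   /* dereference: dev assumed valid here   */

        if (dev == NULL)          /* always false if the dereference above */
            return -1;            /* was defined, so GCC may remove it     */

        return dev->status & flags;
    }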

This code is correct and well defined as long as the pointer is valid.
The behavior only becomes undefined when the pointer is null.  The only
way the compiler could determine that the initial dereference of the
pointer was undefined would be to insert a test for null before the
dereference.  It would have to do this for every piece of code that
dereferences a pointer passed into a function, dramatically impacting
performance.  Or it could issue a warning whenever it couldn't prove
that a pointer was valid, a warning that would occur thousands of
times when compiling the kernel.
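
Written with the test before the dereference, the same code is well
defined for a null pointer and the check can no longer be optimized
away.  A sketch of that version, reusing the hypothetical struct device
from the sketch above:

    /* Same hypothetical function with the test moved before the
     * dereference.  Now the null case is well defined, the compiler
     * cannot infer that 'dev' is non-null, and the check stays. */
    int device_status_checked(struct device *dev)
    {
        if (dev == NULL)
            return -1;

        return dev->status & dev->flags;
    }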

> OK, what would reasonable, sane people do in that case? That's right,
> they'd fall back on the behavior of just doing what the program source
> code says, but no, gcc is too smart for that, gcc's undefined behavior
> shows how smart it is and therefore makes much more sense than
> doing the obvious :-).

You can get exactly that behavior by not optimizing your code.
Your code will run much slower, but that's OK, isn't it?

Ah, you want optimizations?  But you want the compiler to magically
decide that in one place the optimization masks an error in the program
and shouldn't be applied, while in another place the same optimization
should be applied because it generates better code.  OK, write a
description of how to tell one case from the other and I'm sure every
compiler developer will rush to implement it.

--
Michael Eager	 eager@xxxxxxxxxxxx
1960 Park Blvd., Palo Alto, CA 94306  650-325-8077

--
fedora-list mailing list
fedora-list@xxxxxxxxxx
To unsubscribe: https://www.redhat.com/mailman/listinfo/fedora-list
Guidelines: http://fedoraproject.org/wiki/Communicate/MailingListGuidelines
