Re: sparse -Wptr-subtraction-blows: still needed?

On Tue, May 01, 2007 at 05:24:54PM -0700, Linus Torvalds wrote:
> The only thing that was gcc-specific about it is that gcc goes to some 
> extreme lengths to turn the constant-sized division into a sequence of 
> shifts/multiples/subtracts, and can often turn a division into something 
> like ten cheaper operations instead.
> 
> But that optimization was also what made gcc take such a long time if the 
> constant division is very common (ie a header file with an inline 
> function, whether that function is actually _used_ or not apparently 
> didn't matter to gcc)
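
To make that concrete, here is a rough sketch - not actual gcc output, and
the 24-byte struct and the 0xAAAAAAAB reciprocal are only illustrative - of
the kind of strength reduction being described for a pointer subtraction:

#include <stdint.h>
#include <stdio.h>

/*
 * Subtracting two pointers to a 24-byte struct compiles to a byte
 * difference divided by 24.  Instead of a divide instruction, the
 * compiler can use n / 24 == (n >> 3) / 3 and do the divide-by-3 as a
 * multiply-high by the reciprocal 0xAAAAAAAB (== ceil(2^33 / 3))
 * followed by a shift.
 */
static uint32_t div_by_24(uint32_t byte_diff)
{
    return (uint32_t)(((uint64_t)(byte_diff >> 3) * 0xAAAAAAABull) >> 33);
}

int main(void)
{
    struct elem { char pad[24]; } a[10], *p = &a[7], *q = &a[2];
    uint32_t bytes = (uint32_t)((char *)p - (char *)q);

    printf("%u elements apart\n", div_by_24(bytes));    /* prints 5 */
    return 0;
}

How much cheaper that is than a divide depends on the target, which is why
gcc goes to such lengths searching for these sequences in the first place.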

There used to be a Dumb Algorithm(tm) in there - basically, gcc went through
all decompositions of the constant it was multiplying by, and it did that in
the dumbest way possible (== without keeping track of things like "oh, we've
already found the best way to multiply by 69, no need to redo that again and
again").  _That_ is what blew the build time to hell for a while.  And yes,
that particular idiocy got fixed (they added a moderately-sized LRU cache of
triplets <constant, cost, best way to multiply by it>, which killed the
recalculation from scratch for each pointer subtraction and brought the cost
of dealing with an individual subtraction into a tolerable range).
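
For illustration only - this is nothing like gcc's actual code, and it uses a
direct-mapped table keyed only on the constant rather than an LRU of full
<constant, cost, recipe> triplets - the general shape of that memoization is:

#include <stdint.h>
#include <stdio.h>

#define CACHE_SIZE 1024u            /* made-up size, power of two */

struct mult_entry {
    uint64_t constant;              /* multiplier already analysed */
    unsigned cost;                  /* cheapest shift/add/sub count found */
    int      valid;
};

static struct mult_entry cache[CACHE_SIZE];

/*
 * Cheapest number of shift/add/sub steps to multiply by c, using the
 * textbook recursion: even constants shift, odd constants go via c-1
 * (add) or c+1 (subtract).  The cache is the point: each constant gets
 * costed once instead of from scratch at every pointer subtraction.
 */
static unsigned mult_cost(uint64_t c)
{
    struct mult_entry *e;
    unsigned cost;

    if (c <= 1)
        return 0;

    e = &cache[c & (CACHE_SIZE - 1)];
    if (e->valid && e->constant == c)
        return e->cost;                         /* hit: no recomputation */

    if (c % 2 == 0) {
        cost = 1 + mult_cost(c / 2);            /* shift */
    } else {
        unsigned down = 1 + mult_cost(c - 1);   /* ... then add x */
        unsigned up   = 1 + mult_cost(c + 1);   /* ... then subtract x */
        cost = down < up ? down : up;
    }

    e->constant = c;
    e->cost = cost;
    e->valid = 1;
    return cost;
}

int main(void)
{
    printf("multiply-by-69 costs %u operations\n", mult_cost(69));
    return 0;
}

The real fix keeps the recipe as well and uses LRU eviction, but the effect
is the same: the search runs once per constant instead of once per use.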

That's separate from runtime issues, of course.
