Re: Enquiry,,,

On Fri, 2007-02-02 at 17:44 -0800, Evan Klitzke wrote:
> On Fri, 2007-02-02 at 14:57 -0500, Dmitriy Kropivnitskiy wrote:
> > James Wilkinson wrote:
> > > It's usually a bit faster.
> > Just to avoid confusion, are you saying that 64-bit capable
> > processors are faster than 32-bit-only ones, or that an application
> > compiled for the 64-bit architecture is faster than the same
> > application compiled for the 32-bit architecture on the same
> > hardware? The reply to your post tells me that people think you mean
> > the former, whereas I was talking about the latter. I will not
> > dispute the claim that 64-bit CPUs are faster than 32-bit ones,
> > because I don't think anyone makes 32-bit-only CPUs anymore (at
> > least in the x86 architecture). So any 32-bit CPU will be just plain
> > outdated and therefore slower than any modern 64-bit (and 32-bit
> > capable) CPU. As for the applications, I believe the difference
> > should be negligible unless the application is trying to use a lot
> > of RAM. I think I have seen some benchmarks confirming this, but at
> > the moment I cannot seem to find them.
> 
> IIRC 64-bit architectures have more registers. This should make code
> compiled for a 64-bit processor a little bit faster than code compiled
> for a 32-bit processor, even if the application doesn't actually make
> use of quantities larger than 32 bits. I'm not sure how much of a
> difference this actually makes in real-world benchmarks, but it's
> something to think about.
> 
> -- Evan Klitzke
> 
Hi, Evan, and others,
	The extra registers are valuable if the code takes advantage of them.
That depends on a lot of variables.  For instance, a native C compiler
may use only about three registers, because its code is optimized for
the C machine's 24 instructions.  If the compiler breaks the C code out
into C assembly, then calls a cross assembler followed by an optimizing
assembler, the optimizations occur in one order; if the C code is
compiled directly into native code, as some compilers do, they occur in
a different order.  Moreover, most compilers can optimize for space or
for speed, yielding different machine instructions for the same code
with the same compiler.  The code author can also instruct the compiler
to use register variables, and in that case most optimizing compilers
(but not all) will use all available registers to honor the request.
This produces the most noticeable effect in tight loops, or loops
within loops, such as those used in array processing or graphics code.
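
	As a small illustration (a hypothetical sketch of my own, not
anything measured or taken from the thread), this is the kind of tight
array loop where the register-variable hint and the extra
general-purpose registers of a 64-bit x86 processor matter most: with
enough registers, the loop counter and the accumulator never have to be
spilled to memory inside the loop.

#include <stddef.h>

long sum_array(const long *a, size_t n)
{
	register long sum = 0;	/* hint: keep the accumulator in a register */
	register size_t i;	/* hint: keep the loop counter in a register too */

	for (i = 0; i < n; i++)
		sum += a[i];	/* no stack spills in the inner loop if registers suffice */
	return sum;
}

(Modern optimizing compilers largely ignore the "register" hint and do
their own allocation, but the effect is the same: the more registers the
target has, the less spilling in loops like this.)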

	So there are those things going on.  However, if a 64-bit processor
with 64-bit-wide memory is running 32-bit code, it will generally
access memory only about 2/3 as often, and since memory access is a
"slow" operation relative to processor speed, the system will seem
faster.  Also, 64-bit processors generally have twice as much
high-speed cache, reducing cache misses, another gain.  In addition,
the splitting of double-precision floating-point numbers and long
pointers into two halves that 32-bit operation requires doesn't have
to happen in 64-bit operation, another (although slight) speed gain.
Finally, the pipelines on modern 64-bit processors have a few more
"look ahead" and "prefetch" capabilities for jumps and calls than are
available on their 32-bit family members.
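
	To put the pointer and double-precision point in concrete terms,
here is a tiny C program (again, just my own hypothetical illustration)
that prints how the basic sizes change between a 32-bit and a 64-bit
build, e.g. "gcc -m32 sizes.c" versus plain "gcc sizes.c" on an x86-64
box:

#include <stdio.h>

int main(void)
{
	printf("sizeof(void *) = %zu\n", sizeof(void *)); /* 4 with -m32, 8 with -m64 */
	printf("sizeof(long)   = %zu\n", sizeof(long));   /* 4 with -m32, 8 with -m64 */
	printf("sizeof(double) = %zu\n", sizeof(double)); /* 8 on both builds */
	return 0;
}

A double is 8 bytes on either build, but on the 32-bit side it can only
move through the general-purpose registers in two 32-bit halves, which
is the splitting the previous paragraph refers to.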

So there are many things that affect the relative speed of the
processors: registers, instruction sets, optimizations, look-ahead,
branch prefetch, cache, and memory depth, just to name the most obvious.

Regards,
Les H

