Re: os that rather uses the gpu?

On Fri, Jul 16, 2010 at 12:13 AM, Genes MailLists <lists@xxxxxxxxxxxx> wrote:

 I've seen one to two orders of magnitude speedups in real-world usage
(using NVIDIA GPUs in otherwise high-end but standard equipment) - it was
really impressive - don't get caught up in technical minutiae like whether
a flop is a useful metric - using the GPU can be an enormous speed gain.
Test it for your use case.

 There are limitations - and moving data in and out of the GPU can be
costly - but careful programming can most certainly lead to huge benefits,
and that means cost savings for time-critical usage ...
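The transfer cost the poster mentions can be put in rough numbers. The sketch below is a purely illustrative break-even model (the bus bandwidth, data size, and speedup figures are assumptions, not measurements from this thread): a GPU kernel is only a net win when its speedup outweighs the round-trip cost of moving data over the PCIe bus.

```python
def net_gpu_time(data_bytes, cpu_time_s, gpu_speedup, pcie_gb_s=8.0):
    """Total GPU wall time: transfer to device + compute + transfer back.

    pcie_gb_s is an assumed effective bus bandwidth in GB/s.
    """
    transfer_s = 2 * data_bytes / (pcie_gb_s * 1e9)  # round trip over the bus
    return transfer_s + cpu_time_s / gpu_speedup

# Hypothetical example: 1 GB of data, a task taking 10 s on the CPU,
# and a kernel that is 50x faster once the data is resident on the GPU.
cpu_time = 10.0
gpu_time = net_gpu_time(1e9, cpu_time, 50.0)
print(f"CPU: {cpu_time:.2f} s, GPU incl. transfers: {gpu_time:.2f} s")
```

With these assumed numbers the transfer dominates the kernel time, which is why "careful programming" (keeping data resident, batching transfers) matters so much.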

There are problems for which GPUs will be of tremendous benefit, just as there are problems for which the single-precision arithmetic you are apparently using will be acceptable.  There are problems for which small bandwidth per flop is acceptable, but not all of them, just as there are problems for which single precision is good enough, but not all of them.  Most high-end technical computing (the really big problems) requires double-precision arithmetic, for which GPU performance is significantly less impressive.
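The "bandwidth per flop" point can be made concrete with a simple roofline-style estimate. The numbers below are hypothetical, not specs for any real GPU: a kernel whose flops-per-byte ratio falls below the machine balance (peak flops divided by peak memory bandwidth) is capped by bandwidth, not by the advertised flop rate, and the gap is worse in double precision because each operand is twice as large.

```python
def attainable_gflops(peak_gflops, bandwidth_gb_s, flops_per_byte):
    """Roofline model: delivered rate is capped by compute or by memory traffic,
    whichever bound is hit first."""
    return min(peak_gflops, bandwidth_gb_s * flops_per_byte)

# Hypothetical GPU: 1000 GFLOP/s single precision, 125 GFLOP/s double
# precision, 150 GB/s memory bandwidth.  A streaming kernel doing one
# flop per operand loaded:
sp = attainable_gflops(1000.0, 150.0, 0.25)    # 1 flop per 4-byte float
dp = attainable_gflops(125.0, 150.0, 0.125)    # 1 flop per 8-byte double
print(sp, dp)  # both bandwidth-bound, far below the quoted peaks
```

Under these assumptions the kernel reaches under 4% of single-precision peak and about 15% of double-precision peak; the headline flop number never enters the picture.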

The claim to which I objected was sweeping.  The emphasis on flops has skewed computational physics in a way to which I take very strong exception.  I think I have made that clear enough.

The problems that were nearly impossible before Fermi will not be made any easier by the introduction of Fermi.

When the DoE advertises its latest mistake, it will advertise flops and use them to claim the world's fastest or most powerful computer when, for some really important problems, most of that power is useless.  GPU computation will only intensify an already unfortunate trend.  To think of GPUs as any kind of answer to the most pressing needs of computational physics is just wrong.

In the late nineties, after some poorly thought-out purchases for the DoE, Congress was horrified to discover that actual flops in practice were sometimes only 5% of peak flops on floating-point-intensive code.  After much hand-wringing, lots of press releases, and the inevitable march of Moore's Law, many codes are still running in the 5-10% efficiency range.  GPUs are only going to make this trend worse.
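For concreteness, the 5% figure is just achieved throughput over peak throughput. A minimal sketch, with illustrative numbers (not any specific machine's):

```python
def efficiency_pct(achieved_flops, peak_flops):
    """Fraction of advertised peak actually delivered, as a percentage."""
    return 100.0 * achieved_flops / peak_flops

# A machine advertised at 1 PFLOP/s peak that sustains 50 TFLOP/s on a
# real application code:
print(f"{efficiency_pct(50e12, 1e15):.0f}% of peak")  # 5% of peak
```

This is the gap between the number in the press release and the number an application actually sees.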

Robert.
-- 
users mailing list
users@xxxxxxxxxxxxxxxxxxxxxxx
To unsubscribe or change subscription options:
https://admin.fedoraproject.org/mailman/listinfo/users
Guidelines: http://fedoraproject.org/wiki/Mailing_list_guidelines
