Re: os that rather uses the gpu?

On 07/15/2010 03:44 PM, JD wrote:
>    On 07/15/2010 01:58 PM, Michael Miles wrote:
>    
>> On 07/15/2010 11:38 AM, JD wrote:
>>      
>>>      On 07/15/2010 11:23 AM, Michael Miles wrote:
>>>
>>>        
>>>> On 07/15/2010 12:18 AM, JD wrote:
>>>>
>>>>          
>>>>>        On 07/14/2010 11:41 PM, mike cloaked wrote:
>>>>>
>>>>>
>>>>>            
>>>>>> On Thu, Jul 15, 2010 at 5:27 AM, john wendel <jwendel10@xxxxxxxxxxx> wrote:
>>>>>>
>>>>>>
>>>>>>
>>>>>>              
>>>>>>> Agreed that an OS kernel hasn't much use for a GPU. But it should be
>>>>>>> easy to add a small general purpose CPU (ARM or Intel Atom) and a couple
>>>>>>> of usb ports to the card and move X completely to the video card. Just
>>>>>>> like a remote X server only in the same box.
>>>>>>>
>>>>>>> I really think the OP was referring to having user mode code take
>>>>>>> advantage of the high processing power of modern GPUs. It works now, but
>>>>>>> could be improved if the OS contained specialized scheduling support for
>>>>>>> these kinds of jobs.
>>>>>>>
>>>>>>>
>>>>>>>                
>>>>>> I understand that the GPU has no page faults, and is missing many of
>>>>>> what we regard as the essential functions of a normal processor?  Also
>>>>>> getting large amounts of data in or out of the GPU is slow - it is
>>>>>> fast partly because there is a lot less overhead compared to a single
>>>>>> processor and partly from the advantage of multiple cores. I was
>>>>>> speaking to someone who has been working with GPU processing for
>>>>>> several years and was skeptical about getting code to run reliably
>>>>>> across different GPUs...  and of course CUDA is vendor specific as far
>>>>>> as I know? So speed gain is dependent on the kind of processing needed
>>>>>> but if anything goes wrong then it can easily crash the system.
>>>>>>
>>>>>> Perhaps anyone who has experience using the GPU could comment?
>>>>>>
>>>>>>
>>>>>>              
>>>>> Sorry to barge into this thread this late....
>>>>> Was the originator of the thread interested in having the kernel
>>>>> use the GPU for floating point operations or integer
>>>>> operations?
>>>>> If floating point, the x86 (among others) already has an
>>>>> integrated FPU, and the integer logic is already in the CPU (or ALU).
>>>>> So I do not understand what sort of computations the originator
>>>>> of the thread would like to see done on the GPU.
>>>>>
>>>>> jd
>>>>>
>>>>>
>>>>>            
>>>> The other OSes, Mac and Windows, are using the GPU in their video
>>>> conversion programs.
>>>> Newer programs will have options to enable the GPU for computation.
>>>>
>>>> I have been using the GPU for scientific computation for quite a while now.
>>>> Seti@home is very much a hobby; it takes samples from the Arecibo
>>>> telescope and analyzes the data looking for, you guessed it, ET.
>>>> It will crunch numbers very fast compared to a normal CPU.
>>>>
>>>> I bench my Phenom II 965 at 3 GFLOPS per CPU, while the GPU will be
>>>> doing 54 GFLOPS.
>>>>
>>>> I have a slow video card, an Nvidia 9400 GT. The bigger ones go right up
>>>> to a full teraflop.
>>>> That kind of speed would be very welcome if an OS used it
>>>> generally, or if software written for video conversion used it,
>>>> greatly reducing conversion time.
>>>>
>>>>
>>>> That's what I would like to see: more focus on speeding up video
>>>> conversion, especially with HD video. The GPU seems like a very
>>>> inexpensive way to add a lot of power to your machine.
>>>>
>>>>          
>>> A teraflop?? Whoa! Can the PCI bus really feed the GPU with
>>> an instruction stream that will yield that performance?
>>> I mean, most PCs out there in people's homes are still on plain PCI
>>> (33 or 66 MHz bus).
>>> Relatively fewer are on PCIe x16, which is a much faster bus.
>>>
>>> Thanks for your feedback.
>>>
>>>
>>>
>>>        
>> Only PCI-E will handle that kind of speed.
>>
>> It's also good to have a video card with fast ram and lots of it.
>>
>> The new Fermi chips coming out are mind-bogglingly fast, over a teraflop.
>>
>> So yes, seeing them take over some of the bigger computations for video
>> conversion would be great.
>>
>> For instance, the Nvidia GTX 295 has 240 stream processors per GPU and
>> is able to do 1788.48 GFLOPS.
>> Put two or three together in an SLI config and you've got some power there.
>>
>>
>> http://en.wikipedia.org/wiki/GeForce_200_Series
>>
>> This page will give you an idea of what these things do.
>>
>>
>> The 400 series uses Fermi. Take the GTX 480, for instance: 480
>> stream processors.
>> Now we are talking about 4+ TFLOPS.
>>
>> That's getting quick.
>>
>> As well, Nvidia cards are well noted for overclocking.
>>
>> My little 9400 runs at a 550 MHz core clock and a 1375 MHz shader clock.
>> I can push those up to 700 and 1800 with no artifacts and no problems.
>>
>>
>>
>>
>>
>> Michael
>>      
> This is the kind of technology that is needed to solve huge mathematical
> problems.
> Most smaller universities and colleges do not have massively parallel
> supercomputers that are vector processors. This could make it easy for
> such colleges and small university research projects to have some
> decent computational power.
> At the same time, we have to be wary of the export laws for such devices,
> because one thing I read some years back is that breaking ciphers is
> exactly what such powerful processors can be used for, or adapted to be
> used for. For many years this has been a tug of war between industry on
> one side, and the Department of Defense and the NSA on the other.
>
>    
I would like to see it used for ray tracing.

I am a 3D junkie and have been using LightWave since the Amiga 2000 and
the Video Toaster.
I still use the Windows version of LightWave, and complex ray tracing
still takes far too long on my Phenom II 965, partly because the program
uses only SSE2, not SSE3.
I would like to send work off to a "render farm" (the GPU), like we used
to send it off to the Raptor (Alpha series) for rendering.....


That would reduce the render time of, say, a 10-minute animation from
years to weeks.
Even now, with the speed of today's CPUs, animations of that size
and resolution take a very long time.
Using a GPU for these would be ideal.
It would take a savvy programmer with some time and a reason to do it.
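
(To put rough numbers on that, with figures of my own rather than anything
measured in LightWave: a 10-minute animation at 30 frames per second is
18,000 frames, so at one hour of CPU rendering per frame that is about two
years of wall-clock time; a 20x speedup from handing the ray work to the
GPU would bring the same job down to roughly five or six weeks.)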

With the CUDA SDK 3.0 the tools are there, just not me, as I have no clue
when it comes to programming.
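
Just to give a flavor of what that kind of code looks like, here is a toy
sketch of my own (not LightWave code and not an SDK sample; the kernel
name, camera, and sphere are made up for illustration): one CUDA thread per
pixel, each firing a primary ray at a single hard-coded sphere and writing
a grey value. The per-pixel independence is exactly what maps onto hundreds
of stream processors.

// Toy sketch: one thread per pixel, one ray per thread, one sphere.
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

__global__ void trace(unsigned char *img, int w, int h)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= w || y >= h) return;

    // Primary ray from a pinhole camera at the origin, looking down -z.
    float aspect = (float)w / h;
    float u = ((x + 0.5f) / w * 2.0f - 1.0f) * aspect;
    float v = (y + 0.5f) / h * 2.0f - 1.0f;
    float dx = u, dy = v, dz = -1.0f;
    float len = sqrtf(dx * dx + dy * dy + dz * dz);
    dx /= len; dy /= len; dz /= len;

    // Unit sphere centred at (0, 0, -3): solve |O + tD - C|^2 = 1.
    float ocx = 0.0f, ocy = 0.0f, ocz = 3.0f;   // ray origin minus centre
    float b = ocx * dx + ocy * dy + ocz * dz;
    float c = ocx * ocx + ocy * ocy + ocz * ocz - 1.0f;
    float disc = b * b - c;

    unsigned char shade = 0;                    // background stays black
    if (disc >= 0.0f) {
        float t = -b - sqrtf(disc);             // nearest intersection
        if (t > 0.0f) {
            // Facing-ratio shading: surface normal dotted with -view dir.
            float nx = t * dx, ny = t * dy, nz = t * dz + 3.0f;
            float facing = -(nx * dx + ny * dy + nz * dz);
            shade = (unsigned char)(255.0f * fmaxf(facing, 0.0f));
        }
    }
    img[y * w + x] = shade;
}

int main(void)
{
    const int w = 640, h = 480;
    unsigned char *d_img;
    cudaMalloc((void **)&d_img, w * h);

    dim3 block(16, 16);
    dim3 grid((w + block.x - 1) / block.x, (h + block.y - 1) / block.y);
    trace<<<grid, block>>>(d_img, w, h);

    unsigned char *img = (unsigned char *)malloc(w * h);
    cudaMemcpy(img, d_img, w * h, cudaMemcpyDeviceToHost);

    // Dump a PGM so the result can be eyeballed: a shaded disc on black.
    FILE *f = fopen("sphere.pgm", "wb");
    fprintf(f, "P5\n%d %d\n255\n", w, h);
    fwrite(img, 1, w * h, f);
    fclose(f);

    cudaFree(d_img);
    free(img);
    return 0;
}

Compiled with nvcc and run, it writes sphere.pgm. Every pixel's work is
completely independent of every other pixel's, which is why a renderer can
keep hundreds of shader cores busy at once.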

The medical and research fields would benefit, as some computational
tasks are not even attempted because of the length of time they would
take, so the work simply does not get done.
Running those same tasks on a machine using CUDA or CAL would really
advance certain fields of technology.
And since most serious professional computers run not Windows but Linux
or Unix based systems, it would be beneficial to have support for GPU
computation built into the OS itself.
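
On JD's earlier question about whether the bus can really feed the card:
the simplest way to get a feel for it is to time the PCIe copy and the
arithmetic separately. Here is a minimal sketch of my own (the kernel name,
buffer size, and iteration count are arbitrary choices, not anything from
BOINC or the SDK) that prints a rough host-to-device bandwidth and a rough
GFLOPS figure:

// Minimal timing sketch: how fast can we push data over the bus, and how
// fast can the card crunch once the data is there? The workload is
// artificial and the numbers are only rough.
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

__global__ void fma_burn(float *data, int iters)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    float a = data[i];
    float b = 1.000001f;
    for (int k = 0; k < iters; ++k)
        a = a * b + b;              // 2 flops per iteration (multiply + add)
    data[i] = a;                    // write back so the loop is not optimized away
}

int main(void)
{
    const int n = 1 << 22;          // 4M floats, 16 MB
    const int iters = 1000;
    const size_t bytes = n * sizeof(float);

    float *h = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) h[i] = 1.0f;

    float *d;
    cudaMalloc((void **)&d, bytes);

    cudaEvent_t t0, t1;
    cudaEventCreate(&t0);
    cudaEventCreate(&t1);

    // 1. The PCIe transfer -- the "getting data in and out is slow" part.
    cudaEventRecord(t0, 0);
    cudaMemcpy(d, h, bytes, cudaMemcpyHostToDevice);
    cudaEventRecord(t1, 0);
    cudaEventSynchronize(t1);
    float copy_ms;
    cudaEventElapsedTime(&copy_ms, t0, t1);
    printf("copy:   %.0f MB in %.2f ms  (%.2f GB/s)\n",
           bytes / 1e6, copy_ms, bytes / (copy_ms * 1e6));

    // 2. The arithmetic -- n threads doing iters * 2 flops each.
    cudaEventRecord(t0, 0);
    fma_burn<<<n / 256, 256>>>(d, iters);
    cudaEventRecord(t1, 0);
    cudaEventSynchronize(t1);
    float kern_ms;
    cudaEventElapsedTime(&kern_ms, t0, t1);
    double flops = (double)n * iters * 2.0;
    printf("kernel: %.1f GFLOPS\n", flops / (kern_ms * 1e6));

    cudaFree(d);
    free(h);
    return 0;
}

The copy line alone makes the point: a 32-bit 33 MHz PCI slot tops out
around 133 MB/s in theory, while PCIe x16 is measured in GB/s, so on an old
bus the bottleneck is feeding the card long before the shader cores run out
of steam.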




Michael

-- 
users mailing list
users@xxxxxxxxxxxxxxxxxxxxxxx
To unsubscribe or change subscription options:
https://admin.fedoraproject.org/mailman/listinfo/users
Guidelines: http://fedoraproject.org/wiki/Mailing_list_guidelines

