On Fri, 2010-07-23 at 15:58 -0600, Christopher A. Williams wrote:
> > Look at the Cloud Harmony benchmarks for your favorite cloud provider here:
> >
> > http://blog.cloudharmony.com/2010/06/cloud-server-benchmarking-part-4-memory.html
> >
> > Consider that you can get (on paper) more throughput from an Isilon
> > platform than you can from the largest EMC or Hitachi RAID based device.
> >
> > The IT world is moving fast, away from fixed hardware specifications,
> > towards modular nodes that offer storage, cpu, or whatever.
>
> Indeed...! The change continues to accelerate as well.

However, the network you discussed doesn't fit what most of us have at home. I'm lucky to get 6Mb down and 1Mb up. I could go with a cable solution and get 5Mb each way, but that is still orders of magnitude below the network specified for the cloud you benchmarked.

Moreover, partitioning data to utilize the other processors is not just a matter of casting a thread on most versions of Windows. That comes down to Windows' low-level design. It can be tricked into doing it, but other burdens arise that have to be coded around in one form or another.

And as to what you said about the local OS, I don't know if that is strictly true any more, or at least it certainly doesn't have to be. When the system has multiple processors, several could run the application and another the OS, and the partitioning could be fixed or variable depending on the software design. Of course, all of the OS's interfaces to system storage would have to be re-entrant and double buffered to accomplish that. But in essence I could implement a small cluster with a file server over local 100Mb to 100Gb Ethernet and get that end of it sorted. If the server has multiple processors with individual SATA, and I have separate page swaps or hardware memory swaps, I have whipped the basic speed limits.
As to parallel processing applications, that still depends on the application casting threads to utilize the additional processors, but with cloud infrastructure, the latency of the cloud becomes an issue. If the cloud is local, that is one color of cat; if the cloud is centrally controlled at a distance, that is another cat; and if the storage is distributed, that is another tail on both cats.

Moreover, why on earth would I pay to run a computer on an hourly basis? This is simply a smokescreen to add costs to the consumer for big software and internet companies, and it is being spoon-fed to the colleges developing designs. Given equivalent connection speeds, a local network beats a distant network every time, until someone gets quantum links working. But if that works, we don't need the wires any more, do we?

I don't pretend to know what's coming, but I do know that when stuff is outside my house it can be priced annually, it can be arbitrarily withheld, it can be surveilled, it can be compromised by those who have direct physical access, and it is not something I really want or need. Just my honest opinion.

-- 
users mailing list
users@xxxxxxxxxxxxxxxxxxxxxxx
To unsubscribe or change subscription options:
https://admin.fedoraproject.org/mailman/listinfo/users
Guidelines: http://fedoraproject.org/wiki/Mailing_list_guidelines
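P.S. A back-of-envelope sketch of the local-versus-distant point above: even at equivalent link speeds, distance imposes an irreducible latency floor from signal propagation alone (roughly 2/3 of c in fiber). The distances and the 200 km/ms figure are illustrative assumptions, not measurements:

```python
# Lower bound on round-trip time from propagation delay alone.
# Bandwidth does not help here: it is set by distance and physics.
C_FIBER_KM_PER_MS = 200.0  # approx. speed of light in fiber, km per ms

def min_rtt_ms(distance_km):
    """Minimum round-trip time in milliseconds for a given distance."""
    return 2 * distance_km / C_FIBER_KM_PER_MS

for label, km in [("LAN (0.05 km)", 0.05),
                  ("metro cloud (50 km)", 50),
                  ("cross-country cloud (4000 km)", 4000)]:
    print(f"{label}: >= {min_rtt_ms(km):.3f} ms per round trip")
```

So a chatty protocol that makes thousands of round trips pays that floor on every one of them, which is why the local network wins at equal connection speeds.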