Hugh Caley wrote:
We're running FC2 on a couple of servers connected to a Nexsan ATAbeast. NFS access to the servers is great until our 200+ node cluster fires up and swamps us. I was wondering what people have used as solutions for this problem. I don't really have to serve the cluster much faster, but I do have to keep non-cluster interactive Unix sessions reasonably responsive.

Could I do something like packet filtering and queueing on the NFS servers to make sure that packets from machines other than the cluster get first priority? The cluster nodes are on designated subnets; could I have the server put everything else first, or maybe cap the cluster at about 3/4 of the bandwidth so that other traffic doesn't slow down too much? Could I do this sort of thing on the servers themselves, or would there have to be another machine in front of them?

We are currently running 256 nfsds on each server, which maybe helped a little over the default of 8.

Hugh
There are all kinds of things you can do here. For starters, I would put the servers and the cluster onto their own switch, run that into a separate NIC (network interface card) on each server, and use a second NIC for your other users. I would also dedicate a separate IDE/SCSI/whatever controller to the NFS drives. Can you separate the cluster's NFS directories from the regular users'?
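If you can split them, something like the following in /etc/exports would keep the cluster and the interactive users on separate trees (and ideally separate spindles and controllers). The paths and subnets here are only placeholders for whatever you actually use:

    /export/cluster   10.1.0.0/16(rw,no_subtree_check)
    /export/home      192.168.1.0/24(rw,sync,no_subtree_check)

As for the packet queueing question: yes, you can do that on the servers themselves, no extra box in front needed, at least for the traffic the servers send out (the big NFS read replies, which is usually where the pain is). Below is a rough sketch using Linux traffic control with HTB classes and an iptables fwmark; the subnet (10.1.0.0/16), the interface (eth0), and the gigabit rates are assumptions, so adjust them to your setup:

    # mark locally generated packets (NFS replies included) headed for the cluster subnet
    iptables -t mangle -A OUTPUT -d 10.1.0.0/16 -j MARK --set-mark 10

    # HTB root qdisc; anything unmarked falls into class 1:20
    tc qdisc add dev eth0 root handle 1: htb default 20
    tc class add dev eth0 parent 1: classid 1:1 htb rate 1000mbit

    # cluster: guaranteed roughly 3/4 of the link, can borrow up to the full rate
    tc class add dev eth0 parent 1:1 classid 1:10 htb rate 750mbit ceil 1000mbit prio 1
    # everyone else: guaranteed the rest, and serviced first when both classes
    # are busy (lower prio number wins in HTB)
    tc class add dev eth0 parent 1:1 classid 1:20 htb rate 250mbit ceil 1000mbit prio 0

    # steer marked (cluster) packets into class 1:10
    tc filter add dev eth0 parent 1: protocol ip prio 1 handle 10 fw flowid 1:10

Shaping what the cluster sends *to* you (writes and requests) has to happen on the cluster nodes or on a router in front of the servers, since you can only queue what you transmit.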
--
"The two most common things in the universe are Hydrogen and Stupidity." -- F. Zappa
Bill Perkins | perk@xxxxxxx | programmer-at-large | ALL assembly languages done here.