Re: Ethernet Channel Bonding

On Tue, 2003-11-25 at 19:16, Jim Christiansen wrote:
> Hello Everyone,
> 
> I've just joined the list and need to bite the bullet and try out Fedora.  
> I've got about 100 RedHat9 boxes being served NIS and NFS'ed home directories 
> from one RH9 box.  That one main server also serves 30 thin clients.
> 
> I have posted this to the K12LTSP list as well, so I know that a few of you 
> may recognize this message - sorry for repeats.
> 
> I want to increase the network capacity on the main server by adding another 
> nic to the same subnet.  I'm not a networking or Linux pro, but this short 
> article may do the trick for me.
> 

Well, let's sit back and think.
Gigabit Ethernet these days is pretty cheap.
How about pricing up a 10/100 Ethernet switch with one Gigabit port as
your main network switch, then putting a gigabit card in your server?
Many server motherboards these days come with gigabit NICs built in -
but of course you might not have that luxury.

I would seriously price this up before channel bonding.
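If you do end up bonding, the 2.4-kernel setup on Red Hat is roughly the
following (a sketch only - the IP address is a placeholder, and modes other
than active-backup generally need matching trunking support on the switch):

```shell
# /etc/modules.conf -- load the bonding driver for bond0:
#   alias bond0 bonding
#   options bond0 mode=balance-rr miimon=100

# /etc/sysconfig/network-scripts/ifcfg-bond0 (placeholder address):
#   DEVICE=bond0
#   IPADDR=192.168.0.254
#   NETMASK=255.255.255.0
#   ONBOOT=yes
#   BOOTPROTO=none

# /etc/sysconfig/network-scripts/ifcfg-eth0 (and likewise ifcfg-eth1):
#   DEVICE=eth0
#   MASTER=bond0
#   SLAVE=yes
#   ONBOOT=yes
#   BOOTPROTO=none

# Then restart networking so the slaves get enslaved to bond0:
service network restart
```

Note that balance-rr stripes frames across both NICs but can reorder
packets; if the switch side can't be configured, active-backup is the
safe (failover-only) mode.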

Also, you say you are using NFS. Have you looked at the network traffic
in detail, using tools like Ethereal and ntop? Are you SURE the 100Mbps
link is the bottleneck?
NFS, especially with that number of clients, is probably your major
problem.
Have you tried some NFS tuning, using utilities like nfsstat?
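Before spending money, it's worth measuring. A few commands that show
where NFS time is going (illustrative - nfsstat comes from the nfs-utils
package, and these need to run on a live server/client):

```shell
# On the server: per-call counts. Lots of getattr/lookup traffic
# relative to read/write points at attribute caching, not raw
# bandwidth, as the thing to tune.
nfsstat -s

# On a client: watch the "retrans" column. Climbing retransmissions
# mean the server or the wire is dropping requests.
nfsstat -c

# How the filesystems are actually mounted. rsize=8192,wsize=8192
# was the usual 2.4-kernel tuning; the defaults were often lower.
grep nfs /proc/mounts
```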
I've done work in the past on performance for big NFS servers - you
might find that the problem is eased simply by adding RAM, so the
kernel gets a much bigger buffer cache for the exported files.
How much RAM is in your server, may I ask?
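A quick way to see how much of your RAM the kernel is already using as
cache (standard tools, nothing NFS-specific):

```shell
# "free" shows total memory and the buffer/cache split; on a file
# server, memory not claimed by processes goes to caching files.
free -m

# The same figures straight from the kernel:
grep -E '^(MemTotal|Buffers|Cached)' /proc/meminfo
```

If "Cached" is already most of your RAM and you're still seeing disk
and network churn, more memory is the cheap fix to try first.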



