Re: OT: ISPs: Linux's role nowadays

On Fri, Feb 26, 2010 at 01:41:11PM -0800, Rick Stevens wrote:
> On 02/26/2010 03:02 AM, Pasi Kärkkäinen wrote:
> > On Thu, Feb 25, 2010 at 03:27:53PM +0000, Michal wrote:
> >> On 25/02/2010 14:00, Chris Adams wrote:
> >>> Once upon a time, Marcel Rieux <m.z.rieux@xxxxxxxxx> said:
> >>>> I was under the impression that, at most small ISPs, Linux had
> >>>> replaced Unix and played a central role in making things work. But
> >>>> today, I spoke to an ISP employee who told me that Linux was only used
> >>>> for Web servers and that, for routing and firewalling, nobody escaped
> >>>> companies Cisco and Juniper which provide "solutions" where part of
> >>>> the software has been integrated into hardware for efficiency
> >>>> purposes.
> >>>
> >>> Servers don't really make good routers.  When you are talking about
> >>> traditional low- to mid-speed telco circuits (T1, T3), there have never
> >>> been good, well-supported, cost-effective solutions for connecting those
> >>> directly to Linux systems for routing that could compete with a basic
> >>> Juniper or Cisco (or Adtran or ...) on price and ease of use.
> >>>
> >>> When you start talking about SONET links (OC-3 and up), Linux AFAIK
> >>> doesn't handle things like protected paths and the like, and then you
> >>> also quickly pass the performance capability of commodity hardware.
> >>> Newer WAN circuits are using Ethernet, but you need OAM (which Linux
> >>> doesn't support) to properly manage them as a replacement for
> >>> traditional telco circuits.
> >>>
> >>> "Real" routers (aka Juniper and Cisco) use hardware-based forwarding
> >>> that can run at line rate for 1G, 10G, and 100G interfaces.
> >>>
> >>> Dynamic routing has always been pretty weak in Linux as well.  I have a
> >>> few systems running Quagga for various purposes, but it is not nearly as
> >>> powerful and flexible as a "traditional" router.
> >>>
> >>> Now, Juniper routers all run FreeBSD, but that's only on the routing
> >>> engine (where the management and routing daemons run), not the
> >>> forwarding engine (where the actual packet forwarding takes place).
> >>> Juniper wrote all their own routing, PPP management, etc. daemons from
> >>> scratch.  It is kind of funny when you spend $100K+ on a router that has
> >>> a Celeron 850 CPU and a whopping 20G hard drive. :-)
> >>>
> >>> I have lots of Linux servers, a few other old Unix servers, and a couple
> >>> of Linux firewalls, but all my routers are Juniper.  I've been working
> >>> for small ISPs for 14 years, and I've never really seen a time where I
> >>> would try to push Linux into serious routing.  It costs too much on the
> >>> low end and can't handle the performance on the high end.
> >>>
> >>
> >> People have had great success with OpenBSD on firewalls and routers
> >> carrying lots of traffic, with 10 GbE NICs, etc.
> 
> So long as the firewall doesn't have to handle too many rules and the
> routing decisions are minimal.  At those traffic levels, the system
> would be swamped with interrupts anyway.  I think there's some serious
> measurement issues here.
> 
> > Yeah.. Linux also does OK on this front. Recently there have been reports
> > of pushing 70 - 80 Gbit/sec through a single desktop-class Linux box.
> > Yes, you read that correctly.
> 
> Well, THAT I don't buy.  I've not seen a 100Gbps or 1Tbps PCI-slot
> NIC.  I suppose you could put in an adequate number of 10Gbps NICs in a
> box...assuming you have enough slots, and I don't think the internal
> bus on any desktop is capable of moving that kind of data that fast.
> Not to mention the interrupt storm that'd ensue.
> 

See here:
http://groups.google.com/group/linux.kernel/browse_thread/thread/70e62d8a85cd3241

"We've achieved 70 Gbps aggregate unidirectional TCP performance from
one P6T6 based system to another.  We figured out in our case that
we were being limited by the interconnect between the Intel X58 and
Nvidia N200 chips.  The first 2 PCIe 2.0 slots are directly off the
Intel X58 and get the full 40 Gbps throughput from the dual-port
Myricom 10-GigE NICs we have installed in them.  But the other
3 PCIe 2.0 slots are on the Nvidia N200 chip, and I discovered
through googling that the link between the X58 and N200 chips
only operates at PCIe x16 _1.0_ speed, which limits the possible
aggregate throughput of the last 3 PCIe 2.0 slots to only 32 Gbps. "

and

"This used 4 dual-port Myricom 10-GigE NICs.  We also tested with
a fifth dual-port 10-GigE NIC, but the aggregate throughput stayed
at about 70 Gbps, due to the performance bottleneck between the
X58 and N200 chips."


> The reason there are things like Foundry and Cisco and Juniper is
> because much of the heavy lifting is done by bitslice engines and
> dedicated hardware, with a supervisor doling out the jobs and watching
> over the operation.
> 
> It's rather irrelevant what the supervisor is...Linux, BSD, OS/2,
> Plan9, Winblows, whatever.  The real grunt work is done by the
> dedicated chips.  This is one reason Cisco has been able to push IOS
> out to product lines they've acquired so fast.  It's easy to port.
> 
> When you ask a CISC to do the work that a RISC or bitslice does, you're
> going to get performance issues.
>

Yeah.. of course there's still a need for those special ASICs, but the point
was that the latest x86 hardware is capable of a LOT.

And here's the link for the 4.7 Mpps (millions of packets per second)
benchmark:

http://permalink.gmane.org/gmane.linux.network/151396
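For context on what 4.7 Mpps means: packets-per-second at Ethernet line rate depends heavily on frame size, since every frame carries 20 bytes of fixed on-wire overhead (8-byte preamble plus 12-byte inter-frame gap). A quick back-of-the-envelope sketch:

```python
# Packets/sec at Ethernet line rate for a given frame size.
# Each frame occupies (frame + 20 overhead bytes) on the wire:
# 8-byte preamble + 12-byte minimum inter-frame gap.

WIRE_OVERHEAD = 20  # bytes

def line_rate_mpps(link_gbps: float, frame_bytes: int) -> float:
    """Maximum packets/sec (in millions) on an Ethernet link."""
    bits_per_frame = (frame_bytes + WIRE_OVERHEAD) * 8
    return link_gbps * 1e9 / bits_per_frame / 1e6

print(round(line_rate_mpps(10, 64), 2))    # 64-byte frames: 14.88 Mpps
print(round(line_rate_mpps(10, 1500), 2))  # 1500-byte frames: 0.82 Mpps
```

So 4.7 Mpps is about a third of 10GbE line rate at minimum-size frames, but well past line rate for full-size frames -- the regime matters when comparing against hardware routers.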

-- Pasi

-- 
users mailing list
users@xxxxxxxxxxxxxxxxxxxxxxx
To unsubscribe or change subscription options:
https://admin.fedoraproject.org/mailman/listinfo/users
Guidelines: http://fedoraproject.org/wiki/Mailing_list_guidelines

