
Building a Plexxi Network

[This post is penned by Plexxi executive Bill Koss]

Two months ago I read an interesting blog post by Greg Ferro titled “Cheap Network Equipment Makes a Better Data Centre.”  At Plexxi I lead the sales team, and I found the post interesting because I think there is a lot of misinformation in the market about the cost of procuring networks.  Every week, it seems, I read a report or article on networking margins, port prices, market share, SDN, bare metal, leaf/spine, open source, Linux, and so on.  When I speak with customers there is an equal amount of confusion, and in my view that confusion perpetuates the status quo in favor of the incumbents.

As a level set, when a potential client asks what Plexxi does, I tell them we build Ethernet switches based on merchant silicon; we use Linux as the operating system, a photonic interconnect fabric, and a distributed controller architecture.  Our switches speak Ethernet and IP just like any other Ethernet switch, but the controller and photonic fabric are what make them different.  Together, we believe, they result in a system with transformative scale, performance, and efficiency advantages over legacy network architectures.


Recently we provided a proposal to a potential client for their network.  As with Greg Ferro’s project, it was an all-in proposal that included cables, software, transceivers, switches, controllers, accessories, support costs, and the fabric interconnect ports.  There were two design options based on the size of the interconnect fabric, which most people refer to as the spine or core.  I have condensed some of the details for ease of reading, but I think this is a transparent description of the proposal in terms of (i) the cost of building a Plexxi network and (ii) how that cost changes with the choice between two different fabric options, much like the comparison in Greg’s post.  Here is the summary table of the network design options:

# 10G Client Ports   $ Per 10G Client Port   Fabric Size   OSR   Total 1 Year Cost
288                  $1,223                  2.88 Tbps     1:1   $384,950
1152                 $1,223                  11.52 Tbps    1:1   $1,539,800
2304                 $1,223                  23.04 Tbps    1:1   $3,079,600
432                  $517                    1.44 Tbps     3:1   $243,160
1728                 $517                    5.76 Tbps     3:1   $972,640
3456                 $517                    11.52 Tbps    3:1   $1,945,280
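As a quick sanity check on the table, the fabric-size column follows directly from the port count and the oversubscription ratio (OSR): fabric capacity is the number of 10G client ports times 10 Gbps, divided by the OSR. The little formula below is my reading of the table, not an official Plexxi sizing tool:

```python
def fabric_tbps(client_ports: int, osr: int) -> float:
    """Fabric capacity in Tbps for 10G client ports at a given oversubscription ratio."""
    return client_ports * 10 / osr / 1000  # 10 Gbps per port, Gbps -> Tbps

# Every row of the proposal table satisfies the relationship.
for ports, osr, expected_tbps in [(288, 1, 2.88), (1152, 1, 11.52), (2304, 1, 23.04),
                                  (432, 3, 1.44), (1728, 3, 5.76), (3456, 3, 11.52)]:
    assert abs(fabric_tbps(ports, osr) - expected_tbps) < 1e-9
```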

In a Plexxi network, we use a controller architecture.  Our controller computes efficient photonic forwarding topologies.  This type of architecture, often referred to as SDN, provides a number of benefits, but it begins with a fluid pool of capacity within the fabric.  The capacity in our photonic fabric can be allocated, reassigned, and reconfigured.  Without going into a long technical description of how our controller operates, an important concept to understand is that we use 100% of the fabric; we do not implement spanning tree or block links.  We compute forwarding topologies using multi-commodity flow and graph theory math; that is one of the jobs of Plexxi Control.  A Plexxi fabric can be used inside your data center, and because the fabric is photonic it can be extended to campus and metro area designs.  Here are a few points regarding our controller and fabric architecture:

  • The photonic fabric is managed as a pool of capacity
  • Applications and workloads that are important can be assigned fabric capacity
  • Capacities can change and evolve, and forwarding topologies can be diurnal
  • Controller-based fabrics are deterministic, as opposed to distributed-state fabrics, which take time to converge on state and may or may not arrive at an optimal design
  • Controller-based fabrics can centrally compute optimal paths and provide fast convergence by pre-computing failure recovery states
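The last bullet, pre-computing failure recovery states, can be illustrated with a toy example: for each link on a primary path, the controller can compute in advance the path it will use if that link fails. This is a deliberately simplified sketch using plain breadth-first search on a made-up four-switch topology; Plexxi Control's actual computation (multi-commodity flow based, as described above) is far more involved.

```python
from collections import deque

def shortest_path(adj, src, dst, banned=frozenset()):
    """BFS shortest path from src to dst, skipping any link in `banned`."""
    prev = {src: None}
    q = deque([src])
    while q:
        u = q.popleft()
        if u == dst:                      # reconstruct path by walking predecessors
            path = []
            while u is not None:
                path.append(u)
                u = prev[u]
            return path[::-1]
        for v in adj[u]:
            if v not in prev and frozenset((u, v)) not in banned:
                prev[v] = u
                q.append(v)
    return None

def precompute_recovery(adj, src, dst):
    """Primary path plus a pre-computed backup path for each link failure along it."""
    primary = shortest_path(adj, src, dst)
    backups = {}
    for u, v in zip(primary, primary[1:]):
        backups[(u, v)] = shortest_path(adj, src, dst, banned={frozenset((u, v))})
    return primary, backups

# Hypothetical 4-switch fabric with path diversity (illustrative only).
fabric = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"], "D": ["B", "C"]}
primary, backups = precompute_recovery(fabric, "A", "D")
```

Because the backup paths are computed before any failure occurs, recovery is a lookup rather than a fresh convergence event, which is the point the bullet makes about fast convergence.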

The use of a controller with a photonic fabric provides a number of scaling benefits.  The most obvious is a linear cost curve in terms of price per client port.  The following chart shows the 10G client port cost when scaling from 400 to 3,400 ports.
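That linearity is visible in the proposal table itself: within each fabric option the per-port price is constant, so total cost scales in exact proportion to port count. A quick check of the numbers quoted above:

```python
# Proposal rows as (client ports, total 1-year cost), taken from the table above.
rows_1to1 = [(288, 384_950), (1152, 1_539_800), (2304, 3_079_600)]
rows_3to1 = [(432, 243_160), (1728, 972_640), (3456, 1_945_280)]

for rows in (rows_1to1, rows_3to1):
    base_ports, base_cost = rows[0]
    for ports, cost in rows[1:]:
        # Linear scaling: cost/ports is identical in every row of the option,
        # i.e. 4x the ports costs exactly 4x as much, 8x costs exactly 8x.
        assert cost * base_ports == base_cost * ports
```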

[Chart: 10G client port cost, scaling from 400 to 3,400 ports]

The linearity of the Plexxi architecture in terms of client port cost can also be seen in power and cooling.  In a Plexxi architecture, network performance benefits because the controller tries to keep packets in the photonic portion of the network as much as possible, limiting silicon switch hops and the latency they incur.  Uniform latency and uniform power consumption per client port are benefits of the Plexxi architecture:
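The hop argument can be made concrete with some back-of-the-envelope arithmetic. The per-hop figure below is a hypothetical placeholder I chose for illustration, not a Plexxi or ASIC specification; the point is only that latency grows with each silicon switch traversed, so fewer hops means lower and more uniform latency.

```python
SILICON_HOP_US = 1.5  # assumed latency per silicon switch hop (hypothetical value)

def path_latency_us(silicon_hops: int, per_hop_us: float = SILICON_HOP_US) -> float:
    """Total switching latency for a path crossing the given number of silicon hops."""
    return silicon_hops * per_hop_us

# A path kept in the photonic fabric crosses one silicon hop; a
# leaf -> spine -> leaf path crosses three.
single_tier = path_latency_us(1)
leaf_spine = path_latency_us(3)
assert single_tier < leaf_spine
```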

[Chart: latency and power consumption per client port]

A question we often get is whether a Plexxi network requires a greenfield deployment or can be deployed in a brownfield.  The answer is that there are no greenfields.  Plexxi networks have been deployed in a number of variations.  We have had clients deploy Plexxi in the spine while leaving the legacy ToR and server connections in place.  We have had clients deploy Plexxi as a replacement for their leaf/spine network, collapsing a two-tier or three-tier network to a single tier.  We have had clients deploy Plexxi between data centers, providing a single-hop, load-balanced fabric between sites.  To date, most Plexxi customers connect our switches to legacy routers and switches using 10G or 40G ports.  We have a handful of customers who have extended the Plexxi fabric via DWDM connections to legacy optical transport platforms.

Another question I am often asked is whether our photonic fabric is proprietary.  The way to think about our photonic fabric is to compare it to the fabric modules found in traditional core and spine switches.  What we have done at Plexxi is take the backplane capacity found in the spine/core switches of multi-tier networks and distribute that capacity into each switch.  When you add port capacity, you add fabric capacity.  Five years ago this type of network design was not possible; only with the advent of the modern controller architecture, coupled with low-cost, multi-path photonic interconnects, has it become possible.

The design objective of a Plexxi network is to manage the network as a resource pool that can be correlated with the needs of compute and storage.  We believe that networking is entering the era of plenty, and that networks built with rich path diversity are the building blocks of the new networks.  We believe this because that has been the direction of compute and storage: both have entered the era of plenty, and it is time for the network to follow.  The era of plenty for networking will be built on a controller architecture because a controller architecture, combined with photonics, merchant silicon, and Linux, is the best means to deliver the following benefits:

  • Simplicity: Single tier photonic network
  • High Utilization: Load balanced L2 fabric
  • Controller Architecture: Unified view of network fabric
  • Uniform Low Latency: Direct connectivity
  • Faster Failure Handling: Pre-computed forwarding topologies that converge rapidly to a target optimum
  • Elastic Network Capacity: Large-scale computation and path optimization through Controller enables fluidity of network capacity
  • Reduced Cabling: Simplified network element deployment and insertion

/wrk

[Today's fun fact: The average American/Canadian drinks about 600 sodas a year. Of course, the American version is 72 ounces compared to Canada's 12 ounces.]

The post Building a Plexxi Network appeared first on Plexxi.


More Stories By Bill Koss

Bill Koss is VP of Sales at Plexxi. Sales is about having a keen understanding of customer needs and matching them to appropriate solutions so that everyone benefits. The best salespeople are fluent in both business and technology, translating product into value. As Vice President of Sales, Bill has acquired these skills over a 20-year career in networking and information technology.

Plexxi is Bill’s fifth venture-backed startup, having successfully exited CrossComm, SDL, Sonoma Systems and Internet Photonics. Notably, Ciena acquired Internet Photonics in 2004 to include the CN4200 platform, which became a $1B+ product line. While with Ciena, Bill held a variety of roles including VP of North American Sales and VP of Global Partners. During Bill’s tenure at Ciena, annual revenue grew from $290M to $700M.