What networking can learn from CPUs

The rapid growth in compute demand is well understood. To keep up with accelerating requirements, CPUs have gone through a massive transformation over the years. The expansion from relatively low-capacity CPUs to what is available today has certainly been remarkable, enough to satisfy even Gordon Moore. But keeping up with demand was not a matter of simply making bigger and faster chips. To get more capacity, we actually went smaller.

As it turns out, there are practical limitations to just scaling things larger. To get more capacity out of individual CPUs, we went from large single cores to multi-core processors. This obviously required a change in applications to take advantage of multiple cores. The result is a distributed architecture and the proliferation of “scale out” as a buzzword in our industry.
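
As a toy illustration (not from the original post; the chunk size and totals are arbitrary), here is the kind of change a single-threaded program has to make: split the work into smaller blocks and coordinate them across a pool of cores.

```python
# Minimal sketch: scale a CPU-bound job out across cores instead of
# waiting for a bigger, faster core. Chunk size is an arbitrary choice.
from multiprocessing import Pool

def partial_sum(bounds):
    """Sum the half-open range [lo, hi) -- one small block of capacity."""
    lo, hi = bounds
    return sum(range(lo, hi))

if __name__ == "__main__":
    N = 10_000_000
    chunks = [(i, min(i + 1_000_000, N)) for i in range(0, N, 1_000_000)]

    with Pool() as pool:                 # one worker per core by default
        total = sum(pool.map(partial_sum, chunks))

    assert total == N * (N - 1) // 2     # same answer, all cores used
```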

From an application perspective, the trend continues. Applications that require performance continue to move to multi-tiered designs distributed across a number of VMs. This is true for massive web-scale applications like Facebook, but also for data-processing frameworks like MapReduce.
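
To make the MapReduce reference concrete, here is a stripped-down, framework-free sketch of the pattern (illustrative only, not how Facebook or Hadoop implement it): map a function over many small input splits, then reduce the partial results.

```python
# MapReduce in miniature: map over small input splits, then merge
# (reduce) the partial word counts. The splits are made-up sample text.
from collections import Counter
from functools import reduce

def map_count(split: str) -> Counter:
    return Counter(split.split())

def reduce_counts(a: Counter, b: Counter) -> Counter:
    a.update(b)          # merge partial counts
    return a

splits = ["to get bigger we get smaller",
          "to get more output move to smaller blocks"]
totals = reduce(reduce_counts, map(map_count, splits), Counter())
print(totals["to"])      # -> 3
```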

To get bigger, we get smaller

The technology trend is clear: to get more output, move to smaller blocks of capacity, and coordinate workloads across that capacity.

If this is true, then the future will be lots of small pools of resources that rely on the network for interconnectivity. As applications become more distributed, performance between these pools becomes even more critical. Even small amounts of pool-to-pool latency can aggregate into a significant impact, either through awkward failure conditions in asynchronous operations or through cumulative performance cost.
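
A back-of-the-envelope sketch makes the point; the per-hop latency and call count below are illustrative assumptions, not measurements.

```python
# How small pool-to-pool latencies compound across a distributed request.
# Both numbers are illustrative assumptions.
PER_HOP_US = 50          # assumed pool-to-pool network latency (microseconds)
DEPENDENT_CALLS = 200    # assumed sequential back-end calls per user request

added_ms = PER_HOP_US * DEPENDENT_CALLS / 1000
print(f"{added_ms:.0f} ms of pure network wait per request")   # -> 10 ms
```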

As interconnectivity takes a larger role, we should expect the discussion of commoditization of network resources to expand. Today, there is a strong argument around commoditizing the switch hardware (largely via merchant silicon) and the switch operating system (through players like Cumulus, Big Switch, and Pica8). But massive distribution will require both a commoditized interconnect and a commoditized orchestration platform.

On the latter, it would seem that OpenDaylight is poised to lead the charge. With an industry-backed open source solution, it will be difficult to justify premium control products, which should be sufficient in driving that aspect of the solution towards commodity. But that still leaves the interconnect piece unaccounted for.

Getting to a cheaper interconnect

There is probably a case to be made for leaf-spine architectures here, but if the number of servers continues to expand, there are some ugly economics at play. Scaling out in a leaf-spine architecture requires scaling up at the same time. As interconnect demands increase, the number of spine switches increases. You eventually get into spines of spines, which starts to look an awful lot like traditional three-tier architectures.

The sheer number of devices and cables drives the cost unfavorably. And when you consider the long-term operational costs tied to power, cooling, space, and management, it’s unclear where the budgetary breaking point is. Beyond the costs, the other issue is that every time a new layer is added, you add a couple more fabric switch hops. If application performance is based on both capacity and latency, then every added switch hop incurs a potentially heavy performance penalty.
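
A simplified model shows the shape of the problem. Assume identical R-port switches in a non-blocking two-tier design, with half of each leaf's ports facing servers and half facing spines; the numbers are illustrative, not vendor data.

```python
# Simplified leaf-spine arithmetic under the stated assumptions.
def two_tier_capacity(radix: int):
    servers_per_leaf = radix // 2   # downlinks to servers
    max_spines = radix // 2         # one uplink from each leaf to each spine
    max_leaves = radix              # bounded by ports on each spine
    max_servers = servers_per_leaf * max_leaves
    switch_count = max_leaves + max_spines
    return max_servers, switch_count

for radix in (32, 64):
    servers, switches = two_tier_capacity(radix)
    print(f"{radix}-port switches: up to {servers} servers, "
          f"{switches} switches, cross-fabric paths traverse 3 switches")

# Growing past this ceiling means a super-spine tier: cross-fabric
# paths now traverse 5 switches instead of 3, and the switch (and
# cable) count grows faster than the server count.
```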

At some point, you need to move away from multi-hop connectivity through the fabric.

Moving away from multi-hop fabrics

Instinctively, we already know this. There is already a tendency to rack gear up in close proximity to other gear to which it is tied. You might, for example, balance Hadoop loads across a number of servers that are in the same rack. Essentially, what we are doing in these cases is acknowledging that proximity matters, and we are statically designing for it.
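
That instinct can be written down as a placement preference. The helper below is hypothetical (real Hadoop rack awareness is configured through topology mappings, not this function); it simply encodes "prefer the same rack."

```python
# Hypothetical rack-aware placement: prefer a worker in the same rack
# as the data so traffic stays off the inter-rack fabric.
def pick_worker(data_rack: str, workers: dict) -> str:
    """workers maps worker name -> rack id."""
    same_rack = [w for w, rack in workers.items() if rack == data_rack]
    return same_rack[0] if same_rack else next(iter(workers))

workers = {"node-a": "rack1", "node-b": "rack1", "node-c": "rack2"}
print(pick_worker("rack1", workers))   # -> node-a: proximity wins
```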

But what happens when things aren’t static?

In a datacenter where applications are portable across servers, the network capacity cannot be statically planned. And as application requirements change (often dynamically as load changes), then the network capacity demands will also change. This requires an interconnect that is both high in capacity and dynamic.

This problem is slightly different from the compute problem. On the compute side, it was enough to free up resources (or create additional ones) and then move the application to the resource. In this case, the application is fixed, which means the capacity has to move to the application. When capacity is statically allocated, this poses a problem.
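
A toy traffic matrix shows why. All racks, flows, and numbers below are hypothetical; the point is that when a workload moves, its demand moves with it while statically provisioned links stay put.

```python
# Before migration: the heavy flow runs rack1 -> rack2.
demand_gbps = {("rack1", "rack2"): 8, ("rack1", "rack3"): 2}
static_capacity_gbps = {("rack1", "rack2"): 10, ("rack1", "rack3"): 10}

# The VM behind the heavy flow migrates from rack2 to rack3:
demand_gbps = {("rack1", "rack2"): 2, ("rack1", "rack3"): 8}

for link, cap in static_capacity_gbps.items():
    used = demand_gbps.get(link, 0)
    state = "hot" if used / cap > 0.7 else "stranded" if used / cap < 0.3 else "ok"
    print(f"{link}: {used}/{cap} Gbps ({state})")
# -> ('rack1', 'rack2'): 2/10 Gbps (stranded)
# -> ('rack1', 'rack3'): 8/10 Gbps (hot)
```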

The bottom line

The only solutions here are either to overprovision everything or to move toward a dynamic interconnect. The first runs counter to the lesson we learn from compute (make things smaller and more distributed); it gets out of the problem by paying for it. The question is whether that flies in the face of all the commoditization trends. What good is commoditizing something if the end solution requires buying a ton more of it? Cost declines would have to match capacity increases, which seems unlikely: capacity has no upper limit, while cost can only asymptotically approach some profit threshold.

If the trends in compute and storage hold true for networking, then the current trajectory of some networking solutions will need to change. Learning from the past is a great way to shape the future.

[Today’s fun fact: Lobster was one of the main entrees at the first Thanksgiving dinner. They also had Cheddar Bay Biscuits I think.]


More Stories By Michael Bushong

The best marketing efforts pair deep technology understanding with a highly approachable means of communicating. Plexxi's Vice President of Marketing Michael Bushong acquired these skills over 12 years at Juniper Networks, where he led product management, product strategy and product marketing organizations for Juniper's flagship operating system, Junos. Michael spent his last several years at Juniper leading its SDN efforts across both service provider and enterprise markets. Prior to Juniper, Michael spent time at database supplier Sybase, and at ASIC design tool companies Synopsys and Magma Design Automation. Michael's undergraduate work at the University of California, Berkeley in advanced fluid mechanics and heat transfer lends new meaning to the marketing phrase "This isn't rocket science."
