
Big Data Needs a Better Network

Earlier this week I had some interesting conversations with @davehusak. The conversation started early in the day with a discussion of overlay networks and of which network functions are performed where, and in what context; later in the afternoon it moved to networking solutions (and specifically Plexxi solutions) for big data applications.

It’s easy to jump to Hadoop or similarly structured cluster computing applications (Spark, Storm, or a long list of others) as the definition of a big data application. For all its simplicity in how it distributes work, Hadoop is a fairly tough network problem to solve if you want to do anything more than “throw bandwidth at the problem”. And even when you do throw bandwidth at the problem, the extreme burstiness of the traffic will still significantly drag down the performance of the overall solution. For many, CPU cycles to reduce the data are not the biggest challenge; the storage and movement of data throughout a big data cluster is the biggest pain point. Intel has done a fine job providing compute firepower that far outpaces the evolution of network capacity.

The network plays a significant role in several stages of a Hadoop solution cycle. It starts with chopping the to-be-analyzed data into chunks and distributing them across the datanodes. Hadoop has a notion of a rack, and it applies some basic intelligence when placing data and the jobs that work on that data. By default the data will be replicated three times, across at least two racks if racks have been defined. The data to be distributed easily runs into the hundreds of gigabytes or even terabytes, so triple that amount is moved through the Hadoop cluster to the datanodes.
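
Hadoop learns that rack layout from an operator-supplied topology script (wired in through the net.topology.script.file.name property in recent releases): HDFS hands the script a list of hostnames or IP addresses and expects one rack path per host on stdout. A minimal sketch in Python, with the host-to-rack table invented purely for illustration:

```python
#!/usr/bin/env python
# Minimal Hadoop topology script: print one rack path per host argument.
# The host-to-rack mapping below is invented for illustration.
import sys

RACKS = {
    "10.0.1.11": "/rack1",
    "10.0.1.12": "/rack1",
    "10.0.2.21": "/rack2",
    "10.0.2.22": "/rack2",
}

for host in sys.argv[1:]:
    print(RACKS.get(host, "/default-rack"))  # fallback for unknown hosts
```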

Once the data is distributed, the actual map jobs are launched against it; these are the tasks that take the data and perform a first-pass mapping into (in its most basic form) <key, value> tuples. Here too there is an attempt to have jobs work on local data, where local can be defined as local to the server that holds that chunk of data or local to the rack, in an attempt to avoid as much cross-rack communication as possible. This is based on the assumption that cross-rack links are much more constrained and aggregated, and therefore more prone to congestion and packet loss.
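
To make that first-pass mapping concrete, here is what a mapper looks like in Hadoop Streaming form, where any program that reads records from stdin and writes tab-separated <key, value> lines to stdout can act as the map function. A minimal word-count-style sketch:

```python
#!/usr/bin/env python
# Streaming mapper: turn raw input lines into <key, value> tuples,
# emitted as tab-separated "key<TAB>value" lines on stdout.
import sys

for line in sys.stdin:
    for word in line.split():
        print("%s\t%d" % (word, 1))  # key = word, value = a count of 1
```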

Once the mapper jobs complete their task, the results of the mapping exercise are sent to reducers. Reducers take the <key, value> information and essentially tally the results. This transfer is the most taxing part of a Hadoop cycle on the network. Since most Hadoop mapping jobs run the same function on similarly sized datasets, the first set of mappers will all complete at about the same time and will all start sending their results to the same set of reducers, creating 1) a lot of traffic and 2) a lot of traffic to the same set of destinations. Depending on the amount of data, mappers and compute nodes, this cycle repeats (the next set of mapping jobs is fired off), and at the end of each cycle a very significant spike in network traffic appears. At the very end, all results are brought together for one last spike in traffic. Each one of these network events is a source of significant congestion.
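
The matching reducer, again in streaming form, leans on the fact that the shuffle delivers its input sorted by key, so tallying is just summing consecutive runs of the same key. A minimal sketch:

```python
#!/usr/bin/env python
# Streaming reducer: input arrives sorted by key, so emit a running
# total for each key as soon as the key changes.
import sys

current_key, total = None, 0
for line in sys.stdin:
    key, value = line.rstrip("\n").split("\t", 1)
    if key != current_key:
        if current_key is not None:
            print("%s\t%d" % (current_key, total))
        current_key, total = key, 0
    total += int(value)

if current_key is not None:
    print("%s\t%d" % (current_key, total))  # flush the last key
```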

Many variables contribute to the overall performance of the Hadoop solution. What is the relationship between the chunks of data and the number of servers and jobs? How many reducers are used? Where are the reducers in relation to the mappers? Is the map function compute heavy or I/O heavy? How aggressive is the speculative scheduling that allows the same data to be worked on by multiple mappers?
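
Most of these knobs correspond directly to Hadoop configuration properties. A sketch of the usual suspects, using the standard Hadoop 2.x property names but with values invented for illustration:

```python
# Property names are standard Hadoop 2.x knobs; the values are illustrative only.
tuning = {
    "dfs.blocksize": 134217728,           # chunk size (128 MB): how the input is split
    "dfs.replication": 3,                 # copies pushed across the network per block
    "mapreduce.job.reduces": 16,          # how many reducers the shuffle fans in to
    "mapreduce.map.speculative": True,    # aggressiveness of speculative execution
    "mapreduce.reduce.speculative": False,
}
```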

With that many variables that can be tuned, and with so many of them differing from one analysis to the next, it is hard to imagine that a single network design or implementation provides the best supporting infrastructure. The assumptions Hadoop makes when placing data and jobs can easily be altered. The basic concept of a rack can easily be expanded into a multi-layer locality definition. With the right tools in the network, the definition of a rack, or even the locality and closeness of nodes in a cluster or virtual cluster, can be adjusted according to the analysis to be completed.
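
Stock Hadoop can already express something richer than a flat rack: the topology script is free to return multi-level paths, and node-to-node distance is computed over the resulting tree. A hypothetical extension of the earlier script, with the pod/rack layout again invented for illustration:

```python
#!/usr/bin/env python
# Topology script returning multi-level locality paths instead of flat racks.
# Each path component is a level in the topology tree, so /pod1/rack1 is
# closer to /pod1/rack2 than it is to /pod2/rack3.
import sys

LOCALITY = {
    "10.0.1.11": "/pod1/rack1",
    "10.0.1.12": "/pod1/rack2",
    "10.0.2.21": "/pod2/rack3",
}

for host in sys.argv[1:]:
    print(LOCALITY.get(host, "/default-pod/default-rack"))
```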

We tend to give our applications variables to tune their performance based on what the network provides. It is time the network adjusted itself based on the application’s needs. In a clustered application like Hadoop, there is a lot of knowledge and even some predictability of network traffic. Wouldn’t that make for a great opportunity to infuse the network with some of that knowledge and have it morph itself to provide the best possible service? And Hadoop is not unique: cluster compute frameworks almost all carefully track the placement of data and compute jobs, which makes them all great candidates to share some of that information with a smart network.

There are far simpler big data needs and applications that can and should be supported by flexible networks. Storage networks are still often kept separate from data networks, for performance reasons and for fear of interference. If you could actually give the various types of data logically or even physically different paths within the same network, would you still build a separate one?


[Today's fun fact: An MLB baseball lasts on average 7 pitches. A Google search asking how many baseballs are used in a single MLB season returns several pages' worth of different answers. They must all be correct, it's on the Internet after all.]

The post Big Data Needs a Better Network appeared first on Plexxi.


More Stories By Marten Terpstra

Marten Terpstra is a Product Management Director at Plexxi Inc. Marten has extensive knowledge of the architecture, design, deployment and management of enterprise and carrier networks.
