Apache Spark vs. Hadoop | @CloudExpo #BigData #DevOps #Microservices


If you’re running Big Data applications, you’re going to need some kind of distributed processing system. Hadoop is one of the best-known clustering systems, but can it process all of your data in a reasonable time frame? Apache Spark offers services that go beyond a standard MapReduce cluster.

A choice of job styles
MapReduce has become a standard, perhaps the standard, for distributed batch processing. While it’s a great system, it’s geared toward batch use, with jobs queued for later output. This can severely hamper your flexibility. What if you want to explore some of your data? If the job is going to take all night, forget about it.

With Apache Spark, you can act on your data in whatever way you want. Want to look for interesting tidbits in your data? You can perform some quick queries. Want to run something you know will take a long time? You can use a batch job. Want to process your data streams in real time? You can do that too.

One big advantage of modern programming languages is their interactive shells. Sure, Lisp had them back in the ‘60s, but it was a long time before that kind of interactive power became available to the average programmer. With Python and Scala you can try out your ideas in real time and develop algorithms iteratively, without the time-consuming write/compile/test/debug cycle.

RDDs
The key to Spark’s flexibility is the Resilient Distributed Dataset, or RDD. An RDD maintains a lineage of everything that’s been done to your data: it records the coarse-grained transformations, such as map or join, that derived it from other datasets. This means Spark can recover from failures by replaying those transformations (which is why they’re called Resilient Distributed Datasets).
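The lineage idea can be sketched in a few lines of plain Python. This is a toy illustration, not Spark’s actual implementation or API: each derived dataset records only the transformation and the parent that produced it, so a lost result can be rebuilt by replaying the chain from the source.

```python
# Toy sketch of RDD-style lineage (illustration only, not Spark's API).
class ToyRDD:
    def __init__(self, data=None, parent=None, transform=None):
        self.parent = parent        # dataset this one was derived from
        self.transform = transform  # function applied to the parent's rows
        self._data = data           # only source datasets hold raw data

    def map(self, fn):
        # Record the transformation instead of storing its output.
        return ToyRDD(parent=self, transform=lambda rows: [fn(r) for r in rows])

    def filter(self, pred):
        return ToyRDD(parent=self, transform=lambda rows: [r for r in rows if pred(r)])

    def compute(self):
        # Rebuild by replaying the chain of transformations from the source.
        if self._data is not None:
            return list(self._data)
        return self.transform(self.parent.compute())

source = ToyRDD(data=[1, 2, 3, 4, 5])
derived = source.map(lambda x: x * 10).filter(lambda x: x > 20)
print(derived.compute())  # [30, 40, 50]
```

If `derived`’s computed rows were lost, calling `compute()` again would simply replay `map` and `filter` from the source data, which is the essence of lineage-based recovery.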

RDDs also represent data in memory, which is a lot faster than always pulling data off of disks, even with SSDs making their way into data centers. Keeping entire datasets in memory might seem impractical, but Spark uses lazy evaluation, only carrying out transformations when you specifically ask for a result. This is why you can get query results so quickly even on very large datasets.

You might recognize the term “lazy evaluation” from functional programming languages like Haskell. RDDs are only materialized when an action produces some kind of output, for example, writing results to a text file. You can build up a complex query over your data, but it won’t actually be evaluated until you ask for the result, and it might only need to touch a specific subset of your data instead of plowing through the whole thing. Lazy evaluation lets you compose complex queries on large datasets without paying for work whose results you never use.
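Plain Python generators give a feel for this behavior (a stand-in for Spark’s deferred transformations, not Spark itself): defining the query runs nothing, and asking for only a few results touches only a few input rows.

```python
# Lazy evaluation with generators: work happens only when output is requested.
from itertools import islice

processed = []  # records which rows were actually computed

def expensive(row):
    processed.append(row)
    return row * row

rows = range(1, 1_000_000)  # a "large dataset"

# Define the query: square every even row. Nothing runs yet.
query = (expensive(r) for r in rows if r % 2 == 0)

# Evaluation happens only now, and only for the rows we ask for.
first_three = list(islice(query, 3))
print(first_three)     # [4, 16, 36]
print(len(processed))  # 3 -- only three rows out of a million were computed
```

The same principle lets Spark answer a query over a huge RDD quickly when the action only needs a small slice of the data.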

RDDs are also immutable, which gives you greater protection against data loss even though they live in memory. After an error, Spark can go back along an RDD’s lineage and recompute what was lost rather than relying on a disk-based checkpoint system.

Spark and Hadoop, Not as Different as You Think
Speaking of disks, you might be wondering whether Spark replaces a Hadoop cluster. That’s really a false dichotomy. Hadoop and Spark work
together. While Spark provides the processing, Hadoop handles the actual storage and resource management. After all, you can’t store data in your memory forever.

By running Spark and Hadoop in the same cluster, you can cut down on the overhead of maintaining separate clusters, and the combined cluster can scale out as your Big Data operations grow.

Who’s Using Spark?
When you have your Big Data cluster in place, you’ll be able to do lots of interesting things. Use cases range from genome sequencing analysis to digital advertising; one major credit card company uses Spark to match thousands of transactions at once for possible fraud detection. Cisco does something similar with a cloud-based security product, spotting possible hacking before it turns into a major data breach. Geneticists use it to match genes to new medicines.

Conclusion
Apache Spark builds on Hadoop and then goes beyond it by adding stream processing capabilities. The MapR distribution is the only one that offers everything you need right out of the box to enable real-time data processing.

For a more in-depth view into how Spark and Hadoop benefit from each other, read chapter four of the free interactive ebook: Getting Started with Apache Spark: From Inception to Production, by James A. Scott.

More Stories By Jim Scott

Jim has held positions running Operations, Engineering, Architecture and QA teams in the Consumer Packaged Goods, Digital Advertising, Digital Mapping, Chemical and Pharmaceutical industries. Jim has built systems that handle more than 50 billion transactions per day and his work with high-throughput computing at Dow Chemical was a precursor to more standardized big data concepts like Hadoop.
