Apache Spark: A Key to Big Data Initiatives

As with other data processing technologies, Spark is not suitable for all types of workloads

Apache Spark continues to gain a lot of traction as companies launch or expand their big data initiatives. There is no doubt that it’s finding a place in corporate IT strategies.

The open-source cluster computing framework was developed in the AMPLab at the University of California at Berkeley in 2009 and became an incubated project of the Apache Software Foundation in 2013. By early 2014, Spark had become one of the foundation’s top-level projects, and today it is one of the most active projects managed by Apache.

Because Spark was optimized to run in-memory, it is capable of processing data much faster than alternative approaches such as MapReduce. As a result, Spark can provide much higher performance levels for certain types of applications. By enabling programs to load data into a cluster's memory and query it repeatedly, the framework is well suited to iterative workloads such as machine learning algorithms, which make many passes over the same data.
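
To make the in-memory point concrete, here is a minimal PySpark sketch of caching a dataset and querying it repeatedly. The file path and the status column are hypothetical placeholders for illustration, not details from the article.

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("cache-demo").getOrCreate()

    # Load a (hypothetical) Parquet dataset and pin it in cluster memory.
    events = spark.read.parquet("hdfs:///data/events.parquet")  # placeholder path
    events.cache()

    # The first action materializes the cache; later queries are served from
    # memory instead of re-reading from disk, which is why iterative jobs
    # such as machine learning training loops run so much faster.
    total = events.count()
    errors = events.filter(events.status == "error").count()  # assumed column

    spark.stop()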

As with other data processing technologies, Spark is not suitable for all types of workloads. But companies launching big data efforts can leverage the framework for a variety of projects, such as interactive queries across large data sets; the processing of streaming data from sensors, as with Internet of Things (IoT) applications; and machine learning tasks.
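
As an illustration of the streaming case, the hedged sketch below uses Structured Streaming's built-in rate source as a stand-in for a real sensor feed; a production IoT job would more likely read from a system such as Kafka, and the 30-second window is an illustrative assumption.

    from pyspark.sql import SparkSession
    from pyspark.sql.functions import window, avg

    spark = SparkSession.builder.appName("iot-stream-demo").getOrCreate()

    # The built-in rate source emits (timestamp, value) rows at a fixed pace,
    # standing in here for a stream of sensor readings.
    readings = spark.readStream.format("rate").option("rowsPerSecond", 10).load()

    # Average the readings over 30-second windows, as one might for sensors.
    averages = (readings
                .groupBy(window(readings.timestamp, "30 seconds"))
                .agg(avg("value").alias("avg_value")))

    # Print rolling results to the console; real jobs write to a durable sink.
    query = averages.writeStream.outputMode("complete").format("console").start()
    query.awaitTermination(60)  # run for about a minute, then return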

In addition, developers can use Spark to support other processing tasks, taking advantage of the open source framework's extensive set of developer libraries and application programming interfaces (APIs) and its comprehensive support for popular languages such as Java, Python, R and Scala.

Apache Spark has three key things going for it that IT organizations should keep in mind:

  1. The framework's relative simplicity. The APIs are designed specifically for interacting easily and rapidly with data at scale, and are structured in a way that enables application developers to use Spark right away.
  2. The framework is designed for speed, operating both in-memory and on disk. Spark’s performance can be even greater when supporting interactive queries of data stored in memory.
  3. Spark supports multiple programming languages, as mentioned above, and it includes native support for tight integration with leading storage solutions in the Hadoop ecosystem and beyond (see the sketch after this list).
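
As a sketch of that third point, the snippet below reads a file directly from HDFS and queries it through Spark SQL. The path, schema, and table name are assumptions made for illustration.

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("hdfs-demo").getOrCreate()

    # Spark's data source API reads HDFS paths natively, with no extra glue code.
    logs = spark.read.csv("hdfs:///warehouse/logs/*.csv",
                          header=True, inferSchema=True)  # placeholder path

    # The same DataFrame is then queryable via SQL, from any supported language.
    logs.createOrReplaceTempView("logs")
    spark.sql("SELECT COUNT(*) AS n FROM logs").show()

    spark.stop()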

Spark is proving to be well suited for a number of business use cases and is helping companies to transform their big data initiatives and deliver analytics much faster and with greater efficiency.

One company, a provider of cloud-based predictive analytics software specifically designed for the telecommunications industry, is using the full Spark stack as part of its Hadoop-based architecture on MapR. This has helped the company achieve horizontal scalability on commodity hardware and reduce storage and computing costs.

The new technology stack allows the software company to continuously innovate and deliver value to its telecommunications customers by offering predictive insights from the cloud. Today's telecommunications data arrives in higher volumes, at higher frequency, and in more complex structures, particularly as new types of devices generate data for IoT and mobile phones support a fast-growing number of apps. The company needs to use this data to generate predictive insights using data science and predictive analytics, and Spark helps make this possible.

Another business benefiting from Spark is a global pharmaceuticals manufacturer that relies on big data solutions for drug discovery processes. One of the company’s areas of drug research requires lots of interaction with diverse data from external organizations.

Combined Spark and Hadoop workflow and integration layers enable the company's researchers to leverage thousands of experiments other organizations have conducted, providing the pharmaceuticals company with a significant competitive advantage. The big data solutions the company uses allow it to integrate and analyze data so that it can speed up drug research.

These technologies are now being used for a variety of projects across the enterprise, including video analysis, proteomics, and metagenomics. Researchers can access data directly through a Spark API on a number of databases with schemas that are designed for their specific analytics needs.

And a third business use case for Spark comes from a service provider that delivers analytics services to various industries. The company deployed the Spark framework in conjunction with its Hadoop big data initiative, and is able to dramatically cut query times and improve the accuracy of analytics results. That has enabled the company to provide enhanced services to its customers.

Clearly, Apache Spark can provide a number of benefits to organizations looking to get the most value out of their information resources and the biggest returns on their big data investments. The framework provides the speed and efficiency improvements companies need to deliver on the promise of big data and analytics.

To further explore the advantages of Spark, see the free interactive eBook, Getting Started with Apache Spark: From Inception to Production, by James A. Scott.

More Stories By Jim Scott

Jim has held positions running Operations, Engineering, Architecture and QA teams in the Consumer Packaged Goods, Digital Advertising, Digital Mapping, Chemical and Pharmaceutical industries. Jim has built systems that handle more than 50 billion transactions per day and his work with high-throughput computing at Dow Chemical was a precursor to more standardized big data concepts like Hadoop.
