Blog Feed Post

NoHadoop
NoHadoop

NoHadoop means "not only Hadoop." Why?

According to the 2014 Big Data & Advanced Analytics Survey conducted by the market research firm Evans Data, only 16% of the more than 400 developers surveyed worldwide said that Hadoop batch processing was satisfactory in all of their use cases. Meanwhile, 71% said they need real-time complex event processing in their applications more than half the time, and 27% said they need it all the time.

Hadoop has evolved from just MapReduce and HDFS in the very beginning into a set of technologies, including Hive, HBase, Sqoop, Flume, Pig, Mahout, and more. However, Hadoop was originally designed for batch processing. It is based on the map and reduce programming model, which is a stretch for real-time transactions. Various efforts have been made to enhance Hadoop. For example, YARN decouples resource management from the execution engine, so that non-MapReduce workloads can be handled. Cloudera rewrote SQL execution on Hive to roll out Impala, and Hortonworks built a faster SQL interface in Stinger. MapR took a different route, revamping the file system and the HBase table structure for higher performance. Google also unveiled Cloud Dataflow a couple of months ago, positioning it as a successor to the MapReduce model behind Hadoop.
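To make the batch-oriented nature of that model concrete, here is a minimal sketch of the map-shuffle-reduce flow using a classic word count. This is a toy illustration in plain Python, not Hadoop's actual Java API: the function names (`map_phase`, `shuffle`, `reduce_phase`) are invented for clarity, and a real Hadoop job would distribute each phase across a cluster.

```python
from collections import defaultdict

def map_phase(lines):
    # Map: emit a (word, 1) pair for every word in every input line.
    for line in lines:
        for word in line.split():
            yield word.lower(), 1

def shuffle(pairs):
    # Shuffle: group all emitted values by key, as the framework
    # does between the map and reduce phases.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: collapse each key's list of values into one result.
    return {word: sum(counts) for word, counts in groups.items()}

lines = ["big data needs big tools", "data tools evolve"]
counts = reduce_phase(shuffle(map_phase(lines)))
print(counts)
```

Note that every phase must finish over the entire data set before the next begins, which is exactly why this model handles large batch jobs well but strains under real-time, event-at-a-time workloads.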

It becomes obvious that Hadoop alone is not enough for Big Data; it is not always the best tool for a particular business need. Hadoop is good for broad exploratory analysis of large data sets, but it often makes more sense to leverage an RDBMS for operational analysis of what was uncovered, since relational databases are better at storing processed, transformed, aggregated, and refined data. Semi-structured data is well served by document stores like MongoDB or CouchDB, and graph databases like Neo4j are the best fit for graph data.

A rule of thumb is to choose wisely: use the right technology for what needs to be done. Better yet, a hybrid architecture of Hadoop and NoHadoop components provides a best-of-breed combination to fulfill varying requirements.

For more information, please contact Tony Shan ([email protected]). ©Tony Shan. All rights reserved.


More Stories By Tony Shan

Tony Shan works as a senior consultant/advisor at a global applications and infrastructure solutions firm, helping clients realize the greatest value from their IT. Shan is a renowned thought leader and technology visionary with many years of field experience and guru-level expertise in cloud computing, Big Data, Hadoop, NoSQL, social, mobile, SOA, BI, technology strategy, IT roadmapping, systems design, architecture engineering, portfolio rationalization, product development, asset management, strategic planning, process standardization, and Web 2.0. He has directed the lifecycle R&D and buildout of large-scale, award-winning distributed systems on diverse platforms at Fortune 100 companies and in the public sector, including IBM, Bank of America, Wells Fargo, Cisco, Honeywell, Abbott, etc.

Shan is an inventive expert with a proven track record of influential innovations such as Cloud Engineering. He has authored dozens of top-notch technical papers on next-generation technologies and over ten books that have won multiple awards. He is a frequent keynote speaker and chair/panelist/advisor/judge/organizing committee member at prominent conferences and workshops, an editor and editorial advisory board member for IT research journals and books, and a founder of several user groups, forums, and centers of excellence (CoE).