War of the Hadoop SQL engines. And the winner is …?
You may have wondered why we have been quiet over the last couple of weeks. Well, we locked ourselves into the basement and worked through some research, a couple of projects, and PoCs on Hadoop, Big Data, and distributed processing frameworks in general. We also looked at clickstream data and web analytics solutions. Over the next couple of weeks we will update our website with our new offerings, products, and services. The article below summarises some of our research on using SQL on Hadoop.
I believe that for Hadoop to make inroads into the Enterprise Data Warehouse space, three items need to be in place: (1) sub-second response times for SQL queries (often referred to as interactive or real-time queries), with performance similar to existing MPP RDBMSs such as Teradata; (2) support for a rich SQL feature set; (3) support for Update and Delete DML operations. Currently, I don't see any of the existing solutions ticking all of these boxes. However, we are getting closer and closer. This post will shed some light on the current state of SQL on Hadoop and give my own recommendation on which of these solutions you should bet your house on.
Apache Hive

Initially developed by Facebook, Hive is the original SQL framework on Hadoop. The motivation for Hive was to provide an abstraction layer on top of MapReduce (M/R) that makes it easier for analysts and data scientists to query data on the Hadoop Distributed File System. Rather than writing hundreds of lines of Java code to answer relatively simple questions, users could use SQL, the natural choice of the data analyst. While this approach works well in a batch-oriented environment, it does not perform well for interactive, near-real-time workloads. The problem with the original M/R framework is that it works in stages: at the end of each stage the data is written to disk, only to be read back from disk in the next stage. In addition, the various stages cannot be parallelized. This is highly inefficient and was the rationale for the Apache Tez project. Like M/R, Tez is a Hive execution engine; it is developed by Hortonworks, with committers from Facebook, Microsoft, and Yahoo.
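To make the contrast concrete, here is a sketch of the kind of question that would require a hand-written Java MapReduce job but is a single statement in HiveQL. The table and column names are hypothetical, purely for illustration:

```sql
-- Hypothetical clickstream table: top pages for one day.
-- In raw MapReduce this would be a Java program with a mapper,
-- a reducer, and driver code; in Hive it is one query.
SELECT page_url, COUNT(*) AS hits
FROM clickstream
WHERE dt = '2014-06-01'
GROUP BY page_url
ORDER BY hits DESC
LIMIT 10;
```

Behind the scenes Hive compiles this into a chain of M/R jobs (one for the aggregation, another for the global sort), which is exactly where the disk round-trips described above come from.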
Hive on Apache Tez
Tez is part of the Stinger initiative, led by Hortonworks, to make Hive enterprise-ready and suitable for real-time SQL queries. The two main objectives of the initiative were to increase performance and to offer a rich set of SQL features such as analytic functions, query optimization, and standard data types such as TIMESTAMP. Tez is the underlying engine that creates more efficient execution plans than MapReduce; its design is based on research done by Microsoft on parallel and distributed computing. Both objectives were delivered as part of the recent Hive 0.13 release. The roadmap for release 0.14 includes DML functionality such as Updates and Inserts for lookup tables.
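A minimal sketch of what these deliverables look like from the user's side. `hive.execution.engine` is the actual Hive switch; the table definition is purely illustrative (the columnar ORC format shown was also delivered under the Stinger initiative, though it is not discussed further in this post):

```sql
-- Run queries on Tez instead of MapReduce (Hive 0.13+).
SET hive.execution.engine=tez;

-- Standard data types such as TIMESTAMP, stored in ORC.
CREATE TABLE events (
  user_id  BIGINT,
  event_ts TIMESTAMP
) STORED AS ORC;
```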
Hive on Spark
Recently, Cloudera, together with MapR, Intel, and Databricks, spearheaded an initiative to add Spark as a third Hive execution engine. Developers will then be able to choose between MapReduce, Tez, and Spark as the execution engine for Hive. Based on the design document, the three engines will be fully interchangeable and compatible. Cloudera see Spark as the next-generation distributed processing engine, with various advantages over the MapReduce paradigm, e.g. intermediate result sets can be cached in memory. Going forward, Spark will underpin many of the components in the Cloudera platform. The rationale for Hive on Spark, then, is to make Spark available to the vast number of Hive users and to establish Hive on the Spark framework. It will also allow users to run faster Hive queries without having to install Tez. Unlike Hortonworks, Cloudera don't see Hive on Spark (or Hive on Tez) as suitable for real-time SQL queries. Their flagship product for interactive SQL queries is Impala, while Databricks see Spark SQL as the tool of choice for real-time queries.
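If the proposal is implemented as described in the design document, Spark would simply become a third value of the existing per-session engine switch. A hedged sketch; the `spark` value is proposed, not shipping, as of this writing:

```sql
SET hive.execution.engine=mr;     -- classic MapReduce
SET hive.execution.engine=tez;    -- Tez (Hive 0.13+)
SET hive.execution.engine=spark;  -- proposed third engine, not yet released
```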
Impala

Impala is a massively parallel SQL query engine developed by Cloudera. It is inspired by Google's Dremel, the technology that also underpins Google BigQuery.
You can download Impala from the Cloudera website; alternatively, you can add the Cloudera repository and install it from there. Impala has various prerequisites in terms of the libraries and respective versions it supports, e.g. it relies (as of this post's writing) on Hive 0.12 and Hadoop 2.3. So if you want to install it on the latest Hortonworks distro, you are out of luck.
Presto

Facebook was, and still is, a heavy user of Hive. However, some of their workloads require low-latency response times in an interactive fashion. This was the rationale behind Presto.
Spark SQL and Shark

Spark SQL is the SQL engine on top of Spark and the successor to Shark, Databricks' earlier SQL-on-Spark effort. As of this writing it is in alpha release; as mentioned above, Databricks see it as their tool of choice for real-time queries.
Apache Drill

Similar to Impala, Apache Drill is another MPP SQL query engine inspired by the Google Dremel paper. Apache Drill is mainly backed by MapR and is currently in alpha release. Together with Spark SQL, it is, at the time of this writing, the least mature SQL solution on Hadoop. As outlined by MapR, Apache Drill will be available in Q2 2014.
BigSQL

Probably the most mature of the SQL-on-Hadoop engines is BigSQL from IBM. I recently had the opportunity to attend a session by one of their chief architects, and it looks quite impressive: it builds on the DB2 optimizer and, as a result, on decades of query-optimization experience. As this post mainly deals with open source engines I won't go into any more detail on BigSQL. However, if you want to find out more, have a look at this video.
This benchmark by Cloudera compares Impala to Shark (on disk and in memory), Hive on Tez (0.13), and Presto. Unsurprisingly, Cloudera's own Impala scores best here :-).
Conclusion and Recommendation
As of this writing, the most mature product with the richest feature set is Apache Hive (running on Tez). Crucially, it offers analytic functions, support for the widest set of file formats, and ACID support (full support targeted for release 0.14).
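As an example of the analytic functions mentioned here, which Hive has supported since release 0.11; the table and columns are hypothetical:

```sql
-- Rank pages by hits within each day using a window function.
SELECT dt, page_url, hits,
       RANK() OVER (PARTITION BY dt ORDER BY hits DESC) AS hit_rank
FROM daily_page_hits;
```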
So what to do? Right now I would run both batch-style queries (ETL) and interactive queries on Hive on Tez, as Hive offers the richest SQL feature set, especially analytic functions, and supports a wide set of file formats. If you don't get satisfactory performance for your real-time queries, you may want to look at some of the other engines. Impala is a mature solution. However, it lacks support for analytic functions, which are crucially important for data analysis tasks; they will be added in the next release of Impala, though. Another option is Presto, which already offers this feature set.

At this stage Spark SQL is only in alpha release and does not yet look very mature, especially in terms of SQL features. However, it is quite promising for in-database-style machine learning and predictive analytics (bring the processing to the data rather than the data to the processing). Apache Drill is also only in alpha release and may not be mature enough for your use case.

If I had to bet my house on which of the solutions will prevail, I would put it on a combination of Hive on Spark (for batch ETL) and Spark SQL (for interactive queries and in-database-style machine learning and predictive analytics) to cover all use cases and workloads. If Spark SQL matures further in terms of the SQL feature set (analytic functions etc.) and allows for ETL based on the SQL paradigm, I would exclusively put my money on it.