30% of all Enterprises will use intermediaries for big data by 2017
Another week has passed and our big data community has been busy around the world. Some interesting movements have arisen in our industry, with Doug Laney on Forbes making three predictions on the advancement of big data to 2020, including the rise of third-party big data contractors helping industry unleash the value of its data. Carol McDonald of MapR gives an eloquent overview of Apache Drill, and to wrap it up, Katherine Noyes covers SAP's adoption this week of Apache Hadoop into its Hana Vora big data platform.
Doug Laney has written in Forbes about some interesting predictions for our big data community over the coming few years and, drawing on Gartner’s insights, predicts an overall advancement of big data adoption around the world. The business intelligence impact is not lost on the reader as he predicts a rapid escalation of digital engagement by consumers. He predicts that by 2020, “information will be used to reinvent, digitalize or eliminate 80% of business processes and products from a decade earlier.” A bold prediction indeed, yet considering the culture of centralisation and automation sweeping the business world, such predictions may well be heralded in hindsight as great calls from Gartner. Doug goes on to predict that by 2017, 30% of all big data access will be done through intermediaries, which makes sense given the market of today. Sonra Intelligence Ltd would certainly be pleased to push that figure past 30% through its services in big data ingestion and consulting. Finally, Doug predicts that by 2017, “more than 20% of customer-facing analytic deployments will provide product tracking information leveraging the IoT”. Given the tracking technology available today, it’s not hard to see that prediction coming true.
Apache Drill is a great tool to have in your distributed framework, adding a smart layer to the nodes in any cluster. Carol McDonald, HBase/Hadoop instructor with MapR, provided a wonderful overview of how Apache Drill works in a distributed framework. She gives a nice background on Google’s MapReduce paper from 2004 and how open source HDFS uses data locality in queries and processing under the traditional MapReduce model. She then goes into Apache Drill’s features and how it makes a good alternative for distributed SQL queries: every node runs a Drillbit daemon, and Drill has its own SQL execution engine rather than relying on others (MapReduce, etc.). Carol then shows how versatile Drill is, with smart functionality on every node increasing performance and processing at scale: in large clusters, nodes pass results up through sub-clusters, preserving data locality, and then to the foreman, which returns the result to the client. In a distributed SQL query, for example, your distributed leaf fragments (nodes) return data fragments to intermediate fragments (sub-clusters); after some processing at the intermediate level, results are returned to the root fragment (foreman) for final processing and return to the client. This is a key feature used in parallelisation.
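To make the leaf/intermediate/root flow concrete, here is a minimal Python sketch of that scatter-gather pattern. This is an illustration of the idea, not Drill's actual code: the function names (`leaf_fragment`, `intermediate_fragment`, `root_fragment`) and the sample "count by country" query are our own assumptions for the example.

```python
# Hypothetical sketch of Drill-style scatter-gather aggregation:
# leaf fragments compute partial results over their node's local data,
# intermediate fragments merge results within a sub-cluster, and the
# root fragment (the foreman) performs the final merge for the client.
from collections import Counter

def leaf_fragment(rows):
    """Partial aggregation over one node's local data (data locality)."""
    return Counter(row["country"] for row in rows)

def intermediate_fragment(partials):
    """Merge partial counts coming up from one sub-cluster."""
    merged = Counter()
    for p in partials:
        merged.update(p)
    return merged

def root_fragment(intermediates):
    """Foreman: final merge, whose result is returned to the client."""
    return intermediate_fragment(intermediates)

# Three nodes' local data, split across two sub-clusters.
node_a = [{"country": "IE"}, {"country": "DE"}]
node_b = [{"country": "IE"}]
node_c = [{"country": "FR"}, {"country": "IE"}]

sub1 = intermediate_fragment([leaf_fragment(node_a), leaf_fragment(node_b)])
sub2 = intermediate_fragment([leaf_fragment(node_c)])
result = root_fragment([sub1, sub2])
print(result["IE"])  # 3
```

Because each leaf only touches its own node's rows, the heavy scanning stays local and only small partial results travel up the tree, which is exactly what makes this pattern parallelise well.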
She then talks about a key performance feature of Drill: its ability to run distributed columnar SQL queries, a notable performance boost for large distributed queries in particular. Instead of executing a query that scans whole rows in search of the columns it needs, Drill organises tabular data for columnar access rather than the traditional row-based approach. Made possible by in-memory query execution and good design, this selective data access reduces latency and increases processing performance, most noticeably on large-scale distributed query jobs!
Apache Hadoop adoption by SAP has featured in the news this week. Katherine Noyes from IDG News Service covered SAP’s announcement during the week of its adoption of Hadoop for its in-memory big data platform Hana Vora. In short, SAP’s platform taps into the versatile big data framework Apache Spark and runs big data analytics/OLAP initially from Hana (SAP’s existing big data analytics platform) and now also from Hadoop. Live (streaming) analytics are the key feature of this new offering from SAP, and with Apache Spark powering such functionality, it is not hard to see why SAP is announcing this multi-framework platform (Hana/Hadoop) for the enterprise. The new product uses multiple Apache projects and is usable in the data scientist’s language of choice, such as R, Python, Java, Scala, C and C++. It also has the advanced machine learning and data analytics tools you would expect in Hana and Hadoop, rolled into the enhanced Spark layer that connects Hana Vora to Hadoop with minimal data replication whilst maintaining a massively distributed framework. It’s heralded as an efficient and flexible platform capable of distributed data tiering through rule-based data lifecycle management (DLM). Its other key features are OLAP on Hadoop, parallelisation and, of course, open language support. A neat governance feature is the centralised “Hana Vora cockpit” for admins, which makes integrated administration possible across the frameworks from a central admin ID.
As we head into another weekend, the predictions, advancements and insights shared can only give one a positive view of the future for big data and its positive impact on us all! Long may it continue!
We are a Big Data company based in Ireland. We are experts in data lake implementations, clickstream analytics, real time analytics, and data warehousing on Hadoop. We can help with your Big Data implementation. Get in touch.