Launching 2016 in style with an exploration of Yahoo’s successful scaling of aggregate computational queries using the DataSketches library, and Apache Spark’s release of version 1.6
First, the team at Sonra would like to wish you and yours every success in 2016. As the arrow of time pushes us forward, our Big Data industry is forging ahead in a cycle of continuous improvement that never fails to impress. This week’s review covers Yahoo’s solution for scalable computational queries and Apache Spark’s release of version 1.6.
Yahoo have shared their experiences with data sketching using DataSketches, an open source Java (JDK 7/8) library that applies sketching algorithms to very large data streams, typically with a confidence level of around 95%. The trade-off between accuracy and sketch size is configurable, which matters for large data streams where partitioning and parallelisation are a real issue; these often present as business limitations and drains on processing resources. Because sketches are mergeable and additive, Yahoo have used the library to reduce an otherwise very complicated set of processes from thousands of processing steps to a few dozen. Processing times of days are now down to hours with fewer resources, and business-level queries now return in near real time (around a 15-second lag). Also of note, adapters exist for integration with Hadoop Hive, Hadoop Pig, and Druid. The notable benefits of DataSketches in Yahoo’s use case are clearly the successful data scaling and the analysis it makes possible.
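To see why mergeability matters so much, here is a minimal sketch of the idea in plain Python. This is not the DataSketches library itself, just a toy k-minimum-values (KMV) distinct-count sketch: each partition of a stream keeps only its k smallest hash values, and merging two sketches is itself just a k-smallest selection. That is what lets a cluster sketch partitions independently and combine the results in a handful of steps.

```python
import hashlib

class KMVSketch:
    """Toy k-minimum-values sketch for approximate distinct counting.

    Illustrative only (NOT the DataSketches library): it keeps the k
    smallest hash values seen, which makes two sketches mergeable.
    """
    def __init__(self, k=256):
        self.k = k
        self.mins = set()  # the k smallest hash values seen so far

    def _hash(self, item):
        # Map an item to a stable pseudo-random float in [0, 1).
        h = hashlib.sha1(str(item).encode()).hexdigest()
        return int(h[:15], 16) / 16**15

    def update(self, item):
        self.mins.add(self._hash(item))
        if len(self.mins) > self.k:
            self.mins.remove(max(self.mins))  # keep only the k smallest

    def merge(self, other):
        # Merging = taking the k smallest of the union: order-independent
        # and associative, i.e. the "mergeable" property.
        merged = KMVSketch(self.k)
        merged.mins = set(sorted(self.mins | other.mins)[:self.k])
        return merged

    def estimate(self):
        if len(self.mins) < self.k:
            return len(self.mins)  # exact for small streams
        return (self.k - 1) / max(self.mins)  # classic KMV estimator

# Two partitions sketched independently, then merged once at the end.
a, b = KMVSketch(), KMVSketch()
for i in range(50_000):
    a.update(i)
for i in range(25_000, 75_000):
    b.update(i)
est = a.merge(b).estimate()  # true distinct count is 75,000
```

With k = 256 the relative error is a few percent; the real library offers far more refined sketches (Theta, HLL, quantiles) with configurable error bounds.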
Apache Spark has just released version 1.6, with big improvements in performance, APIs, and data science functionality. With Parquet clearly the industry’s favourite file format (91% of users surveyed), this release ships a new Parquet reader claiming roughly 50% faster scans, going from 2.9 million to 4.5 million rows per second on a five-column scan. Performance also benefits from automated memory management: the new memory manager dynamically tunes the sizes of the memory regions at runtime. In streaming data management, the new mapWithState API tracks per-key state with cost proportional to the number of updates in each batch rather than the total number of records, a real processing gain for the user. The new Dataset API improves compile-time type safety by extending the DataFrame API, so static typing and user-defined functions in Java and Scala are supported. Finally, data science functionality has been extended by popular demand to include machine learning pipeline persistence, so models and training state from prior jobs are reusable in future jobs. New algorithms in this release include univariate and bivariate statistics on DataFrames, survival analysis, bisecting K-means clustering, and more. It’s a substantial release with key improvements driven by the open source community, whose contributor numbers have risen from 500 in 2014 to 1,000 in 2015. Great stuff indeed.
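The mapWithState idea is worth a small illustration. The sketch below is plain Python, not the Spark API: it keeps running per-key state and, on each micro-batch, touches only the keys that actually appear in that batch, instead of re-processing every record ever seen. The function and variable names here are hypothetical stand-ins.

```python
# Conceptual stand-in for Spark 1.6's mapWithState (not the Spark API):
# per-key state is updated only for keys present in the current batch,
# so the cost of a batch scales with its updates, not with total history.

def map_with_state(state, batch, update):
    """Apply update(old_value, new_value) for each (key, value) in batch."""
    for key, value in batch:
        state[key] = update(state.get(key), value)
    return state

state = {}
batches = [
    [("clicks", 3), ("views", 10)],
    [("clicks", 2)],                 # only "clicks" is touched this batch
]
for batch in batches:
    map_with_state(state, batch, lambda old, new: (old or 0) + new)

# state == {"clicks": 5, "views": 10}
```

In Spark itself the state lives partitioned across the cluster with checkpointing and timeouts, but the cost model is the same: work per batch is proportional to the batch, not to the accumulated state.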
As the first week of 2016 ends, we have a real sense of excitement when we see the massive improvements in our industry which are driven by the open source community. It’s amazing what collaboration brings and long may it continue!
We are a Big Data company based in Ireland. We are experts in data lake implementations, clickstream analytics, real-time analytics, and data warehousing on Hadoop and Spark. We also run the Hadoop User Group (HUG) Ireland. We can help with your Big Data implementation. Get in touch today; we would love to hear from you!