Apache Mesos (Europe) Conference Announcement, Martin Kleppmann on Distributed Unix Philosophy, and Advanced Algorithms: today's view into tomorrow's world!
As the week draws to a close, we are left with the feeling that the pace of understanding in our community, and the progress it brings, has not slowed one bit. Accordingly, we finish the week on a good note with some highlights to bring everybody up to date and ready for a restful weekend and the week to come.
MesosCon (Europe) is coming to Dublin on Thursday October 8th and Friday October 9th of this year. Supported by the Linux Foundation and organised by the Apache Mesos community, the event brings users, developers, and enthusiasts of this open source project together to catch up on the latest and greatest from Apache Mesos. You can follow the conference on Twitter @MesosCon and the project @AllThingsMesos. Further information is available at MesosCon (Europe).
Apache Kafka, Samza, and the Unix Philosophy of Distributed Data by Martin Kleppmann is a fascinating dive into Unix/Linux system commands and how the lessons of the Unix philosophy are as relevant today as they were back in the 60s and 70s. This meaty article is well worth reading, but make sure you pencil in 15-20 minutes to take it in over a cup of coffee; digesting its content may take a little longer than expected! Martin argues for the Unix philosophy, of which composability (i.e. create one program that does one thing well and attach it to another via pipes, e.g. "awk | sort" for chained execution) is the most striking element.
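To make the composability idea concrete, here is a minimal sketch in Python (our own illustration, not from the article): each stage is a generator that does one thing well, and stages are chained much like shell pipes. The pipeline below loosely mimics `awk '{print $1}' | sort | uniq -c` over a few log lines.

```python
def first_field(lines):
    """Like awk '{print $1}': emit the first whitespace-separated field."""
    for line in lines:
        fields = line.split()
        if fields:
            yield fields[0]

def sort_stream(items):
    """Like sort: yield items in sorted order (buffers, as sort must)."""
    yield from sorted(items)

def uniq_count(items):
    """Like uniq -c: collapse adjacent duplicates into (count, item) pairs."""
    prev, count = None, 0
    for item in items:
        if item == prev:
            count += 1
        else:
            if prev is not None:
                yield (count, prev)
            prev, count = item, 1
    if prev is not None:
        yield (count, prev)

# Compose the stages exactly as a shell pipeline would.
log_lines = ["alice GET /", "bob GET /about", "alice POST /login"]
pipeline = uniq_count(sort_stream(first_field(log_lines)))
print(list(pipeline))  # [(2, 'alice'), (1, 'bob')]
```

Each stage knows nothing about the others; swapping one out or inserting a new one does not disturb the rest, which is precisely the composability Martin is praising.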
The idea of being able to compose a solution for a project that does not have to be built around a database construct like an RDBMS is most striking indeed. In this fashion, Martin introduces Apache Kafka and Samza, which he argues adopt an approach similar to the Unix philosophy. Citing Unix's interfaces of stdin and stdout, he highlights the similarity to Kafka running a topic through a stream processor in an immutable fashion, achieving the same immutable outcome as if you had used a functional programming language. There are differences that make Kafka more fit for purpose in the 21st century, such as streaming messages rather than raw bytes, which delivers partly parsed data streams for processing: a cool performance feature indeed! Martin brings it all together with an example from LinkedIn showing the flexibility and stability of these Apache projects in today's operating environment.
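The immutable-log idea can be sketched in a few lines. This is purely illustrative (it is not the real Kafka API; the class and method names are our own): a topic is an append-only log, and independent consumers read from their own offsets without ever mutating what was written.

```python
class Topic:
    """Toy stand-in for a Kafka topic: an append-only, immutable log."""

    def __init__(self):
        self._log = []  # records are only ever appended, never updated or deleted

    def produce(self, record):
        """Append a record and return its offset in the log."""
        self._log.append(record)
        return len(self._log) - 1

    def consume(self, offset):
        """Read all records from a given offset; the log itself is untouched."""
        return list(self._log[offset:])

topic = Topic()
topic.produce({"user": "alice", "action": "login"})
topic.produce({"user": "bob", "action": "view"})

# Two consumers at different offsets each see a consistent, immutable history.
all_events = topic.consume(0)
latest = topic.consume(1)
print(len(all_events), len(latest))  # 2 1
```

Because the log never changes once written, replaying it always yields the same result, which is the same repeatability you get from a pure function in a functional language.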
So as we head into the weekend thinking about how to process a MapReduce job over very large data sets with Hadoop and Spark, there is a book worth looking up by Dr. Mahmoud Parsian called Data Algorithms: Recipes for Scaling Up with Hadoop and Spark. It takes the data science professional through the algorithms and coding needed to scale a distributed big data job over very large data sets across a distributed cluster using Hadoop and Spark.
Dr. Parsian covers the whole job, from design patterns, optimisation techniques, and data mining through to machine learning, along with the tools involved, including MapReduce, Hadoop, and Spark. His work is favourably reviewed on balance and shines a light on best practice for MapReduce jobs on large data sets.
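For readers new to the pattern the book applies at scale, here is a toy word count in Python with explicit map, shuffle, and reduce phases. The function names are our own; in a real Hadoop or Spark job the framework distributes these same phases across a cluster rather than running them in one process.

```python
from collections import defaultdict

def map_phase(documents):
    """Map: emit a (word, 1) pair for every word in every document."""
    for doc in documents:
        for word in doc.split():
            yield (word.lower(), 1)

def shuffle_phase(pairs):
    """Shuffle: group all emitted values by key, as the framework would."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reduce: sum the counts for each word."""
    return {word: sum(counts) for word, counts in groups.items()}

docs = ["big data big cluster", "data pipelines"]
counts = reduce_phase(shuffle_phase(map_phase(docs)))
print(counts["big"], counts["data"])  # 2 2
```

The appeal of the pattern is that the map and reduce functions are independent of how the data is partitioned, so the same logic scales from this toy example to terabytes spread across a cluster.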
If there wasn't enough bedside reading for the weekend, there certainly is now. As another week closes, it's a good time to take stock of this week's accomplishments, both personal and across the industry, noting our community's progress in reaching new levels of innovation, efficiency, and positive impact on the wider society that asks us to do what we love and do it well!
We are a Big Data company based in Ireland. We are experts in data lake implementations, clickstream analytics, real time analytics, and data warehousing on Hadoop. We can help with your Big Data implementation. Get in touch.