Big Data News – To Data Lakes and Beyond!!

August 21, 2015

How data lakes become toxic, the future of computing, and a menu of approximation algorithms

As we reach the end of another week, much is still happening in our community despite it being “holiday season”. The sun never sets on great ideas and the sharing of knowledge, which is why we would like to shine a light on some fascinating articles we reviewed during the week.
Kumar Srivastava wrote a very insightful article for Forbes exploring data lakes and some of the more common mistakes that make a data lake toxic. He looks at what it takes to make a data lake “entity aware”, and suggests stitching database and entity identifiers into each record as an effective solution.
Kumar goes on to auditing and how data lakes by design are not audit friendly. Auditability of the data, in versioning and tracking terms, needs to be built into the data lake so its owners know what is there and how it is being used. This becomes a very challenging task indeed when data sizes grow towards the terabyte and exabyte end of the scale, where information needs structure and traceable auditability to keep the data lake from becoming a data swamp. Data “drop ‘n’ go” simply won’t do in a data lake that has proper auditing sewn into its modelling.
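To make the “stitching” idea concrete, here is a minimal Python sketch of what record-level lineage and audit fields could look like on ingest. The field names (_source_system, _entity_id and so on) are purely illustrative assumptions, not anything prescribed in Kumar’s article.

```python
import hashlib
import json
import uuid
from datetime import datetime, timezone

def stitch_lineage(record, source_system, entity_id, schema_version="1.0"):
    """Wrap a raw record with lineage and audit fields before it lands in
    the lake. All field names here are illustrative, not a standard."""
    payload = json.dumps(record, sort_keys=True)
    return {
        "_source_system": source_system,   # which database or feed it came from
        "_entity_id": entity_id,           # ties the record back to a business entity
        "_record_id": str(uuid.uuid4()),   # unique id for version tracking
        "_schema_version": schema_version, # supports later schema evolution
        "_ingested_at": datetime.now(timezone.utc).isoformat(),
        "_checksum": hashlib.sha256(payload.encode()).hexdigest(),  # consistency check
        "payload": record,
    }

# Example: a CRM row being dropped into the lake with its audit trail attached.
print(stitch_lineage({"name": "Acme Ltd", "country": "IE"},
                     source_system="crm_db", entity_id="customer:42"))
```

With something like this in place, every record in the lake carries enough information to answer “where did this come from, when, and which version of the model does it follow?” without trawling through directory names.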
Abstruse data can easily be the result if your data lake does not have effective indexing, metadata tags and audit logs sewn into its directory structures. Data lakes can fill up with records that cannot be easily retrieved, turning an asset into a liability. Kumar rightly points out that a big data strategy cannot stop at the data lake. It must persist into the data lake to ensure data is fully and correctly modelled in a manner that is traceable, auditable, integrated record by record and easily retrieved. Such modelling will of course insulate the data lake against schema evolution in the downstream processes it is exposed to; it is, in effect, a code-forward approach that keeps the lake backwards compatible. A fascinating read on how such attention to detail can prevent a creeping disaster from making the data lake toxic to all who dare tread its waters!
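As a rough illustration of the “code forward, backwards compatible” point, the sketch below shows a reader that tolerates older record versions by filling in defaults rather than failing. The version numbers, default values and stitched-record layout are assumptions carried over from the earlier sketch, not part of the article.

```python
# Newer reader code tolerates older record versions by filling in defaults.
DEFAULTS_BY_VERSION = {
    "1.0": {"country": "UNKNOWN"},  # v1.0 records never carried a country field
    "1.1": {},                      # v1.1 records are already complete
}

def read_record(stitched):
    """Normalise a stitched record (see the earlier sketch) to the latest schema."""
    version = stitched.get("_schema_version", "1.0")
    payload = dict(DEFAULTS_BY_VERSION.get(version, {}))
    payload.update(stitched["payload"])
    return payload

# Old and new records can now flow through the same downstream job.
old = {"_schema_version": "1.0", "payload": {"name": "Acme Ltd"}}
new = {"_schema_version": "1.1", "payload": {"name": "Beta GmbH", "country": "DE"}}
print(read_record(old))   # the missing country is filled with the default
print(read_record(new))   # newer records pass through unchanged
```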
So if you are a Star Wars fan with at least 5 seasons of Star Trek under your belt, you will love the concept of quantum computing! Warp speed… engage!!… or at least speed it up, Scotty!! After reading Karen Eng’s article on TED, I am left with the lasting impression that Big Data will become speedier and far more innovative in its use of storage when the multilayered benefits of quantum computing disrupt and revolutionise computing as we know it!
Think of it: the smallest unit in modern computing, the bit, will be outdated as computing gains a newer and smaller champion to rely on. The atom (an atomic “bit”) may become the basis for all computing, and in the quantum realm the magic will happen! That said, the team based in Switzerland is still some way from getting atoms to work well together beyond their initial success in 2009, when they used two atoms to create a current. Working at the Institute of Quantum Electronics, they are building the kind of binding that makes up a circuit as we know it, and have created a circuit about 10 atoms in size, which is a great achievement. However, the team estimate that about 1 million atoms will be needed to make up a functional quantum computer, which is a bit off yet! The reason for excitement lies in their proofs and ongoing experiments, which have the power to change the world. If software is eating the world, then quantum computing may one day eat software!
Thinking of the big data impacts, it’s not hard to see that exponentially faster processing per node is a huge win for Big Data. Will clusters and their associated overheads go away? I think the processing demand from ever-growing data production will still require the power of big data clusters. That said, I believe the kind of big data projects executed by data centres today will in future be run on a few “in-house” quantum machines, and will also be considered small in comparison to the “big data” jobs of the future! We should see big data jobs that are out of reach for single nodes today run on the laptops of the future, where the manyfold increase in storage and processing power allows our children’s children to run jobs that today need a cluster to execute and servers to provide storage for the datasets.
As we can see, with quantum computing the disruption would be massive, along with the resulting opportunities. For instance, imagine a RAID iteration that striped data at the atomic level, where (atom) bits were interchangeable in user-defined patterns but accessible only by a biometric key, creating bound data that is compatible at the atomic level with any system you put your thumbprint on or run your eye over. Awesome! There is a world of possibilities out there to delight the intellect if life as we know it goes quantum!
Our final review was of Ted Dunning’s blog article on his Spark Summit 2015 presentation, an action-packed 15 minutes of data science from MapR’s Chief Applications Architect. If you have seen Ted’s online videos, you will know his knowledgeable yet concise style, here condensed into a quarter of an hour for the Spark Summit.
He gave a high-level review of streaming algorithms, which for the most part are approximation algorithms. A highlight of his whistle-stop tour was the brilliant simplicity of hash indexing and the sketch algorithms that rely mainly on hashing to cover large volumes of data. The settable error tolerance of sketches delivers quick answers for the many problems that don’t require 100% accuracy, where estimates are good enough. It will be interesting to see whether non-volatile RAM changes the way approximation algorithms are used, but for now they are streaming data’s little-big friend!
Ted went on to talk about repeated minimums and log-log tables, a neat trick for estimating distinct counts from hashed values in the 0–1 range. If you hash N distinct values uniformly into that range, the expected minimum is 1/(N+1), so the smallest hash you have seen gives an estimate of N; identical values hash to the same place, so duplicates don’t distort the count, and repeating the trick across many hash functions (the log-log refinement) tightens the estimate. He then briefly explored count-min sketches for finding how popular items are, probing with the minimum of the counts. A count-min sketch keeps several rows of counters, each with its own hash function h_i, and increments k[h_i(a)] in every row when item a arrives. Colliding hash values mean other popular elements can inflate an individual counter and skew the result a little, but because collisions only ever add, taking min_i k[h_i(a)] across the rows returns the least-contaminated counter and thus the answer.
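To make the hashing tricks more concrete, here is a rough Python sketch of both ideas: a count-min sketch whose width and depth are the “settable” accuracy knobs, and a repeated-minimum distinct-count estimator built on the 1/(N+1) observation. This is an illustrative toy, not Ted’s or MapR’s implementation; the class and function names, parameter values and the use of Python’s built-in hash are all assumptions made for the example.

```python
import random

class CountMinSketch:
    """Approximate frequency counts. Width limits the over-count from any one
    collision; depth rows with independent hashes make it unlikely that every
    row is contaminated, and a query takes the minimum across rows."""
    def __init__(self, width=2048, depth=5, seed=42):
        rng = random.Random(seed)
        self.width = width
        self.seeds = [rng.getrandbits(64) for _ in range(depth)]
        self.table = [[0] * width for _ in range(depth)]

    def _cell(self, i, item):
        return hash((self.seeds[i], item)) % self.width

    def add(self, item, count=1):
        for i in range(len(self.seeds)):
            self.table[i][self._cell(i, item)] += count

    def estimate(self, item):
        # min_i k[h_i(item)]: collisions only ever inflate a counter,
        # so the smallest row is the least-contaminated estimate.
        return min(self.table[i][self._cell(i, item)] for i in range(len(self.seeds)))

def distinct_estimate(items, num_hashes=64):
    """Repeated-minimum distinct count: hash each item pseudo-uniformly into
    (0, 1]. For N distinct values the expected minimum is 1/(N+1); averaging
    the minimum over several hashes and inverting gives a rough estimate of N."""
    prime = 2 ** 61 - 1
    mins = [1.0] * num_hashes
    for x in items:
        for s in range(num_hashes):
            h = (hash((s, x)) % prime + 1) / prime   # pseudo-uniform in (0, 1]
            if h < mins[s]:
                mins[s] = h
    avg_min = sum(mins) / num_hashes
    return 1.0 / avg_min - 1.0

cms = CountMinSketch()
for word in ["spark"] * 500 + ["hadoop"] * 120 + ["hive"] * 3:
    cms.add(word)
print(cms.estimate("spark"), cms.estimate("hive"))   # ≈ 500 and ≈ 3; never under-counts
print(round(distinct_estimate(range(10_000))))       # roughly 10,000, within a small margin
```

The width/depth pair is exactly the error tolerance mentioned above: a wider table spreads the collisions thinner, while extra rows make it ever less likely that the minimum is a contaminated counter, all for a few kilobytes of memory regardless of how much data streams through.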

Spark Summit 2015 – Ted Dunning’s Presentation

Ted goes on to talk about other cool streaming algorithms such as streaming k-means and t-digest, which sketch the data so that, within a settable error tolerance, they produce a memory-efficient clustered representation of the dataset in a single pass. Streaming k-means tracks centroids within the clusters of the sketch, producing a statistically relevant summary of the full dataset, while t-digest gives you approximate quantiles over huge volumes of samples within a chosen accuracy. Ted describes t-digest as “an OLAP cube of sorts, but for all distributions”.
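To give a flavour of how a single-pass clustering sketch works, here is a much-simplified streaming k-means in Python. It only shows the online-update core under stated assumptions (the function name, the seed-with-the-first-points rule and the toy dataset are mine); the streaming k-means Ted describes keeps a far larger sketch of provisional centroids and re-clusters them afterwards, which this toy does not attempt.

```python
import math
import random

def streaming_kmeans(points, k):
    """Single-pass sketch: keep k centroids with counts and fold each arriving
    point into its nearest centroid as a running mean."""
    centroids, counts = [], []
    for p in points:
        if len(centroids) < k:                 # seed the sketch with the first k points
            centroids.append(list(p))
            counts.append(1)
            continue
        j = min(range(k), key=lambda i: math.dist(p, centroids[i]))  # nearest centroid
        counts[j] += 1
        step = 1.0 / counts[j]                 # shrinking step keeps a running mean
        centroids[j] = [c + step * (x - c) for c, x in zip(centroids[j], p)]
    return centroids, counts

# Two obvious clusters around (0, 0) and (10, 10), streamed in shuffled order.
rng = random.Random(1)
data = [(rng.gauss(0, 1), rng.gauss(0, 1)) for _ in range(500)] + \
       [(rng.gauss(10, 1), rng.gauss(10, 1)) for _ in range(500)]
rng.shuffle(data)
print(streaming_kmeans(data, k=2))             # centroids roughly near (0, 0) and (10, 10)
```

The same “keep a compact set of centroids and merge new data into them” idea is what lets t-digest answer quantile queries from a single pass over the stream.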
So, another week over, and the past, present and future represented once again in our community with little sign of our industry or our passion for it waning! Long may it continue, as the weekend greets us with a well-earned rest and a new week of possibilities to follow!
About Sonra
We are a Big Data company based in Ireland. We are experts in data lake implementations, clickstream analytics, real time analytics, and data warehousing on Hadoop. We can help with your Big Data implementation. Get in touch.
We also run the Hadoop User Group Ireland. If you are interested in attending or presenting, register on the Meetup website.