A week in review - Tableau’s first M&A venture, Machine Learning meets Fine Wine and a Streaming World beyond Batch Processing
This week has been another busy one, with Tableau stepping into the M&A club via its first acquisition. We also explore how a UCL academic and former wine trader developed machine learning algorithms to improve the accuracy of fine wine price prediction, along with the first of a two-part special on big data processing.
So, the big news this week in our industry starts with Tableau’s acquisition of Infoactive, who produce dynamic, infographic-style data visualisations. Their product pulls together dynamically built data sets from multiple sources, enabling smart analysis of the data as a ‘single’ dataset. John Kennedy of SiliconRepublic.com writes about the Tableau acquisition, covering the Canadian start-up success story, which won Best Bootstrap Company at SXSW in 2013 and evidently caught the eye of the then 10 year old industry incumbent Tableau. After launching its first beta in 2014, Infoactive has now sailed into the good graces, and ownership, of Tableau Software.
I do like the structured approach to the acquisition, visible in the announcement from Infoactive, which shows a customer-centric approach to data management. The product line of data visualisation and social media sharing tools will fit Tableau’s product range nicely, as will Infoactive’s skilled staff of three, who bring quality experience with them to the 12 year old data visualisation company.
Whilst most of us enjoy a glass of wine, nobody really enjoys the sometimes sky-high prices paid for fine wine. It would appear former trader and UCL (University College London) academic Dr. Tristan Fletcher agrees. TechCrunch covered Dr. Fletcher’s work, which is an interesting read for our Big Data community and a genuinely novel development in the world of fine wines. He cited the unpredictable nature of fine wine prices as the motivation for developing machine learning algorithms that predict prices in a low-frequency trading environment. Dr. Fletcher collaborated with UCL research students such as Michelle Yeo to develop his ideas, drawing on AI and machine learning concepts. In short, the work builds on techniques such as ‘Gaussian processes’ and ‘Multi-Task Learning’.
A Gaussian process is a probabilistic model for regression: a collection of random variables {Xt, t ∈ T}, any finite subset of which is jointly normally distributed. It is, in my mind, a way of placing a normal distribution over whole functions rather than single values, deriving smoothness, and thus predictability, from seemingly scattered data points. The covariance (kernel) function over the variables Xt, t ∈ T is arguably the heart of the process, since it encodes how strongly nearby observations inform one another.
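To make that concrete, here is a minimal sketch of Gaussian process regression in Python using only NumPy. The “prices”, time points, RBF kernel and noise level are all made-up illustrative assumptions of mine, not anything from Dr. Fletcher’s actual models.

```python
import numpy as np

def rbf_kernel(a, b, length_scale=1.0):
    """Squared-exponential (RBF) covariance between two sets of 1-D points."""
    sq_dists = (a[:, None] - b[None, :]) ** 2
    return np.exp(-0.5 * sq_dists / length_scale**2)

def gp_predict(x_train, y_train, x_test, noise=1e-3):
    """Posterior mean of a Gaussian process regression at x_test."""
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    K_s = rbf_kernel(x_test, x_train)
    # Posterior mean: K_s @ K^-1 @ y  (solve avoids an explicit inverse)
    return K_s @ np.linalg.solve(K, y_train)

# Hypothetical "prices" observed at four points in time
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([100.0, 102.0, 101.0, 105.0])
pred = gp_predict(x, y, np.array([1.5]))
```

The kernel choice is the modelling decision: points close in time are assumed to have correlated prices, which is exactly why sparse, low-frequency trades can still support a prediction.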
Multi-Task Learning is a machine learning approach that learns a problem in parallel with other tasks related to it, using a shared representation built on the commonalities between the main problem and the supporting tasks. I think it is a groovy way to improve generalisation and increase machine learning accuracy. Dr. Fletcher’s background in AI research is not lost here, with this creative adoption of AI algorithms for the fine wine market. His algorithms predict wine prices with a higher degree of accuracy than traditional trading methods, but in an industry that is “trading to drink” (i.e. selling one case to fund the drinking of another) rather than trading to profit, the exercise is somewhat academic at the moment. Dr. Fletcher has formed a wine asset management company called Invinio, which plans to collaborate with UCL to refine these machine learning algorithms into the future.
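The intuition behind Multi-Task Learning can be sketched in a few lines of Python: if two tasks share underlying structure, pooling their data to estimate a shared representation usually beats fitting each task alone. The data below is synthetic and purely illustrative, the simplest form of parameter sharing, and not Dr. Fletcher’s actual method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two related tasks (think: two similar wines) sharing the same true weights
w_true = np.array([2.0, -1.0])
X1 = rng.normal(size=(5, 2)); y1 = X1 @ w_true + 0.1 * rng.normal(size=5)
X2 = rng.normal(size=(5, 2)); y2 = X2 @ w_true + 0.1 * rng.normal(size=5)

# Single-task fit: use only task 1's five samples
w_single, *_ = np.linalg.lstsq(X1, y1, rcond=None)

# Multi-task fit: pool both tasks' samples to estimate the shared weights
X_pool = np.vstack([X1, X2]); y_pool = np.concatenate([y1, y2])
w_multi, *_ = np.linalg.lstsq(X_pool, y_pool, rcond=None)
```

With twice the data informing the same shared weights, the pooled estimate is typically closer to the truth, which is the generalisation benefit the paragraph above describes.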
Tyler Akidau, Software Engineer at Google, has a fascinating article (part 1 of 2) on data streaming, along with a compelling argument for its wider adoption and development. Tyler’s two-part series is meaty, so give yourself 45 minutes to read his article, which is as fascinating as it is detailed! He explores some of the myths around streaming and bounded versus unbounded data, along with some misunderstandings about the effectiveness of batch processing in big data. He does a great job of exploring data streaming techniques, including time-agnostic processing and windowing, which are particularly fascinating to me as clever processing techniques that can achieve both high consistency and low latency. Tyler ably argues that consistency does not have to be sacrificed for low latency in the data streaming process.
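As a taste of what windowing means in practice, here is a minimal Python sketch of tumbling (fixed-width) event-time windows. The function name and the event data are my own illustration, not from Tyler’s article, but the idea, bucketing events by the time they occurred rather than the time they arrived, is the one he describes.

```python
from collections import defaultdict

def tumbling_window_sums(events, width):
    """Group (timestamp, value) events into fixed-width windows and sum each.

    Windowing by event time lets a streaming pipeline produce the same
    answer as a batch job, even when events arrive out of order.
    """
    windows = defaultdict(float)
    for ts, value in events:
        window_start = (ts // width) * width  # bucket by event time
        windows[window_start] += value
    return dict(sorted(windows.items()))

# Out-of-order events still land in the correct event-time window
events = [(1, 10.0), (61, 5.0), (3, 2.0), (65, 1.0)]
print(tumbling_window_sums(events, width=60))  # {0: 12.0, 60: 6.0}
```

Real streaming engines add watermarks and triggers on top of this to decide when a window’s result is ready to emit, which is where the consistency-versus-latency trade-off Tyler discusses comes in.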
Part 2 of Tyler’s “two parter” (it could yet become a mini-series!) focuses on what goes into a dataflow model, a comparison of data models, and examples of why batch processing and dual-pipeline architectures like Lambda should be relegated to the annals of Big Data history… We shall await the next installment.
Overall, the week has shown us once again how fast-moving and insightful our industry is, which is no surprise, as “insight” is in the DNA of the big data industry! What next week brings is anybody’s guess, but I bet we could model a prediction that would be as impressively accurate as our journey to date! Have a great weekend everybody!!
We are a Big Data company based in Ireland. We are experts in data lake implementations, clickstream analytics, real time analytics, and data warehousing on Hadoop. We can help with your Big Data implementation. Get in touch.