Big Data News – Innovations on Hadoop by Twitter & LinkedIn…
From Twitter’s namespace design to LinkedIn’s Gobblin 0.5.0 release, which now includes Apache Kafka integration
The team at Sonra have been impressed by the developments reviewed this week. The social media giants once again come to the forefront with Hadoop engineering that merits mention for its insightful design responses to problems experienced at scale.
Twitter hosts about 300PB of data on its Hadoop filesystems and therefore has a ‘huge’ namespace problem. They use HDFS Federation for scalability (horizontal namespace scaling, which improves read/write throughput on the cluster because more NameNodes share the load) and NameNode High Availability for reliability of the namespace within HDFS. However, Twitter found that managing this far more complex namespace configuration is problematic, so they solved the issue by developing TwitterViewFS, an extension of ViewFS. In essence, TwitterViewFS presents user, temp and log files under one “apparent” namespace. With plain HDFS Federation, the use of URIs and multiple namespaces has a runaway effect at scale: the number of namespace-to-namespace mappings grows quadratically (O(n²)), creating mapping overheads, which TwitterViewFS avoids in Twitter’s use case. After the Hadoop client initializes from the cluster configuration directory, TwitterViewFS merges all HDFS namespaces, linking them with hdfs links, and once the configuration merge completes there is a single TwitterViewFS namespace for operations. You can find out more in Twitter’s post on Hadoop filesystems at Twitter. Instead of going to the trouble of implementing this workaround, Twitter could have just used the MapR Hadoop distribution, where the NameNode has been replaced by decentralized metadata. The superior solution by far.
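For context, stock ViewFS (which TwitterViewFS extends) stitches federated namespaces together through a client-side mount table in core-site.xml. Below is a minimal illustrative sketch; the cluster name and NameNode hosts are hypothetical placeholders, not Twitter’s actual configuration:

```xml
<!-- core-site.xml: hypothetical ViewFS mount table for a federated cluster -->
<configuration>
  <!-- The default filesystem becomes the ViewFS view, not a single NameNode -->
  <property>
    <name>fs.defaultFS</name>
    <value>viewfs://clusterX</value>
  </property>
  <!-- Map /user to one namespace... -->
  <property>
    <name>fs.viewfs.mounttable.clusterX.link./user</name>
    <value>hdfs://nn1.example.com:8020/user</value>
  </property>
  <!-- ...and /tmp and /logs to others, yielding one apparent namespace -->
  <property>
    <name>fs.viewfs.mounttable.clusterX.link./tmp</name>
    <value>hdfs://nn2.example.com:8020/tmp</value>
  </property>
  <property>
    <name>fs.viewfs.mounttable.clusterX.link./logs</name>
    <value>hdfs://nn3.example.com:8020/logs</value>
  </property>
</configuration>
```

With a mount table like this, a client path such as /user/alice transparently resolves to the right NameNode, which is the “one apparent namespace” idea that TwitterViewFS automates at Twitter’s scale.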
Exciting times at LinkedIn, where they have introduced Gobblin V0.5.0. Its notable features are varied, including contributions from the OSS community since the project was open sourced this year. Gobblin consolidates roughly 15 disparate data pipelines into a single framework that processes both batched and streaming data. Each of those pipelines had its own data characteristics, architecture and fault-tolerance behaviour, so being able to process them on one platform is a great achievement. LinkedIn also advises that Gobblin V0.5.0 includes Apache Kafka support, which is new. With Camus retired from the LinkedIn big data stack, Gobblin will be their engine for continuous data ingestion, with data pushed every 10 minutes and published on the hour. More can be found about this exciting development at Bridging Batch and Streaming Data Ingestion with Gobblin.
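Gobblin jobs are driven by property files, so a Kafka-to-HDFS ingestion job looks roughly like the sketch below. This is a hypothetical example in the style of Gobblin’s documented job configuration; the broker address and topic pattern are placeholders, and exact keys and class names should be checked against your Gobblin version:

```properties
# Hypothetical Gobblin job sketch: continuous Kafka-to-HDFS ingestion
# (verify keys and class names against your Gobblin version)
job.name=KafkaToHdfsExample
job.group=Ingestion

# Pull records from Kafka; brokers and topics are placeholders
source.class=gobblin.source.extractor.extract.kafka.KafkaSimpleSource
kafka.brokers=broker1.example.com:9092
topic.whitelist=events.*

# Write the extracted records out and publish them to their final HDFS location
writer.builder.class=gobblin.writer.SimpleDataWriterBuilder
writer.destination.type=HDFS
writer.output.format=txt
data.publisher.type=gobblin.publisher.BaseDataPublisher
```

Running a job like this on a scheduler every 10 minutes, with hourly publication, matches the ingestion rotation LinkedIn describes.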
These social media giants have shown us that with great vision and engineering, the march of progress in our ‘big data’ community never stops. Where to next? I guess we shall wait and see…
We are a Big Data company based in Ireland, with expertise in data lake implementations, clickstream analytics, real-time analytics, and data warehousing on Hadoop. We can help with your Big Data implementation. Get in touch today; we would love to hear from you!