Self-Service Analytics in the Data Lake

July 23, 2015

Deriving Value from your data…


We all know that decision makers and analysts need quick access to all of the structured and unstructured data in an enterprise. Typically this data is locked away in various source systems and can’t be queried from a single central location.

The data warehouse set out to fix this problem ages ago, and it has served us well in this role for the last 25 years. However, if you have ever worked on a data warehouse project, you will be aware of the effort and resources it takes to get data first into the EDW and then out again to decision makers. Anywhere between three and nine months can go into delivering a new subject area to the business, spread across all sorts of tasks in a very formal process.

  • The business analysts gather the requirements.
  • Data analysts profile the data and document DQ defects.
  • Source to target maps are created.
  • Data modellers come up with conceptual, logical, physical, and dimensional models, and then integrate the new data sets into the Enterprise Data Model.
  • The IT department needs to commit a legion of ETL developers to the job. They code ‘data flows’ to move the data into the warehouse and apply the relevant transformations on the way.
  • All sorts of testing has to be done, starting with unit testing and moving on to system/integration testing, UAT, and sometimes performance testing.

While all of this activity goes on, changes to the source systems take place that need to be dealt with as well. Several thousand data flows have created a web of complex interdependencies that is hard to manage. Changes in one area ripple through all of the other components. Think refactoring, redevelopment, remodelling, and lots of regression testing.

In the meantime, if a decision maker wants to get insights from the data, we are back to square one. An army of business analysts loads data from the source applications and tries to stitch it together in Excel spreadmarts. This takes weeks and drives people up the wall. The accuracy of these efforts is also questionable.

Various efforts, such as bringing agile methodologies and Scrum to the data warehouse, have met with mixed success. From our experience this certainly can cut through some of the overhead. However, implementing a data warehouse is still a highly complex and formalised undertaking with a great deal of process overhead.

Can we not have our cake and eat it too?

Two developments over the last couple of years have made it possible to quickly generate insights from data sets that have not yet been loaded to the data warehouse:

  • The rise of cheap storage and distributed compute frameworks such as Flink or Spark. These allow us to centrally store and process large volumes of data in an enterprise data hub (aka data lake); see the short sketch after this list.
  • Data discovery and self-service analytics tools such as Datameer have matured. It is now possible for a non-IT employee (typically some sort of data worker) to apply data transformations through an intuitive user interface without having to write any code (not even SQL). True, these users still need a good grasp of data-related concepts. However, they don’t have to be coders.
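
To make the first point concrete, the sketch below shows what ad-hoc exploration over raw files in the lake can look like with PySpark. It is a minimal illustration only: it assumes a running Spark environment, and the landing-zone path and column names (orders, country, order_value) are hypothetical.

    # A minimal sketch of ad-hoc exploration over raw files in the lake.
    # The path and column names below are hypothetical.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("lake-exploration").getOrCreate()

    # Read raw CSV exports straight from the landing zone, no upfront modelling
    orders = spark.read.csv("/datalake/landing/orders/", header=True, inferSchema=True)

    # Answer an ad-hoc question without waiting for an ETL release cycle
    orders.groupBy("country").sum("order_value").show()

    spark.stop()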

You may wonder where this leaves the data warehouse. Don’t worry! The data warehouse as a concept of highly reliable and integrated data is here to stay. However, the new technologies and concepts allow us to turn around answers on new data sets much more quickly than in the past, and all of this with minimal involvement from IT. Heaven!

We have identified a number of areas where the data lake can help out.

  • A central data hub for all of an enterprise’s data – structured and unstructured.
  • A central location to audit changes to data.
  • A central place to evolve a schema over time and record historic schema changes.
  • A central place for metadata management.
  • A playground environment for data workers to explore data sets that are not available in the data warehouse.
  • A feeder application for the data warehouse, e.g. the staging area can be offloaded to the data lake (see the sketch after this list).
  • Insights derived from analysis in the data lake can be fed into the data warehouse development lifecycle, thereby shortening the time it takes to implement a subject area in the data warehouse.
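
To illustrate the feeder role mentioned above, here is a hedged sketch of offloading staging work to the lake with PySpark: raw source extracts are landed untouched, lightly conformed, and written back out as Parquet for the warehouse load. The source system, paths, and column names are assumptions made for the example.

    # A sketch of offloading the staging area to the data lake.
    # Paths, the source system, and column names are assumptions.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("staging-offload").getOrCreate()

    # 1. Raw extract from a source system, landed in the lake untouched
    raw_customers = spark.read.json("/datalake/landing/crm/customers/")

    # 2. Light staging work: trim strings, cast types, remove duplicates
    staged = (raw_customers
              .withColumn("customer_name", F.trim(F.col("customer_name")))
              .withColumn("created_at", F.to_timestamp(F.col("created_at")))
              .dropDuplicates(["customer_id"]))

    # 3. Hand conformed Parquet files over to the warehouse load process
    staged.write.mode("overwrite").parquet("/datalake/staging/crm/customers/")

    spark.stop()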

We will look at the components of the data lake and self-service analytics in more detail in our next post. We will also explore some of the items you need to address when implementing a data lake.

About Sonra

We are a Big Data company based in Ireland. We are experts in data lake implementations, clickstream analytics, real-time analytics, and data warehousing on Hadoop. We can help with your Big Data implementation. Get in touch.

We also run the Hadoop User Group Ireland. If you are interested in attending or presenting, register on the Meetup website.

This post is also part of our series on data warehousing in the age of Big Data.