Data lake

A data lake is a method of storing data within a system or repository, in its natural format,[1] that facilitates the collocation of data in varying schemata and structural forms, usually as object blobs or files.

Invention

James Dixon, then chief technology officer at Pentaho, coined the term[2] to contrast it with the data mart, which is a smaller repository of interesting attributes extracted from raw data.[3] He argued that data marts have several inherent problems, often referred to as information siloing, and that data lakes are the optimal solution. PricewaterhouseCoopers said that data lakes could "put an end to data silos".[4] In their study on data lakes they noted that enterprises were "starting to extract and place data for analytics into a single, Hadoop based repository."

Characteristics

The idea of a data lake is to have a single store for all of the data in the enterprise, ranging from raw data (an exact copy of source system data) to transformed data used for various tasks including reporting, visualization, analytics and machine learning.
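The raw-to-transformed idea can be sketched as two zones in one store: the raw zone keeps an exact copy of a source extract, and a transform step derives a view for reporting. This is an illustrative sketch only; the zone layout, file names and fields are assumptions for the example, not a standard data lake API.

```python
# Hypothetical two-zone layout: raw/ holds an exact copy of source data,
# transformed/ holds derived data for reporting. Names are illustrative.
import csv
import io
import tempfile
from collections import defaultdict
from pathlib import Path

lake = Path(tempfile.mkdtemp())
(lake / "raw").mkdir()
(lake / "transformed").mkdir()

# 1. Land an exact copy of the source extract in the raw zone.
source_extract = "region,amount\neast,10\nwest,5\neast,7\n"
raw_path = lake / "raw" / "sales.csv"
raw_path.write_text(source_extract)

# 2. Transform: aggregate sales per region for a reporting view.
totals = defaultdict(int)
for row in csv.DictReader(io.StringIO(raw_path.read_text())):
    totals[row["region"]] += int(row["amount"])

report_path = lake / "transformed" / "sales_by_region.csv"
report_path.write_text(
    "region,total\n" + "".join(f"{r},{t}\n" for r, t in sorted(totals.items()))
)
```

Because the raw copy is preserved untouched, the transformed view can be rebuilt or replaced later without going back to the source system.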

The data lake includes structured data from relational databases (rows and columns), semi-structured data (CSV, logs, XML, JSON), unstructured data (emails, documents, PDFs) and even binary data (images, audio, video), thus creating a centralized data store accommodating all forms of data.
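A minimal sketch of this schema-on-read style of storage, using a local directory as a stand-in for an object store, with made-up file names and contents:

```python
# Illustrative only: a local directory standing in for an object store,
# holding structured, semi-structured and binary data side by side.
import json
import tempfile
from pathlib import Path

lake = Path(tempfile.mkdtemp())

# Structured: rows and columns exported from a relational database.
(lake / "customers.csv").write_text("id,name\n1,Ada\n2,Grace\n")

# Semi-structured: application log events as JSON.
(lake / "events.json").write_text(json.dumps([{"user": 1, "action": "login"}]))

# Unstructured/binary: opaque bytes, here the first bytes of a PNG file.
(lake / "photo.png").write_bytes(b"\x89PNG\r\n\x1a\n")

# No schema is imposed on write; each consumer interprets the bytes it reads.
stored = sorted(f.name for f in lake.iterdir())
```

The point of the sketch is that the store itself treats every object as bytes; structure is applied only when a consumer reads the data.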

Examples

One example of a data lake is Apache Hadoop's distributed file system, HDFS.

Many companies also use cloud storage services such as Amazon S3.[5] There is growing academic interest in the concept of data lakes; for instance, Personal DataLake,[6] an ongoing project at Cardiff University, aims to create a new type of data lake that manages the big data of individual users by providing a single point for collecting, organizing, and sharing personal data.[7]

The earlier data lake (Hadoop 1.0) had limited capabilities: batch-oriented processing (MapReduce) was the only processing paradigm associated with it. Interacting with the data lake required expertise in Java and MapReduce, or in higher-level tools like Pig and Hive (which were themselves batch-oriented). With the advent of Hadoop 2.0 and its separation of duties, with resource management taken over by YARN (Yet Another Resource Negotiator), new processing paradigms such as streaming, interactive, and online processing have become available via Hadoop and the data lake.

Criticism

In June 2015, David Needle characterized "so-called data lakes" as "one of the more controversial ways to manage big data".[8] PricewaterhouseCoopers was also careful to note in its research that not all data lake initiatives are successful. It quotes Sean Martin, CTO of Cambridge Semantics:

We see customers creating big data graveyards, dumping everything into HDFS [Hadoop Distributed File System] and hoping to do something with it down the road. But then they just lose track of what’s there.[4]

They advise that "The main challenge is not creating a data lake, but taking advantage of the opportunities it presents."[4] They describe companies that build successful data lakes as gradually maturing their lake as they figure out which data and metadata are important to the organization.

References

  1. The growing importance of big data quality
  2. Woods, Dan (21 July 2011). "Big data requires a big architecture". Tech. Forbes.
  3. Dixon, James. "Pentaho, Hadoop, and Data Lakes". James Dixon's Blog. Retrieved 7 November 2015. He wrote: "If you think of a datamart as a store of bottled water – cleansed and packaged and structured for easy consumption – the data lake is a large body of water in a more natural state. The contents of the data lake stream in from a source to fill the lake, and various users of the lake can come to examine, dive in, or take samples."
  4. Stein, Brian; Morrison, Alan (2014). Data lakes and the promise of unsiloed data (PDF) (Report). Technology Forecast: Rethinking integration. PricewaterhouseCoopers.
  5. Tuulos, Ville (22 September 2015). "Petabyte-Scale Data Pipelines with Docker, Luigi and Elastic Spot Instances".
  6. http://ieeexplore.ieee.org/xpl/abstractAuthors.jsp?reload=true&arnumber=7310733
  7. http://www.researchgate.net/publication/283053696_Personal_Data_Lake_With_Data_Gravity_Pull
  8. Needle, David (10 June 2015). "Hadoop Summit: Wrangling Big Data Requires Novel Tools, Techniques". Enterprise Apps. eWeek. Retrieved 1 November 2015. Walter Maguire, chief field technologist at HP's Big Data Business Unit, discussed one of the more controversial ways to manage big data, so-called data lakes.
This article is issued from Wikipedia - version of 11/28/2016. The text is available under the Creative Commons Attribution/Share Alike license, but additional terms may apply for the media files.