Data has become a critical asset for many organizations and continues to transform how business is done. Big data drives decision making for many organizations by providing up-to-date analytics. Acquiring, cleaning, storing, publishing, securing, and making this data available for consumption are critical parts of the data journey. A Data Lake is a repository that enables developers, data scientists, and analysts to store data of any size, shape, and speed, and to run all types of processing and analytics across platforms and languages. It removes the complexity of ingesting and storing all of your data while making it faster to get up and running with batch, streaming, and interactive analytics. Most Data Lakes follow an ELT (Extract, Load, Transform) methodology, in which raw data is landed in the lake first and transformed later for specific consumption purposes.
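The ELT flow described above can be sketched in a few lines. This is a minimal illustration with hypothetical data and function names, not a reference implementation: source records are loaded into a "raw zone" untouched, and cleaning happens only when a consumer transforms them.

```python
import csv
import io
import json

# Hypothetical source extract: note the incomplete record (missing amount).
RAW_CSV = """id,amount,region
1,120.50,us-east
2,,eu-west
3,87.25,us-east
"""

def extract_and_load(raw_text):
    """Extract source records and land them, uncleaned, in the raw zone."""
    return list(csv.DictReader(io.StringIO(raw_text)))

def transform(raw_zone):
    """Transform on read: drop incomplete rows, aggregate amount by region.

    In ELT this cleaning step runs after loading, shaped for one consumer;
    the raw zone keeps every record for other uses.
    """
    curated = {}
    for row in raw_zone:
        if not row["amount"]:  # incomplete records filtered here, not at ingest
            continue
        curated[row["region"]] = curated.get(row["region"], 0.0) + float(row["amount"])
    return curated

raw = extract_and_load(RAW_CSV)        # all 3 rows land in the raw zone
print(json.dumps(transform(raw)))      # prints {"us-east": 207.75}
```

The key design point is that the raw zone retains everything, so a different consumer can apply a different transformation later without re-ingesting the source.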
As recently as five years ago, the major focus of IT was on big ERP applications such as SAP, Salesforce, and Oracle Financials. With Data Lakes, enterprises are rethinking that strategy. Data Lakes will also have a significant impact on enterprise integration, since lakes could become the data providers for consumption. Should data be the core IT asset, with apps developed around data ingestion and consumption?
This meetup will focus on building Data Lakes: ingestion, consumption, architecture, security, and the tools used to build them. We will explore various topics related to Data Lakes and integrations.