What we're about

When I first entered the world of Big Data as an engineer, I didn’t realize what enormous technological challenges I would be facing. As time goes by, despite my experience, I still struggle to keep up with new technologies, frameworks, and architectures.

Consequently, I created this meetup to explore the challenges of Big Data, especially in cloud environments. I use AWS, GCP, and data-center infrastructure to answer the technical questions of anyone navigating their way through the big data world.

In this meetup we will try to answer questions regarding: big data best practices, data science, data engineering, BI, cost management, performance tips, security tips, and cloud best practices.

We present lecturers working across several cloud vendors, various technologies and frameworks, and startups building data products. Basically, if it is related to data, this is THE meetup.

Some of our online materials (mixed content from several cloud vendors):

YouTube channels:

Data Engineers
Data Science
DevOps Engineers
Big Data Architects
Solution Architects

Upcoming events (3)

Keep your data encrypted in BigQuery

Lecturer: Ran Tibi
Lecture Language: English
Session description:
If you work with data and build a data warehouse, you probably have some sensitive data that you want to keep secure.
While BigQuery encrypts all data before it is written to disk, anyone with read access to the tables has full visibility into the data, including sensitive data and PII.
In this talk, we will walk through a use case showing how to build a secure end-to-end process that encrypts data at the application level before inserting it into BigQuery, and lets users decrypt it only at query time, without needing to know the actual encryption key.
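The flow the session describes (encrypt before insert, decrypt only at query time) can be sketched in a few lines of Python. This is a toy illustration only: the hash-based keystream below stands in for a real AEAD cipher (BigQuery itself offers AEAD SQL functions backed by Tink keysets), and the function names are mine, not from the talk:

```python
import hashlib
import secrets

# TOY cipher for illustration only -- production pipelines should use a
# vetted AEAD scheme (e.g. Google Tink / BigQuery AEAD functions).
def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def encrypt_field(key: bytes, plaintext: str) -> bytes:
    """Encrypt a sensitive column value at the application level, before insert."""
    nonce = secrets.token_bytes(16)
    data = plaintext.encode()
    return nonce + bytes(a ^ b for a, b in zip(data, _keystream(key, nonce, len(data))))

def decrypt_field(key: bytes, ciphertext: bytes) -> str:
    """Decrypt at query time; querying users never handle the raw key themselves."""
    nonce, body = ciphertext[:16], ciphertext[16:]
    return bytes(a ^ b for a, b in zip(body, _keystream(key, nonce, len(body)))).decode()

key = secrets.token_bytes(32)
row = {"user_id": 1, "email": encrypt_field(key, "jane@example.com")}
# Plain read access sees only opaque bytes...
assert row["email"] != b"jane@example.com"
# ...while the sanctioned decryption path recovers the value.
assert decrypt_field(key, row["email"]) == "jane@example.com"
```

In a real deployment the key would live in a KMS, and decryption would happen inside the query engine rather than in client code.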
Speaker Biography:
A tech lover and software engineer for over 15 years, with more than 6 years in the data field.
For the last 2 years, I have been working as a data consultant and CTO-as-a-service, implementing many DWH architectures and data pipelines.

Data Mesh: Experimentation to Industrialisation

Discover what happened when a large financial services organisation, already underway with a DevOps and Agile transformation, moved from a monolithic Data Lake architecture to a federated, self-service Data Mesh on Google Cloud Platform (GCP).

The key driver for the transformation was to reduce lead times and improve flow efficiency for business change. The typical transformation approaches delivered substantial efficiencies across the core operational platforms, but no material impact was seen on the downstream data publishing and data analytics platforms. These faced more fundamental blockers around lack of autonomy, monolithic architecture and proxy ownership of the data, compounded by a legacy tech estate of on-prem data warehouses, data marts, data lakes, etc. End-to-end solutions required coordination between specialised teams working in silos, leading to extended lead times.

This required a paradigm shift on both the systems architecture and Ways of Working.

In this session, we’ll explore the key driving principles for the Data Mesh from MVP, to productionisation to industrialisation.

The Data Mesh was built as an open, self-service platform where the various tenants can contribute features themselves alongside using the core platform's self-service capabilities. Its success led to buy-in across the business, and Data Mesh adoption accelerated rapidly. During the talk, we'll highlight some of the key outcomes and business value delivered through the Data Mesh, including:

  • Rapid business value delivered to many ongoing programmes building ML models, MI dashboards, cross-domain analytics, data provider APIs, enquiry and reporting apps, etc.
  • Teams able to react to fast-changing business and client demand, with lead times dropping from months to days.
  • New business models identified.
  • Parity brought across the varying levels of technology maturity and skills within the organisation.

The Data Mesh is now a de facto part of the downstream data publishing, reporting and analytics for the organisation.

Who should watch?
Anyone who wants to understand how Data Mesh can help businesses achieve their organisational objectives.

What you'll learn?

  • What Data Mesh is.
  • The key driving principles.
  • How this emerging concept delivers business value.
  • How Data Mesh works across different programmes.

Lecture Language: English
Sunny Jaisinghani - Data Mesh Platform Owner
Simon Massey - Data Mesh Lead Technologist

High-Performance, Low-Latency Database Architecture

17:00 GMT+2, Gathering & Networking
17:05 Lecture
17:30 Q&A
Lecturer: Guy Shtub, Head of Training at ScyllaDB
Language: English
In this talk, I'll speak about modern, distributed, high-performance databases, covering topics like architecture, consistency, high availability, replication, and scaling. As an example, I'll use ScyllaDB; however, the concepts hold for Apache Cassandra as well as other column-family databases based on the Bigtable paper published in 2006.
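As light preparation for the consistency and replication discussion, it may help to know the quorum overlap rule these databases rely on: a read and a write are guaranteed to intersect on at least one up-to-date replica whenever R + W > RF (replicas read plus replicas written exceeds the replication factor). A minimal sketch of that arithmetic (the function names are mine, not a ScyllaDB API):

```python
def quorum(replication_factor: int) -> int:
    """Smallest majority of replicas, e.g. 2 out of 3."""
    return replication_factor // 2 + 1

def is_strongly_consistent(rf: int, read_cl: int, write_cl: int) -> bool:
    """True when every read is guaranteed to overlap the latest acknowledged write."""
    return read_cl + write_cl > rf

rf = 3
print(quorum(rf))                                          # 2
print(is_strongly_consistent(rf, quorum(rf), quorum(rf)))  # True: QUORUM reads + QUORUM writes
print(is_strongly_consistent(rf, 1, 1))                    # False: ONE/ONE can return stale data
```

This trade-off between consistency level, latency, and availability is exactly the dial the talk's topics (consistency, replication, scaling) revolve around.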
About the lecturer:
Guy Shtub is Head of Training at ScyllaDB and holds a B.Sc. in Software Engineering from Ben-Gurion University. He co-founded two start-ups and is experienced in creating products that people love.