- Changing the Game with Cloud Data Warehousing
Please register at this link: https://www.snowflake.com/event/changing-the-game-with-cloud-data-warehousing-emea-ireland/?utm_source=Sonra&utm_medium=email&utm_campaign=Changing

Join Snowflake, Fivetran, Looker and Sonra on Tuesday, October 1st at The Alex Hotel in Dublin for a deep-dive session to learn how you can escape the constraints of legacy technology and reinvent your data analytics with the data warehouse built for the cloud.

Agenda:
9:00-9:30 AM: Registration & Breakfast
9:30-10:30 AM: Kent Graziano, Snowflake
10:30-10:45 AM: Fivetran
10:45-11:00 AM: Looker
11:00-11:30 AM: Uli Bethke, Sonra & Founder of ‘Hadoop User Group, Ireland’
11:30 AM: Event close
- Data for Breakfast Dublin
Please register for the event on this link: https://www.snowflake.com/data-for-breakfast/dublin/?utm_source=Sonra&utm_medium=referral&referredBy=Sonra-Marketing-Email

Staying competitive in today’s world means staying agile by making swift, intelligent, data-driven decisions. Join us at the Dublin stop on our Data for Breakfast tour to see how you can enable your organisation to be data-driven. You’ll leave with a playbook to:
- Escape the constraints of legacy technology.
- Deliver insight from all your data to all your business users.
- Create your own data sharing economy.

Agenda:
8:15 AM: Registration and Breakfast
9:00 AM: Enabling Your Organisation to be Data-Driven with Snowflake
9:20 AM: Sonra Intelligence: How Medvault helps clinics analyse medical data with Snowflake and Flexter. In this customer case study, Sonra will walk you through the detailed design and architecture of Medvault’s data warehouse implementation on Snowflake.
9:45 AM: Looker
10:00 AM: Expert Q&A Panel & Networking
- Life with GDPR - From Governance to Optimisation
Our panel includes:
- John Keyes, Assistant Commissioner, Data Protection Commission
- Laura Bowmer, Head of Customer Engagement, Aston Martin
- Joe Madigan, Head of Customer Data and Retail Analytics, Bank of Ireland
- Kate Colleary, Founder of Frontier Privacy and IAPP Country Leader for Ireland

We will focus on what has happened since GDPR go-live last month and how this brave new world has impacted data management. Our panel will discuss the key challenges and opportunities we face under the new regulation, and what we can expect next.

Location: Informatica Ireland, 1 Windmill Lane, SOBO District, Dublin 2, D02 F206
- Cloud Data Warehousing and BI
Registration for the event is here: https://www.snowflake.net/0-snowflake-dublin/

Please join Sonra and Snowflake for a 90-minute workshop where we will show you how easy it is to get started with the Snowflake cloud data warehouse. We will also demonstrate some of Snowflake’s standout features, such as zero-copy cloning, undrop, time travel and much more. See us in action generating insights from Irish property data, enriched with data from the 2011 census and from OpenStreetMap.
- Everything you wanted to know about the DPO, but were afraid to ask!
DAMA Ireland are delighted to announce our next free-to-attend event. Join us on Thursday, March 29th in Bank of Ireland, Grand Canal Square, for a panel discussion on the Data Protection Officer under GDPR. We have assembled a panel of industry experts to discuss:
- The Data Protection Officer role
- The Data Protection Officer’s responsibilities pre- and post-GDPR go-live
- Which organisations need a Data Protection Officer
- Where the Data Protection Officer fits within an organisation’s data governance structure
- The pros and cons of outsourcing the Data Protection Officer role
- The Data Protection Officer and the Data Protection Office

Please join us for networking from 6pm, with the event starting at 6:45pm.
6:00pm – 6:45pm: Networking
6:45pm – 6:50pm: Welcome Remarks
6:50pm – 7:10pm: Panel
7:55pm – 8:00pm: Wrap Up
- Graph Databases: Just hype, or the end of the relational world?
Graph Databases - Just hype, or the end of the relational world? - Albert Godfrind, Spatial and Graph Expert, Oracle EMEA

Modeling information as graphs is a natural and intuitive way to understand complex relationships such as social networks or financial connections (Panama Papers). Graph databases abstract those relationships as nodes and links. They enable powerful new analytics using built-in graph algorithms and the PGQL language. Since 2015, Oracle has offered a graph database with Oracle Spatial and Graph. The graphs can be stored in Oracle NoSQL, Apache HBase or an Oracle Database, and the analytics are performed by an in-memory engine for optimal performance.

In this paper we explain the fundamentals of property graph databases and highlight use cases where property graph implementations are superior to relational technologies. Besides looking into the architecture and query language, we show which kinds of applications can benefit from specific algorithms. We cover the conversion of relational tables to graph structures, as well as the visualization of property graph data using commercial and open source tools. Finally, we will look into a series of benchmarks, based on typical datasets, which have been conducted against similar technologies.

Bio: The session will be delivered by Albert Godfrind of Oracle Corporation. Albert has over 25 years of experience in designing, developing, and deploying IT applications. His interest and enthusiasm for spatial information and geographical information systems started when he began using the spatial extensions of the Oracle database in 1998. Ever since, Albert has been evangelizing the use of spatial information to GIS and BI communities across Europe, consulting with partners and customers, speaking at conferences, and designing and delivering in-depth technical training.
Albert is one of the authors of the first book on Oracle Spatial, "Pro Oracle Spatial - The essential guide to developing spatially enabled business applications".
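The abstract's core idea, converting relational rows into a graph and running a built-in-style graph algorithm over it, can be sketched in plain Python. This is a hedged illustration only: the account and transfer data are invented, and Oracle's PGQL and in-memory engine are not used here.

```python
from collections import deque

# Hypothetical relational data: an account table and a transfer table
# (the kind of financial-connection data the abstract mentions).
accounts = [(1, "Alice"), (2, "Bob"), (3, "Carol"), (4, "Dave")]
transfers = [(1, 2), (2, 3), (3, 4)]  # (from_account, to_account)

# Convert the relational rows into a property-graph-style adjacency list:
# each account becomes a node, each transfer an edge.
graph = {acc_id: [] for acc_id, _ in accounts}
for src, dst in transfers:
    graph[src].append(dst)

def shortest_path(start, goal):
    """Breadth-first search: the kind of algorithm a graph database
    exposes directly, instead of via layers of recursive SQL joins."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for neighbour in graph[path[-1]]:
            if neighbour not in seen:
                seen.add(neighbour)
                queue.append(path + [neighbour])
    return None

print(shortest_path(1, 4))  # [1, 2, 3, 4]
```

A chain of money transfers like this is exactly the kind of relationship (as in the Panama Papers analyses) that a path query expresses naturally.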
- Google BigQuery for Data Warehousing. Raw to valuable data using Spark.
Raw to Valuable Data Using Spark, Parquet and Python - Barry Sheridan, Data Scientist, Tenable

This talk will cover Tenable’s approach to converting big, messy datasets into manageable, flat datasets using Spark, Parquet and Python. It will cover the workflow of starting with a compressed, messy dataset and ending up with a flat, clean dataset. Along the way we will use Python to show:
- conversion of a raw dataset to Parquet files
- application of aggregations to Parquet files with Spark
- example analysis of the aggregated output to find valuable information

BigQuery and the Evolution of Data Services at Google - Kirill Evreinov, Solution Engineer, Google

In his presentation, Kirill will cover the evolution of big data services at Google with a focus on BigQuery. He will contrast BigQuery with traditional data warehouse solutions and finish off his presentation with a BigQuery demo and a Q&A session.

Tenable are the sponsors of the event, and food will be available.
Doors open: 18:00
Presentations start: 18:30
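The raw-to-flat workflow the first talk describes can be sketched with the Python standard library alone. This is a minimal stand-in under stated assumptions: gzip-compressed JSON lines play the messy input, CSV plays the flat output, and the records are invented; Spark and Parquet themselves are not used here.

```python
import csv
import gzip
import io
import json
from collections import defaultdict

# Hypothetical messy input: gzip-compressed JSON lines with nested
# fields and missing values, standing in for the "big, messy dataset".
raw = gzip.compress(b"\n".join([
    b'{"host": {"name": "web-1"}, "severity": "high"}',
    b'{"host": {"name": "web-2"}}',
    b'{"host": {"name": "web-1"}, "severity": "high"}',
]))

# Step 1: decompress and flatten each record into a plain row.
rows = []
for line in gzip.decompress(raw).splitlines():
    record = json.loads(line)
    rows.append({
        "host": record.get("host", {}).get("name", "unknown"),
        "severity": record.get("severity", "none"),
    })

# Step 2: aggregate - count findings per host (the step the talk
# performs with Spark over Parquet files).
counts = defaultdict(int)
for row in rows:
    counts[row["host"]] += 1

# Step 3: write the flat, clean dataset out as CSV (Parquet stand-in).
out = io.StringIO()
writer = csv.DictWriter(out, fieldnames=["host", "severity"])
writer.writeheader()
writer.writerows(rows)

print(dict(counts))  # {'web-1': 2, 'web-2': 1}
```

At scale, each step maps onto the talk's stack: Spark reads the compressed raw files, the flattening becomes a DataFrame transformation, and the clean output is written as Parquet rather than CSV.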
- Let's talk Kafka Streams
Look Ma, No Code! Building Streaming Data Pipelines with Apache Kafka - Robin Moffatt, Technology Evangelist, Confluent

Companies new and old are all recognising the importance of a low-latency, scalable, fault-tolerant data backbone in the form of the Apache Kafka streaming platform. With Kafka, developers can integrate multiple sources and systems, which enables low-latency analytics, event-driven architectures and the population of multiple downstream systems. These data pipelines can be built using configuration alone. In this talk, we'll see how easy it is to stream data from a database such as Oracle into Kafka using the Kafka Connect API. In addition, we'll use KSQL to filter and aggregate the data and join it to other data, and then stream this from Kafka out into multiple targets such as Elasticsearch and MySQL. All of this can be accomplished without a single line of code! Why should Java geeks have all the fun?

Highly Scalable Machine Learning in Real Time with Apache Kafka’s Streams API - Kai Wähner, Technology Evangelist, Confluent

Intelligent real-time applications are a game changer in any industry. This session explains how companies from different industries build intelligent real-time applications. The first part explains how to build analytic models with R, Python or Scala, leveraging open source machine learning and deep learning frameworks such as TensorFlow or H2O. The second part discusses the deployment of these analytic models to your own applications or microservices by leveraging the Apache Kafka cluster and Kafka’s Streams API, instead of setting up a new, complex stream processing cluster. The session focuses on live demos and teaches lessons learned for executing analytic models in a highly scalable, mission-critical and performant way.

Free Kafka T-shirts and stickers will be available.
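Conceptually, the filter-and-aggregate step that KSQL performs over a Kafka topic can be mimicked in a few lines of plain Python. This is a sketch under stated assumptions: the events and field names are hypothetical, and no Kafka cluster or Confluent tooling is involved.

```python
from collections import Counter

# Stand-in for messages arriving on a Kafka topic (hypothetical events).
events = [
    {"user": "ben", "action": "purchase"},
    {"user": "ana", "action": "login"},
    {"user": "ana", "action": "purchase"},
    {"user": "ana", "action": "logout"},
]

# Filter: keep only purchase events
# (roughly what KSQL expresses as SELECT ... WHERE action = 'purchase').
purchases = (e for e in events if e["action"] == "purchase")

# Aggregate: purchases per user (roughly ... GROUP BY user).
purchases_per_user = Counter(e["user"] for e in purchases)

print(purchases_per_user)  # Counter({'ben': 1, 'ana': 1})
```

The difference in the real pipeline is that KSQL applies these operations continuously to an unbounded stream and maintains the aggregate as new events arrive, rather than running once over a finite list.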