Moving data between systems can require many steps: copying it, moving it from an on-premises location into the cloud, reformatting it, or joining it with other data sources. Each of these steps must happen, and each usually requires separate software.
A data pipeline is the sum of all these steps, and its job is to ensure that every step happens reliably for all of the data.
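The steps above can be sketched as a chain of small functions, where the pipeline is just their composition applied to every record. This is a toy illustration only; the stage names and the lower-case-keys rule are assumptions made up for the example, not part of the course.

```python
def extract(source):
    # "Copy" step: pull raw rows out of the source system.
    return list(source)

def reformat(rows):
    # Normalize each row; here, lower-casing keys stands in for
    # whatever real reformatting a pipeline stage would do.
    return [{k.lower(): v for k, v in row.items()} for row in rows]

def join_with(rows, lookup, key):
    # Enrich each row with fields from another data source.
    return [{**row, **lookup.get(row[key], {})} for row in rows]

def pipeline(source, lookup):
    # The pipeline is the composition of the individual steps.
    return join_with(reformat(extract(source)), lookup, "id")
```

For example, `pipeline([{"ID": 1}], {1: {"plan": "pro"}})` copies the row, normalizes the key, and joins in the `plan` field from the second source.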
ABOUT THIS COURSE
In this mini course you will learn how to use AWS DynamoDB, Kinesis, and Lambda to drive a highly event-driven data pipeline. While technologies like Hadoop and Spark exist, you will learn how a select combination of AWS services can provide a highly scalable, dependable, and responsive data pipeline for your application.
Together we will learn how to manage AWS Lambda functions, and a complete AWS environment configuration, with Terraform.
You will learn:
- How to deploy and version AWS Lambda functions with IaC (infrastructure-as-code) patterns.
- AWS Lambda error handling
- DynamoDB autoscaling
- And much more
This event is sponsored by Motor Trend