• What we'll do
Most companies face an ever-growing big data problem: how can IT professionals help business lines gather and process data from various sources? There are two schools of thought when dealing with big data.

Schema on write is represented by the traditional relational database. Raw data is ingested by an extract, transform and load (ETL) process and stored in tables that enforce integrity and allow for quick retrieval. Only a small portion of the total data owned by the company resides in the database.

Schema on read is represented by technologies such as Hadoop or PolyBase. These technologies assume that data integrity was applied when the text files were generated; the actual table definition is applied during the read operation. All the data owned by the company can reside in simple storage.
1 - Saving files to Azure Blob Storage
2 - Bulk inserting data into an Azure SQL Database
3 - Using PolyBase to load an Azure SQL Data Warehouse
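The schema-on-write versus schema-on-read split described above can be sketched in a few lines of plain Python. This is an illustrative in-memory example, not code from the session: the sample data and function names are hypothetical. Schema on write rejects bad rows at load time, so the stored table is already typed; schema on read keeps the raw text and applies types only when a query runs, surfacing malformed values as NULLs, much as PolyBase does over text files.

```python
import csv
import io

# Hypothetical raw text file: row 2 violates the intended schema.
RAW = "id,amount\n1,9.99\n2,not-a-number\n3,4.50\n"

def load_schema_on_write(raw):
    """Validate and convert while loading; only typed rows are stored."""
    table = []
    for row in csv.DictReader(io.StringIO(raw)):
        try:
            table.append({"id": int(row["id"]), "amount": float(row["amount"])})
        except ValueError:
            pass  # a real ETL process would route this row to an error file
    return table

def query_schema_on_read(raw):
    """Store raw text as-is; apply the schema only when the query runs."""
    results = []
    for row in csv.DictReader(io.StringIO(raw)):
        try:
            results.append(float(row["amount"]))
        except ValueError:
            results.append(None)  # NULL for rows the schema cannot parse
    return results
```

With the sample data, the schema-on-write load stores two clean rows, while the schema-on-read query returns all three rows with a NULL where the conversion fails.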
John Miner has over twenty-five years of data processing experience, and his architecture expertise encompasses all phases of the software project life cycle, including design, development, implementation, and maintenance of systems.
His credentials include undergraduate and graduate degrees in Computer Science from the University of Rhode Island. Before joining Microsoft, John won the Data Platform MVP award in 2014 and 2015 for his outstanding contributions to the SQL Server community.
• What to bring
• Important to know