Next Meetup

What's your algorithm doing and why? AND Visibility Graphs for Time Series
Talks

Talk 1: Multiscale Visibility Graphs for Time Series Classification, by Daoyuan Li (李道远)

Abstract: This talk presents a multiscale visibility graph representation for time series, along with feature extraction methods for time series classification (TSC). Unlike traditional TSC approaches that seek global similarities in time series databases (e.g., Nearest Neighbor with Dynamic Time Warping distance) or methods specializing in locating local patterns/subsequences (e.g., shapelets), we extract solely statistical features from graphs generated from the time series. Specifically, we augment time series with their multiscale approximations, which are further transformed into a set of visibility graphs. After extracting probability distributions of small motifs, density, assortativity, etc., these features are used to build highly accurate classification models with generic classifiers (e.g., Support Vector Machine and eXtreme Gradient Boosting). Thanks to the way we transform time series into graphs and extract features from them, we are able to capture both global and local features from time series. Based on extensive experiments on a large number of open datasets and comparison with five state-of-the-art TSC algorithms, our approach is shown to be both accurate and efficient: it is more accurate than Learning Shapelets and at the same time faster than Fast Shapelets. (A small illustrative sketch of the visibility graph construction appears after the talk descriptions below.)

Speaker bio: Daoyuan Li is a research associate at the Interdisciplinary Centre for Security, Reliability and Trust (SnT). He received his PhD degree in computer science from the University of Luxembourg. He is an entrepreneur, researcher and engineer who has worked in various domains, including networking protocols, IoT, full-stack software, machine learning, and data mining of text, time series and financial data.

Talk 2: Interpretable and Explainable AI: Just a Pipe Dream?, by Christian Hammerschmidt

Abstract: While the performance of machine learning systems has reached impressive new heights (mostly thanks to deep learning and more computing power), our understanding of machine-learned models and their decisions has not kept pace with these developments. In large part, we still struggle to understand how our models work and why they predict or classify each sample the way they do. At the same time, regulatory requirements demand more from anyone who wants to automate decision making based on data. A common example is the European Union's "Right to Explanation" under the GDPR, enabling consumers "to obtain an explanation of a decision reached" without specifying what such an explanation actually looks like. Likewise, the desire for fairness, accountability, and transparency in automated decision making requires a solid grasp of the decision-making process, for example by being able to interpret or explain the machine learning system and its decisions. In my talk, I will briefly discuss different motivations for and concepts of "interpretability" and "explainability" of ML systems. I will showcase some recent work towards a better understanding of machine learning systems, outline its relevance for data science, and conclude with some open problems we still face.

Bio: Christian Hammerschmidt is a research associate at SnT, working on problems in data science and machine learning. Find out more about him at http://chrishammerschmidt.de
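For the curious, below is a minimal sketch of the natural visibility graph construction that Talk 1 builds on, together with two of the graph features named in the abstract (density and degree assortativity). The function name and the naive O(n^2) construction are our own illustration, not the speaker's implementation; the multiscale approximation step and motif distributions from the abstract are omitted here.

```python
import networkx as nx
import numpy as np

def natural_visibility_graph(series):
    """Build a natural visibility graph: samples (a, y_a) and (b, y_b)
    are linked when every intermediate sample lies strictly below the
    straight line joining them."""
    n = len(series)
    G = nx.Graph()
    G.add_nodes_from(range(n))
    for a in range(n):
        for b in range(a + 1, n):
            ya, yb = series[a], series[b]
            # Visibility criterion: y_c < y_b + (y_a - y_b) * (b - c) / (b - a)
            # must hold for every c strictly between a and b.
            if all(series[c] < yb + (ya - yb) * (b - c) / (b - a)
                   for c in range(a + 1, b)):
                G.add_edge(a, b)
    return G

# Toy usage: a noisy sine wave, then two of the statistical graph
# features mentioned in the abstract.
rng = np.random.default_rng(0)
ts = np.sin(np.linspace(0, 4 * np.pi, 100)) + 0.1 * rng.standard_normal(100)
G = natural_visibility_graph(ts)
print("density:", nx.density(G))
print("assortativity:", nx.degree_assortativity_coefficient(G))
```

In the approach the abstract describes, such per-series feature vectors would then feed a generic classifier (e.g., an SVM or gradient-boosted trees), with the multiscale step augmenting each series with coarser approximations before graph construction.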

SnT - University of Luxembourg

29, Avenue J.F. Kennedy · Luxembourg

What we're about

We are passionate about Data Science, Machine Learning and Big Data, and we are building a community of Data Scientists in Luxembourg.
