What we're about

Welcome to the first official tinyML UK meetup group.

We will be building on the success of the tinyML meetup groups in the US and around the world, offering members of the machine learning and embedded software communities a place to share ideas and network.

• What is the purpose of the group?
To spread the word and educate the industry about "tinyML", broadly defined as machine learning architectures, devices, techniques, tools, and approaches capable of performing on-device analytics for a variety of sensing modalities (vision, audio, motion, environmental, human health monitoring, etc.) in the milliwatt power range or below, targeting predominantly battery-operated devices. The tinyML meetup group is an informal monthly gathering of researchers and practitioners working on various aspects of machine learning technologies (hardware, algorithms/networks, software, and applications) in the extreme low-power regime, held to share the latest developments in this fast-growing field and to promote collaboration throughout the ecosystem. The format will be presentations with Q&A, followed by networking.

• Who should join?
Experts in machine learning technologies at the edge, especially in the low-power, battery-operated regime. This includes hardware architects, software engineers, systems engineers, ASIC designers, algorithm and application developers, low-power sensor providers, and end users. Newbies, i.e. people interested in entering this field and getting up to speed by listening to state-of-the-art presentations and interacting with established players, are also very welcome to join, from both industry and academia.

• What will you do at your events?
Communicate the "latest and greatest" in tinyML to attendees through a presentation from a tinyML expert from industry or academia, followed by interaction with members of the tinyML community.

Upcoming events (1)

tinyML Talks by Rehan Hafiz from Information Technology University

Network event

Announcing tinyML Talks on November 16th, 2021

IMPORTANT: Please register here

Once registered, you will receive a link and dial-in information for the teleconference by email, which you can also add to your calendar.

7:00 AM - 8:00 AM Pacific Standard Time (PST)
Rehan Hafiz, Professor, Faculty of Engineering, Information Technology University (ITU)
"SuperSlash: Unifying Design Space Exploration and Model Compression methodology for design of deep learning accelerators for TinyML"

Deploying Deep Learning (DL) models on resource-constrained embedded devices is a challenging task. The limited on-chip memory on such devices results in increased off-chip memory access volume, thus limiting the size of DL models that can be efficiently realized in such systems. Sophisticated Design Space Exploration (DSE) schemes have been developed in the past to reduce the off-chip memory access volume. However, DSE alone cannot reduce the number of off-chip memory accesses beyond a certain point due to the fixed model size. Model compression via pruning can be employed to reduce the size of the model and the associated off-chip memory accesses. However, we found that pruned models with even the same accuracy and model size may require a different number of off-chip memory accesses depending on the pruning strategy adopted. Furthermore, classical pruning schemes are not guided by the goals of DSE.

In this talk, we discuss SuperSlash, a unified solution for DSE and model compression. SuperSlash estimates the off-chip memory access volume overhead of each layer of a deep learning model by exploring multiple design candidates. In particular, it evaluates multiple data reuse strategies for each layer, along with the possibility of layer fusion. Layer fusion aims to reduce the off-chip memory access volume by avoiding the intermediate off-chip storage of a layer's output and directly using it for processing of the subsequent layer. SuperSlash then guides the pruning process via a ranking function, which ranks each layer according to its explored off-chip memory access cost. The talk will thus present a technique to jointly perform pruning and DSE in order to fit large DNN models on accelerators with low computational resources.
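To make the ranking idea concrete, here is a minimal illustrative sketch, not the actual SuperSlash implementation: it uses a deliberately naive, hypothetical cost model (layer sizes, the on-chip capacity, and the tiling scheme are all made up for illustration) to estimate per-layer off-chip traffic and rank layers so pruning can target the most expensive ones first.

```python
# Illustrative sketch only: rank layers by an estimated off-chip memory
# access cost so pruning can prioritize the costliest layers. The cost
# model, layer sizes, and names below are hypothetical, not the paper's.

def offchip_access_cost(layer, on_chip_bytes=256 * 1024):
    """Estimate off-chip traffic (bytes) for one layer under a naive scheme.

    If weights and activations fit on-chip together, each is transferred
    once; otherwise activations are processed in tiles and the weights
    are re-fetched once per tile.
    """
    weights = layer["weight_bytes"]
    acts = layer["activation_bytes"]
    if weights + acts <= on_chip_bytes:
        return weights + acts            # full on-chip reuse
    tiles = -(-acts // on_chip_bytes)    # ceil division: number of tiles
    return weights * tiles + acts        # weights re-fetched per tile

def rank_layers_for_pruning(layers):
    """Return layers ordered by descending estimated off-chip cost."""
    return sorted(layers, key=offchip_access_cost, reverse=True)

# Hypothetical three-layer model (all sizes in bytes).
model = [
    {"name": "conv1", "weight_bytes": 16 * 1024, "activation_bytes": 512 * 1024},
    {"name": "conv2", "weight_bytes": 512 * 1024, "activation_bytes": 128 * 1024},
    {"name": "fc",    "weight_bytes": 1024 * 1024, "activation_bytes": 4 * 1024},
]

ranking = [layer["name"] for layer in rank_layers_for_pruning(model)]
print(ranking)  # layers in pruning-priority order
```

In this toy setting the weight-heavy fully connected layer dominates off-chip traffic and is ranked first; the real system additionally explores data reuse strategies and layer fusion per candidate before ranking.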

Ahmad, H., Arif, T., Hanif, M. A., Hafiz, R., & Shafique, M. (2020). SuperSlash: A unified design space exploration and model compression methodology for design of deep learning accelerators with reduced off-chip memory access volume. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 39(11),[masked].

Rehan Hafiz received his Ph.D. degree in Electrical Engineering from the University of Manchester, United Kingdom, in 2008. He is currently with Information Technology University (ITU), Lahore, as a Professor in the Faculty of Engineering. He founded and directs the Vision Processing Lab (VISpro), which focuses on areas such as vision system design, approximate computing, design of application-specific hardware accelerators, deep learning, FPGA-based design, and applied image and video processing. Apart from several publications in these areas, he holds multiple patents in the US, South Korean, and Pakistani patent offices.

We encourage you to register early, since online broadcast capacity may be limited.

Note: tinyML Talks slides and videos will be available afterwards on the tinyML website and the tinyML YouTube channel for those who missed the live session. Please take a moment to subscribe to the YouTube channel today: https://www.youtube.com/tinyML?sub_confirmation=1


Past events (78)

tinyML Talks by Jan Jongboom from Edge Impulse

Online event
