|10:00 - 10:15||Opening by organizers (prof.dr. Henri Bal and dr. Lin Wang)|
|10:15 - 11:00||
Keynote by prof.dr. Koen Langendoen (TU Delft)
Title: Continuous sensing with intermittent power; look ma, no edge computing needed! [slides]
Abstract: In this talk I will present some recent work on green sensing on energy harvesting devices, which are prone to frequent power outages. To minimize dependency on external infrastructure, multiple autonomous devices coordinate their actions to form an ensemble capable of continuously executing a sensing task despite their individual intermittent operation. The feasibility of this approach was shown with a pilot application for voice-command recognition.
Bio: Prof.dr. Koen Langendoen received a Ph.D. in computer science from the Universiteit van Amsterdam in 1993. He subsequently worked as a postdoctoral researcher at the Vrije Universiteit, Amsterdam (1993-1997) and Delft University of Technology (1998-2000) before joining Delft as a member of academic staff in 2001. Since 2008 he has been a full professor of computer science at Delft University of Technology, where he holds the chair on Embedded and Networked Systems, managing a group of about 15 research staff (including 5 assistant/associate professors and 10+ PhD students and postdocs) and 20+ MSc students. Prof. Langendoen has rich experience and an excellent track record in systems research, in particular wireless networking protocols. He has participated as principal and co-principal investigator in numerous national (Dutch) and EU research projects, including D2S2, COMMIT, RELATE, WISEBED, CONET, and RELYonIT. He shares his expertise with industry by giving seminars and master classes at companies such as Alten, CapGemini, and Nyenrode Business School.
|11:00 - 11:10||Break|
|Session 1 - Edge and IoT|
|11:10 - 11:40||
Spike-based neuromorphic computing for the edge [slides]
dr. Federico Corradi (imec)
|11:40 - 12:10||
Video stream analytics at the edge in collaboration with the cloud(-let) [slides]
Vinod Nigade (VU Amsterdam)
|12:10 - 12:30||Lunch break|
|Session 2 - Edge AI|
|12:30 - 13:00||
Machine learning for non-linear signal processing in communications [slides]
dr. Alexios Balatsoukas-Stimming (TU/e)
|13:00 - 13:30||
Edge computing for Deep Learning – a practical perspective [slides]
dr. Klamer Schutte (TNO)
|13:30 - 13:40||Break|
|Session 3 - Edge Platform (Chair: dr. Alexios Balatsoukas-Stimming)|
|13:40 - 14:10||
Magnetic resonance AI Edge computing for workflow and quality improvement [slides]
dr. Henkjan Huisman (Radboud UMC)
|14:10 - 14:40||
Towards an “operating system” for edge computing [slides]
dr. Lin Wang (VU Amsterdam)
|14:40 - 15:00||
From security camera to secure, smart IoT edge sensor [slides]
André Bos (Bosch Security & Safety Things)
|15:00 - 15:05||Closing|
Title: Spike-based neuromorphic computing for the edge
Abstract: The development of brain-inspired neuromorphic computing architectures as a paradigm for Artificial Intelligence (AI) at the edge is a candidate solution that can meet the strict energy and cost constraints of Internet of Things (IoT) application areas. Towards this goal, at IMEC, we are architecting, developing, and testing fully event-driven, clockless architectures with co-located memory and processing that exploit event-based processing to reduce the overall energy consumption of an always-on system (µW dynamic operation). Our architectures take advantage of the high integration offered by Complementary Metal Oxide Semiconductor (CMOS) technologies. Neuromorphic devices are ideal for re-trainable sensor ICs: they can perform various signal processing tasks such as data preprocessing, dimensionality reduction, feature selection, and application-specific inference. This talk will present two instantiations of our neuromorphic spike-based architecture, called µBrain, benchmarked in two edge applications. First, I will showcase ultra-low-power (<100 µW) radar-based gesture recognition. Second, I will show our recent effort on ultra-low-power (<40 µW) electrocardiogram anomaly detection. µBrain is tiny, fully digital, spike-based, parallel, and non-Von-Neumann. It enables always-on neuromorphic computing in IoT sensory nodes that must run on battery power for years.
Bio: Dr. Federico Corradi received his Ph.D. degree in Neuroinformatics from the University of Zurich in 2015 and an international Ph.D. from the ETH Neuroscience Center Zurich, also in 2015. His research activities are at the interface of neuroscience and neuromorphic engineering, focusing on a new generation of computing technologies for IoT devices based on bio-inspired neural signal processing. From 2015 to 2018 he was with a neuromorphic start-up (IniLabs, now IniVation), working on event-based cameras and neuromorphic processors, and in 2018 he was a postgraduate at the Institute of Neuroinformatics. In 2018 he joined IMEC, the Netherlands, where he is a Senior Research Scientist leading the development of ultra-low-power neuromorphic IC design. His research focuses on energy-efficient neural network implementations for IoT and healthcare applications. He serves on the technical program committees of several machine learning and neuromorphic symposia and conferences (ICONS, DSD, EUROMICRO).
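The µBrain abstract above rests on spike-based neurons that consume dynamic power only when events occur. As a rough illustration (not IMEC's actual design), a minimal leaky integrate-and-fire neuron in Python shows why a silent input costs almost nothing:

```python
def lif_neuron(input_current, v_thresh=1.0, v_reset=0.0, leak=0.9):
    """Simulate a leaky integrate-and-fire (LIF) neuron over discrete steps.

    The membrane potential leaks toward zero each step, integrates the
    incoming current, and emits a spike (1) when it crosses the threshold,
    after which it is reset. Parameter values here are illustrative.
    """
    v = 0.0
    spikes = []
    for i_t in input_current:
        v = leak * v + i_t          # leaky integration of the input current
        if v >= v_thresh:           # threshold crossing -> emit a spike
            spikes.append(1)
            v = v_reset             # reset the membrane potential
        else:
            spikes.append(0)
    return spikes

# A constant drive produces a regular spike train; no input produces no
# spikes, which is why event-driven hardware idles at near-zero dynamic power.
print(lif_neuron([0.4] * 10))  # → [0, 0, 1, 0, 0, 1, 0, 0, 1, 0]
```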
Title: Video stream analytics at the edge in collaboration with the cloud(-let)
Abstract: The increased accuracy of DNN-based video analytics comes at the cost of increased computational complexity due to deep and complex architectures, which may hinder real-time objectives, for example processing 30 FPS. The question we therefore try to answer is: 'how to design a system that supports real-time and online video analytics?' More specifically, in our first paper we addressed the question 'how and where to deploy deep learning models to identify human actions in real time on live video streams?' We proposed a hybrid system that leverages the benefits of both worlds (fast response and high accuracy) by deploying a smaller model on the edge and a bigger model in the cloud. Currently, we are working on designing an edge analytics system in a multi-client setting that guarantees latency SLOs without degrading analytics accuracy when DNN models are accessed over a variable communication network.
Bio: I am Vinod Nigade, a Ph.D. student at VU Amsterdam working under the guidance of Prof. Henri Bal and Lin Wang. I am a part of the EDL video analytics project together with Schiphol airport. My research focus is on the design and implementation of efficient systems/services for video stream analytics, especially at the edge of the network.
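The hybrid edge/cloud design described in the abstract above can be sketched as confidence-gated offloading: serve each frame from the small on-device model and fall back to the large cloud model only when the edge model is unsure. The threshold, model stubs, and routing policy below are illustrative assumptions, not the actual system from the talk:

```python
from dataclasses import dataclass

@dataclass
class Prediction:
    label: str
    confidence: float

def small_edge_model(frame):
    # Stand-in for a compact on-device DNN: fast but less accurate.
    return Prediction(label="walking", confidence=0.62)

def large_cloud_model(frame):
    # Stand-in for a large server-side DNN: slower but more accurate.
    return Prediction(label="running", confidence=0.95)

def analyze(frame, confidence_threshold=0.8):
    """Answer from the edge when the edge model is confident enough;
    otherwise pay the network round-trip for the cloud model's answer.
    This keeps latency low on easy frames and accuracy high on hard ones."""
    pred = small_edge_model(frame)
    if pred.confidence >= confidence_threshold:
        return pred, "edge"
    return large_cloud_model(frame), "cloud"
```

Lowering `confidence_threshold` shifts more traffic to the edge (cheaper, less accurate); raising it shifts more frames to the cloud.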
Title: Machine learning for non-linear signal processing in communications
Abstract: The field of machine learning has seen tremendous advances in the past few years, largely due to the abundant (centralized) processing power and the availability of vast amounts of data that enable effective training of deep neural networks. The main motivation for using machine learning comes from the fact that in some areas, such as image recognition, constructing models that are elegant, tractable, and practically useful is nearly impossible. The prototypical edge computing field of signal processing for communications, however, is traditionally built on precise mathematical models that are well understood and have been shown to work exceptionally well for many practical applications. Unfortunately, the ever-increasing throughput and efficiency demands have forced communications systems designers to push the boundaries to such an extent that in many applications conventional mathematical models and signal processing techniques either have high implementation complexity or are no longer sufficient to accurately describe the encountered scenarios. This is where machine learning methods can come to the rescue, as they do not require rigid pre-defined models and can extract meaningful structure from data in order to provide useful practical results. In this talk, I will describe several applications of machine learning techniques for signal processing in communications. In particular, I will first talk about the suitability of neural networks for non-linear signal processing tasks in the context of self-interference cancellation for full-duplex communications as well as digital predistortion of power amplifier non-linearities. I will then explain the concept of deep unfolding and present its application to self-interference cancellation for full-duplex communications and to 1-bit precoding in massive MIMO systems.
Bio: Dr. Alexios Balatsoukas-Stimming is currently an Assistant Professor at the Eindhoven University of Technology in the Netherlands and an Adjunct Assistant Professor at Rice University in the USA. He received the Diploma and MSc degrees in Electronics and Computer Engineering from the Technical University of Crete, Chania, Greece, in 2010 and 2012, respectively, and a PhD in Computer and Communications Sciences from the École polytechnique fédérale de Lausanne (EPFL), Switzerland, in 2016. He then spent one year at the European Laboratory for Particle Physics (CERN) as a Marie Skłodowska-Curie postdoctoral fellow, was a postdoctoral researcher in the Telecommunications Circuits Laboratory at EPFL from 2018 to 2019, and was a visiting postdoctoral researcher at the University of California, Irvine and at Cornell University in 2018 and 2019, respectively. His research interests include VLSI circuits for communications, error-correction coding theory and practice, as well as applications of approximate computing and machine learning to signal processing for communications.
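As context for the self-interference cancellation problem in the abstract above, a minimal classical baseline fits a memoryless polynomial model of the non-linear interference by least squares and subtracts it from the received signal. The channel coefficients and noise level below are synthetic assumptions; the neural and deep-unfolding methods in the talk target richer non-linearities (e.g. with memory) that such simple models miss:

```python
import numpy as np

rng = np.random.default_rng(0)

# Transmitted baseband samples and a synthetic non-linear self-interference
# channel: one linear tap plus a cubic distortion term (power amplifier),
# plus a small amount of receiver noise.
x = rng.standard_normal(1000)
received = 0.8 * x + 0.1 * x**3 + 0.01 * rng.standard_normal(1000)

# Least-squares fit of a memoryless polynomial canceller: since the
# transmitted samples x are known at the receiver, regress the received
# signal on the basis functions [x, x^3] and subtract the reconstruction.
A = np.stack([x, x**3], axis=1)
coeffs, *_ = np.linalg.lstsq(A, received, rcond=None)
cancelled = received - A @ coeffs

# Residual power relative to the received self-interference power; only
# the receiver noise floor should remain after cancellation.
print("residual power (dB):",
      10 * np.log10(np.mean(cancelled**2) / np.mean(received**2)))
```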
Title: Edge computing for Deep Learning – a practical perspective
Authors: Klamer Schutte, Nicolas Boehrer, Michel van Lier (TNO Intelligent Imaging)
Abstract: A challenge in Deep Learning applications is that, over the years, the growth rate of the number of pixels in cameras has outpaced the growth rate of available radio-based communication bandwidth. Especially for remote sensing platforms such as UAVs, this means the amount of image data generated is too large to transmit in full to a central location with sufficient processing power. To address this challenge, we consider edge processing on the platform carrying the sensor, allowing the processing to determine which data is important enough to transmit. At the same time, this lets the processing operate on uncompressed data, potentially offering better sensitivity than a central processing solution. In our domain of interest this processing includes object detection, tracking, and classification. Typically, the data selected for transmission includes tracks of interest, selected by location, behavior, and object class, as well as small crops of related image data. We will present our work on the security demonstrator of the CAVIAR project. It consists of a system embedding a 65 Mpix camera and processing on an NVIDIA Jetson AGX Xavier, fitting the foreseen SWaP-C requirements of a quadcopter drone. Our work in CAVIAR includes pre-filtering, object detection, and video encoding.
Bio: Klamer Schutte studied physics at UvA and graduated at NIKHEF-H in 1989. In 1994 he received his PhD from the University of Twente for his thesis 'Knowledge-based recognition of man-made objects'. After a stay as a post-doc in the Pattern Recognition group at Delft University of Technology, he started at TNO in 1996. His current position is Lead Scientist in the TNO Intelligent Imaging department. Within EDL he is chairman of the Industrial Advisory Board.
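The pre-filtering step described in the abstract above (keep only detections worth the constrained radio link, then transmit small crops instead of full 65-Mpix frames) can be sketched as a selection function. The detection format, class list, and score threshold here are illustrative assumptions, not the CAVIAR implementation:

```python
def select_for_transmission(detections,
                            classes_of_interest=frozenset({"person", "vehicle"}),
                            min_score=0.5):
    """Keep only detections worth sending over the bandwidth-limited link.

    Each detection is a dict with 'cls', 'score', and 'bbox' (x, y, w, h);
    the bbox can later be used to crop the corresponding image region so
    that only small crops, not the full frame, are transmitted.
    """
    return [d for d in detections
            if d["cls"] in classes_of_interest and d["score"] >= min_score]

detections = [
    {"cls": "person",  "score": 0.91, "bbox": (10, 20, 64, 128)},
    {"cls": "bird",    "score": 0.88, "bbox": (300, 40, 32, 32)},
    {"cls": "vehicle", "score": 0.35, "bbox": (500, 200, 120, 80)},
]
# Only the high-confidence detection of an interesting class survives.
print(select_for_transmission(detections))
```

In a full system this filter would sit between the on-platform detector/tracker and the video encoder, so bandwidth is spent only on tracks of interest.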
Title: Magnetic resonance AI Edge computing for workflow and quality improvement
Abstract: MRI allows excellent soft tissue contrast but can be slow and prone to artefacts. Integrated AI can help improve speed and image quality. Example applications are automatic image interpretation that drives the MRI to visually track a catheter tip in real-time for fast and more accurate positioning; and real-time image quality scoring to allow an operator to redo a scan sequence for improved quality.
Bio: Henkjan Huisman is associate professor of AI for medical imaging at the Radboud University Medical Center, The Netherlands. He has over 30 years of experience in scientific research, prototyping, and clinical validation of medical imaging AI. His research team explores and uses AI to better understand disease and to improve diagnosis and therapy in the field of abdominal ultrasound and MRI.
Title: Towards an “operating system” for edge computing
Abstract: Edge computing aims to provide better support for real-time applications, such as mobile augmented reality and video stream analytics, in close proximity to end devices by deploying computing resources at the edge of the network. Although many efforts have been made to bring various applications to edge computing, most solutions are domain-specific and closely tied to the target application. In this talk, I will argue for the need for a general-purpose edge computing platform that provides the necessary support for all potential edge applications. I will highlight the challenges from the perspective of an edge “operating system”: programming environments and resource management, and discuss some of the work we have been doing in these directions.
Bio: Lin Wang is an Assistant Professor at VU Amsterdam, The Netherlands, and an Adjunct Professor at TU Darmstadt, Germany. Before joining VU Amsterdam, he was an Athene Young Investigator at TU Darmstadt. He obtained his PhD in Computer Science from the Institute of Computing Technology, Chinese Academy of Sciences, and held positions at IMDEA Networks Institute, Spain, and SnT, Luxembourg. His research focuses on programming abstractions and resource management for modern distributed systems, including edge AI inference serving systems, in-network computing, and cyber-physical systems.
Title: From security camera to secure, smart IoT edge sensor
Abstract: One trend in the security landscape is the increasing use of artificial intelligence and machine learning, visible, among other things, in the use of AI in security cameras. Partly due to COVID-19, this is now taking on an extra dimension: cameras are used to count how many people are present in a certain area and to check whether mandatory protective equipment is being used. The use of contactless applications will also increase through AI in cameras. To apply this in a usable and scalable way in the future, powerful edge devices are needed in addition to an open platform. After all, open platforms help us innovate more quickly and make these solutions available to the general public, while powerful edge devices ensure correct and rapid analysis of situations. The camera as an edge device provides this in a data-safe manner, in accordance with the applicable GDPR rules. In short, the security camera grows into a safe and smart IoT edge device.
Bio: André Bos is a senior business development manager at Security & Safety Things. Working from the Eindhoven location, with headquarters in Munich, André has been with Security & Safety Things as International Business Development Manager since January 2020. He has 33 years of experience selling technical hardware solutions in the B2B environment, of which more than 20 years in security, with a strong interest in IoT and everything called 'smart'. He has successfully managed sales teams in the Netherlands and driven business with existing and new solutions, establishing and expanding new distribution channels and partner networks.
© 2021 EDL Edge Workshop. All rights reserved.