2020 International Conference On Computer Aided Design

The Premier Conference Devoted to Technical Innovations in Electronic Design Automation

November 2-5, 2020 | VIRTUAL CONFERENCE

THURSDAY November 05, 8:15am - 4:30pm | Slot 4
EVENT TYPE: WORKSHOP
SESSION 4W
Workshop on Hardware and Algorithms for Learning On-a-chip (HALO) 2020

Speakers:
Mike Davies - Intel Corp.
Nathan McDonald - Air Force Research Lab
Hai (Helen) Li - Duke Univ.
Travis DeWolf - Applied Brain Research
Deming Chen - Univ. of Illinois at Urbana-Champaign
Yiyu Shi - Univ. of Notre Dame
Yanzhi Wang - Northeastern Univ.
Eriko Nurvitadhi - Intel Corp.
Priya Panda - Yale Univ.
Organizers:
Qinru Qiu - Syracuse Univ., Syracuse, NY
Yingyan Lin - Rice Univ.
Chenchen Liu - Univ. of Maryland

In recent years, machine/deep learning algorithms have achieved unprecedented accuracy on practical recognition and classification tasks, in some cases surpassing human-level performance. While significant progress has been made on accelerating these models for real-time inference on edge and mobile devices, training largely remains offline on the server side. State-of-the-art learning algorithms for deep neural networks (DNNs) impose significant challenges for hardware implementation in terms of computation, memory, and communication. This is especially true for edge devices and portable hardware applications, such as smartphones, machine translation devices, and smart wearable devices, where severe constraints exist on performance, power, and area.

There is a timely need to map the latest complex learning algorithms to custom hardware in order to achieve orders-of-magnitude improvements in performance, energy efficiency, and compactness. Exemplary efforts from industry and academia include many application-specific hardware designs (e.g., xPU, FPGA, ASIC). Recent progress in computational neuroscience and nanoelectronic technology, such as emerging memory devices, will further shed light on future hardware-software platforms for learning on-a-chip. At the same time, new learning algorithms need to be developed to fully exploit the potential of these hardware architectures.

The overarching goal of this workshop is to explore the potential of on-chip machine learning, to reveal emerging algorithms and design needs, and to promote novel applications of learning. It aims to establish a forum to discuss current practices, as well as future research needs, in the aforementioned fields.

Key Topics

  • Synaptic plasticity and neuron motifs of learning dynamics
  • Computation models of cortical activities
  • Sparse learning, feature extraction and personalization
  • Deep learning with high speed and high power efficiency
  • Hardware acceleration for machine learning
  • Hardware emulation of the brain
  • Nanoelectronic devices and architectures for neuro-computing
  • Applications of learning on a smart mobile platform

View full details of the workshop at: https://iccad-halo.github.io/