CS 201: Rethinking Hardware Accelerators for Deep Neural Networks, GEORGE A. CONSTANTINIDES, Imperial College London

Speaker: George A. Constantinides
Affiliation: Imperial College London

ABSTRACT:

We will explore the idea of efficiency in hardware accelerators for deep neural networks. In these accelerators, the directed graph describing a neural network can be implemented as a directed graph describing a Boolean circuit. We make this observation precise, leading naturally to an understanding of practical neural networks as discrete functions, and show that so-called binarised neural networks are functionally complete. In general, our results suggest that it is valuable to consider Boolean circuits as neural networks, leading to the question of which circuit topologies are promising. We argue that continuity is central to generalisation in learning, explore the interaction between data coding, network topology, and node functionality for continuity, and pose some open questions. As a first step towards bridging the gap between continuous and Boolean views of neural network accelerators, we present some recent results from our work on LUTNet, a novel field-programmable gate array (FPGA) inference approach.
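
As background for the abstract above: a binarised neuron is conventionally computed in XNOR-popcount form, which makes it exactly a Boolean function of its inputs. The minimal Python sketch below illustrates this view of a neural network node as a discrete function; it follows the general binarised-neural-network literature rather than any specific construction from the talk, and the function name, weights, and threshold are illustrative only.

    from itertools import product

    def binarised_neuron(x_bits, w_bits, threshold):
        """Fire (output 1) iff popcount(XNOR(x, w)) >= threshold.

        With {-1, +1} values encoded as {0, 1} bits, XNOR of an input bit
        and a weight bit computes their product, and popcount computes the
        sum, so this is sign(w . x) up to the choice of threshold.
        """
        matches = sum(1 for x, w in zip(x_bits, w_bits) if x == w)  # popcount of XNOR
        return 1 if matches >= threshold else 0

    # Enumerating all inputs gives a truth table, i.e. a Boolean function
    # that could be realised directly as a circuit (or an FPGA LUT).
    w = (1, 0, 1)  # illustrative fixed binarised weights
    for x in product((0, 1), repeat=3):
        print(x, "->", binarised_neuron(x, w, threshold=2))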

BIO:

Professor George A. Constantinides holds the Royal Academy of Engineering / Imagination Technologies Chair in Digital Computation at Imperial College London, where he leads the Circuits and Systems research group. He has been a member of staff at Imperial since 2001. Over this time he has proudly supervised 25 PhD students to graduation and has chaired the FPGA, FPT, and FPL conferences.

Hosted by Professor Jason Cong

Date/Time:
Nov 07, 2019
4:15 pm - 5:45 pm

Location:
3400 Boelter Hall
420 Westwood Plaza, Los Angeles, California 90095