CS 201: Transformers for Neural Machine Translation and Beyond, DAVID KAVALER, Sandia National Labs

Speaker: David Kavaler
Affiliation: Sandia National Labs


ABSTRACT:

In this talk, I will present a brief overview of neural sequence models (also known as sequence-to-sequence models), which provide a general method for modeling input and output sequences. The ability to model sequences is widely desired across myriad domains, from biology to natural language to procedural animation. I will describe one such model, the transformer, in detail, with a discussion of common pitfalls encountered in training. Finally, I will discuss an example application of these models at Sandia: analyzing source code.

BIO:

David Kavaler is a Senior Member of the Technical Staff in the Data Science and Cyber Analytics department at Sandia National Laboratories in Livermore, California. His current research interests include natural language processing, specifically neural machine translation, and general applications of deep learning methods in cybersecurity. David received his Ph.D. in Computer Science from UC Davis in 2018, with a focus on applying various data analysis techniques to mixed sets of software engineering data.

Hosted by Professor Peter Reiher

Date/Time:
Oct 22, 2019, 4:15 pm - 5:45 pm

Location:
3400 Boelter Hall
420 Westwood Plaza, Los Angeles, California 90095