Speaker: Hongfei Cao
ABSTRACT: Training and deploying large-scale Machine Learning and Deep Learning models is often not straightforward for data scientists. Compared to traditional R/Python modeling, distributed model training and deployment often require alternative algorithm designs, additional coding, and substantial performance tuning. In this session, we will share our experiences building large-scale distributed Machine Learning models and serving them on the cloud and on-premise using the Big Data platform Spark MLlib and Google Cloud Platform.

BIO: Hongfei Cao is a lead specialist big data engineer and data scientist on the KPMG Lighthouse Data & Analytics team. He works on large-scale machine learning and data mining problems using scalable distributed systems, including Google Cloud and Hadoop/Spark. His interests include cognitive computing, distributed systems, and NoSQL databases. Hongfei holds a PhD in computer science in the area of Big Data analytics. He has extensive knowledge of the Big Data and Machine Learning domains and has designed and developed many advanced Big Data ETL and analytics applications for clients, mainly using Hadoop and Spark. In 2017, he presented deep learning work at the Google Cloud NEXT conference using TensorFlow and the Google Cloud Machine Learning platform, and obtained the Google Cloud Data Engineer certification. Hongfei is passionate about cutting-edge technologies, especially Big Data, Machine Learning, and Cloud Computing.
Hosted by Professor Carlo Zaniolo
Date(s) - Oct 31, 2017
4:15 pm - 5:45 pm