Thirty-sixth Conference on Neural Information Processing Systems

New Orleans, LA

Main Conference Papers

 

Benign Overfitting in Two-layer Convolutional Neural Networks
Yuan Cao*, Zixiang Chen*, Mikhail Belkin and Quanquan Gu, in Proc. of Advances in Neural Information Processing Systems (NeurIPS) 35, New Orleans, LA, USA, 2022. Oral Presentation [arXiv]

Computationally Efficient Horizon-Free Reinforcement Learning for Linear Mixture MDPs
Dongruo Zhou and Quanquan Gu, in Proc. of Advances in Neural Information Processing Systems (NeurIPS) 35, New Orleans, LA, USA, 2022. Oral Presentation [arXiv]

Risk Bounds of Multi-Pass SGD for Least Squares in the Interpolation Regime
Difan Zou*, Jingfeng Wu*, Vladimir Braverman, Quanquan Gu and Sham M. Kakade, in Proc. of Advances in Neural Information Processing Systems (NeurIPS) 35, New Orleans, LA, USA, 2022. [arXiv]

The Power and Limitation of Pretraining-Finetuning for Linear Regression under Covariate Shift
Jingfeng Wu*, Difan Zou*, Vladimir Braverman, Quanquan Gu and Sham M. Kakade, in Proc. of Advances in Neural Information Processing Systems (NeurIPS) 35, New Orleans, LA, USA, 2022. [arXiv]

Nearly Optimal Algorithms for Linear Contextual Bandits with Adversarial Corruptions
Jiafan He, Dongruo Zhou, Tong Zhang and Quanquan Gu, in Proc. of Advances in Neural Information Processing Systems (NeurIPS) 35, New Orleans, LA, USA, 2022. [arXiv]

Learning Two-Player Mixture Markov Games: Kernel Function Approximation and Correlated Equilibrium
Chris Junchi Li*, Dongruo Zhou*, Quanquan Gu and Michael I. Jordan, in Proc. of Advances in Neural Information Processing Systems (NeurIPS) 35, New Orleans, LA, USA, 2022. [arXiv]

A Simple and Provably Efficient Algorithm for Asynchronous Federated Contextual Linear Bandits
Jiafan He*, Tianhao Wang*, Yifei Min* and Quanquan Gu, in Proc. of Advances in Neural Information Processing Systems (NeurIPS) 35, New Orleans, LA, USA, 2022. [arXiv]

Towards Understanding the Mixture-of-Experts Layer in Deep Learning
Zixiang Chen, Yihe Deng, Yue Wu, Quanquan Gu and Yuanzhi Li, in Proc. of Advances in Neural Information Processing Systems (NeurIPS) 35, New Orleans, LA, USA, 2022. [arXiv]

Active Ranking without Strong Stochastic Transitivity
Hao Lou, Tao Jin, Yue Wu, Pan Xu, Quanquan Gu and Farzad Farnoud, in Proc. of Advances in Neural Information Processing Systems (NeurIPS) 35, New Orleans, LA, USA, 2022.

DC-BENCH: Dataset Condensation Benchmark
Justin Cui, Ruochen Wang, Si Si, Cho-Jui Hsieh, in Proc. of Advances in Neural Information Processing Systems (NeurIPS) 35, New Orleans, LA, USA, 2022.

End-to-End Learning to Index and Search in Large Output Spaces
Nilesh Gupta, Patrick Chen, Hsiang-Fu Yu, Cho-Jui Hsieh, Inderjit S Dhillon, in Proc. of Advances in Neural Information Processing Systems (NeurIPS) 35, New Orleans, LA, USA, 2022.

Random Sharpness-Aware Minimization
Yong Liu, Siqi Mai, Minhao Cheng, Xiangning Chen, Cho-Jui Hsieh, Yang You, in Proc. of Advances in Neural Information Processing Systems (NeurIPS) 35, New Orleans, LA, USA, 2022.

Are AlphaZero-like Agents Robust to Adversarial Perturbations?
Li-Cheng Lan, Huan Zhang, Ti-Rong Wu, Meng-Yu Tsai, I-Chen Wu, Cho-Jui Hsieh, in Proc. of Advances in Neural Information Processing Systems (NeurIPS) 35, New Orleans, LA, USA, 2022.

Efficient Non-Parametric Optimizer Search for Diverse Tasks
Ruochen Wang, Yuanhao Xiong, Minhao Cheng, Cho-Jui Hsieh, in Proc. of Advances in Neural Information Processing Systems (NeurIPS) 35, New Orleans, LA, USA, 2022.

Efficient Frameworks for Generalized Low-Rank Matrix Bandit Problems
Yue Kang, Cho-Jui Hsieh, Thomas Lee, in Proc. of Advances in Neural Information Processing Systems (NeurIPS) 35, New Orleans, LA, USA, 2022.

An Efficient Framework for Computing Tight Lipschitz Constants of Neural Networks
Zhouxing Shi, Yihan Wang, Huan Zhang, J Zico Kolter, Cho-Jui Hsieh, in Proc. of Advances in Neural Information Processing Systems (NeurIPS) 35, New Orleans, LA, USA, 2022.

Syndicated Bandits: A Framework for Auto Tuning Hyper-parameters in Contextual Bandit Algorithms
Qin Ding, Yue Kang, Yi-Wei Liu, Thomas Lee, Cho-Jui Hsieh, James Sharpnack, in Proc. of Advances in Neural Information Processing Systems (NeurIPS) 35, New Orleans, LA, USA, 2022.

General Cutting Planes for Bound-Propagation-Based Neural Network Verification
Huan Zhang, Shiqi Wang, Kaidi Xu, Linyi Li, Bo Li, Suman Jana, Cho-Jui Hsieh, J Zico Kolter, in Proc. of Advances in Neural Information Processing Systems (NeurIPS) 35, New Orleans, LA, USA, 2022.

Hardness of Noise-Free Learning for Two-Hidden-Layer Neural Networks
Sitan Chen, Aravind Gollakota, Adam Klivans, Raghu Meka, in Proc. of Advances in Neural Information Processing Systems (NeurIPS) 35, New Orleans, LA, USA, 2022. 

Sketching based Representations for Robust Image Classification with Provable Guarantees
Nishanth Dikkala, Sankeerth Rao Karingula, Raghu Meka, Jelani Nelson, Rina Panigrahy, Xin Wang, in Proc. of Advances in Neural Information Processing Systems (NeurIPS) 35, New Orleans, LA, USA, 2022. 

Lower Bounds on Randomly Preconditioned Lasso via Robust Sparse Designs
Jonathan Kelner, Frederic Koehler, Raghu Meka, Dhruv Rohatgi, in Proc. of Advances in Neural Information Processing Systems (NeurIPS) 35, New Orleans, LA, USA, 2022. 

Friendly Noise against Adversarial Noise: A Powerful Defense against Data Poisoning Attack
Tian Yu Liu, Yu Yang, Baharan Mirzasoleiman, in Proc. of Advances in Neural Information Processing Systems (NeurIPS) 35, New Orleans, LA, USA, 2022.

On Leave-One-Out Conditional Mutual Information For Generalization
Mohamad Rida Rammal, Alessandro Achille, Aditya Golatkar, Suhas Diggavi, Stefano Soatto, in Proc. of Advances in Neural Information Processing Systems (NeurIPS) 35, New Orleans, LA, USA, 2022. 

Semi-supervised Vision Transformers at Scale
Zhaowei Cai, Avinash Ravichandran, Paolo Favaro, Manchen Wang, Davide Modolo, Rahul Bhotika, Zhuowen Tu, Stefano Soatto, in Proc. of Advances in Neural Information Processing Systems (NeurIPS) 35, New Orleans, LA, USA, 2022. 

Improving Multi-Task Generalization via Regularizing Spurious Correlation
Ziniu Hu, Zhe Zhao, Xinyang Yi, Tiansheng Yao, Lichan Hong, Yizhou Sun, Ed Chi, in Proc. of Advances in Neural Information Processing Systems (NeurIPS) 35, New Orleans, LA, USA, 2022.

Sparse Probabilistic Circuits via Pruning and Growing
Meihua Dang, Anji Liu, Guy Van den Broeck, in Proc. of Advances in Neural Information Processing Systems (NeurIPS) 35, New Orleans, LA, USA, 2022. 

RankFeat: Rank-1 Feature Removal for Out-of-distribution Detection
Yue Song, Nicu Sebe, Wei Wang, in Proc. of Advances in Neural Information Processing Systems (NeurIPS) 35, New Orleans, LA, USA, 2022. 

Collaborative Learning by Detecting Collaboration Partners
Shu Ding, Wei Wang, in Proc. of Advances in Neural Information Processing Systems (NeurIPS) 35, New Orleans, LA, USA, 2022. 

CryptoGCN: Fast and Scalable Homomorphically Encrypted Graph Convolutional Network Inference
Ran Ran, Wei Wang, Quan Gang, Jieming Yin, Nuo Xu, Wujie Wen, in Proc. of Advances in Neural Information Processing Systems (NeurIPS) 35, New Orleans, LA, USA, 2022. 

Instability and Local Minima in GAN Training with Kernel Discriminators
Evan Becker, Parthe Pandit, Sundeep Rangan, Alyson Fletcher, in Proc. of Advances in Neural Information Processing Systems (NeurIPS) 35, New Orleans, LA, USA, 2022. 

Improving Transformer with an Admixture of Attention Heads
Tan Nguyen, Tam Nguyen, Hai Do, Khai Nguyen, Vishwanath Saragadam, Minh Pham, Khuong Duy Nguyen, Nhat Ho, Stanley Osher, in Proc. of Advances in Neural Information Processing Systems (NeurIPS) 35, New Orleans, LA, USA, 2022. 

FourierFormer: Transformer Meets Generalized Fourier Integral Theorem
Tan Nguyen, Minh Pham, Tam Nguyen, Khai Nguyen, Stanley Osher, Nhat Ho, in Proc. of Advances in Neural Information Processing Systems (NeurIPS) 35, New Orleans, LA, USA, 2022. 

Improving Neural Ordinary Differential Equations with Nesterov’s Accelerated Gradient Method
Ho Huu Nghia Nguyen, Tan Nguyen, Huyen Vo, Stanley Osher, Thieu Vo, in Proc. of Advances in Neural Information Processing Systems (NeurIPS) 35, New Orleans, LA, USA, 2022. 

Causal Inference with Non-IID Data using Linear Graphical Models
Chi Zhang, Karthika Mohan, Judea Pearl, in Proc. of Advances in Neural Information Processing Systems (NeurIPS) 35, New Orleans, LA, USA, 2022.

The Role of Baselines in Policy Gradient Optimization
Jincheng Mei, Wesley Chung, Valentin Thomas, Bo Dai, Csaba Szepesvari, Dale Schuurmans, in Proc. of Advances in Neural Information Processing Systems (NeurIPS) 35, New Orleans, LA, USA, 2022.

Synthetic Model Combination: An Instance-wise Approach to Unsupervised Ensemble Learning
Alex Chan, Mihaela van der Schaar, in Proc. of Advances in Neural Information Processing Systems (NeurIPS) 35, New Orleans, LA, USA, 2022.

Composite Feature Selection Using Deep Ensembles
Fergus Imrie, Alexander Norcliffe, Pietro Lió, Mihaela van der Schaar, in Proc. of Advances in Neural Information Processing Systems (NeurIPS) 35, New Orleans, LA, USA, 2022.

Data-IQ: Characterizing subgroups with heterogeneous outcomes in tabular data
Nabeel Seedat, Jonathan Crabbé, Ioana Bica, Mihaela van der Schaar, in Proc. of Advances in Neural Information Processing Systems (NeurIPS) 35, New Orleans, LA, USA, 2022.

Transfer Learning on Heterogeneous Feature Spaces for Treatment Effects Estimation
Ioana Bica, Mihaela van der Schaar, in Proc. of Advances in Neural Information Processing Systems (NeurIPS) 35, New Orleans, LA, USA, 2022.

Online Decision Mediation
Daniel Jarrett, Alihan Hüyük, Mihaela van der Schaar, in Proc. of Advances in Neural Information Processing Systems (NeurIPS) 35, New Orleans, LA, USA, 2022.

Benchmarking Heterogeneous Treatment Effect Models through the Lens of Interpretability
Jonathan Crabbé, Alicia Curth, Ioana Bica, Mihaela van der Schaar, in Proc. of Advances in Neural Information Processing Systems (NeurIPS) 35, New Orleans, LA, USA, 2022.

Concept Activation Regions: A Generalized Framework For Concept-Based Explanations
Jonathan Crabbé, Mihaela van der Schaar, in Proc. of Advances in Neural Information Processing Systems (NeurIPS) 35, New Orleans, LA, USA, 2022.