Meng Wang

Associate Professor, Electrical, Computer, and Systems Engineering

Research Expertise
Theory and applications of machine learning and artificial intelligence
Research

Our research develops sample-efficient and computationally inexpensive learning methods for deep neural networks with provable generalization guarantees.

Publications

Li, H., Wang, M., Liu, S., Chen, P-Y., Xiong, J. (2022) Generalization Guarantee of Training Graph Convolutional Networks with Graph Topology Sampling, in Proc. of the 2022 International Conference on Machine Learning (ICML), July 2022.

Zhang, S., Wang, M., Liu, S., Chen, P-Y., Xiong, J. (2022) How Does Unlabeled Data Improve Generalization in Self-training? A One-hidden-layer Theoretical Analysis, in Proc. of the Tenth International Conference on Learning Representations (ICLR), April 2022.

Zhang, S., Wang, M., Liu, S., Chen, P-Y., Xiong, J. (2021) Why Lottery Ticket Wins? A Theoretical Perspective of Sample Complexity on Sparse Neural Networks, in Proc. of the Thirty-fifth Conference on Neural Information Processing Systems (NeurIPS), December 2021.

Wang, R., Xu, K., Liu, S., Chen, P-Y., Weng, T-W., Gan, C., Wang, M. (2021) On Fast Adversarial Robustness Adaptation in Model-Agnostic Meta-Learning, in Proc. of the International Conference on Learning Representations (ICLR), Virtual, May 2021.

Zhang, S., Wang, M., Liu, S., Chen, P-Y., Xiong, J. (2020) Fast Learning of Graph Neural Networks with Guaranteed Generalizability: One-hidden-layer Case, in Proc. of the 2020 International Conference on Machine Learning (ICML) (acceptance rate: 21.8%).
