About the PI

Professor Zhangyang “Atlas” Wang [Google Scholar] is a tenured Associate Professor and holds the Temple Foundation Endowed Faculty Fellowship #7 in the Chandra Family Department of Electrical and Computer Engineering at The University of Texas at Austin. He is also a faculty member of UT Computer Science (GSC) [CSRankings] and of the Oden Institute CSEM program. Meanwhile, in a part-time role, he serves as the Director of AI Research & Technology for Picsart, where he leads the development of cutting-edge, GenAI-powered tools for creative visual editing. He was the Jack Kilby/Texas Instruments Endowed Assistant Professor in the same department from 2020 to 2023. From 2021 to 2022, he also held a visiting researcher position at Amazon Search. From 2017 to 2020, he was an Assistant Professor of Computer Science and Engineering at Texas A&M University. He received his Ph.D. degree in ECE from UIUC in 2016, advised by Professor Thomas S. Huang, and his B.E. degree in EEIS from USTC in 2012.

Prof. Wang has broad research interests spanning from the theory to the application aspects of machine learning (ML). At present, his core research mission is to leverage, understand, and expand the role of low dimensionality in ML and optimization, whose impact spans many important topics such as: efficient scaling, training, and inference of large language models (LLMs); robustness and trustworthiness; learning to optimize (L2O); and generative vision. His research is gratefully supported by NSF, DARPA, ARL, ARO, IARPA, and DOE, as well as dozens of industry and university grants. Prof. Wang co-founded the Conference on Parsimony and Learning (CPAL) and serves as its inaugural Program Chair. He is an elected technical committee member of IEEE MLSP and IEEE CI, and regularly serves as a (senior) area chair, invited speaker, tutorial/workshop organizer, panelist, and reviewer. He is an ACM Distinguished Speaker and an IEEE Senior Member.

Prof. Wang has received many research awards, including an NSF CAREER Award, an ARO Young Investigator Award, an IEEE AI's 10 to Watch Award, an INNS Aharon Katzir Young Investigator Award, a Google Research Scholar Award, an IBM Faculty Research Award, a J. P. Morgan Faculty Research Award, an Amazon Research Award, an Adobe Data Science Research Award, a Meta Reality Labs Research Award, and two Google TensorFlow Model Garden Awards. His team won the Best Paper Award at the inaugural Learning on Graphs (LoG) Conference in 2022, and has won five research competition prizes from CVPR/ICCV/ECCV since 2018. He is most proud of being surrounded by some of the world's most brilliant students: his Ph.D. students include winners of seven prestigious fellowships (NSF GRFP, IBM, Apple, Adobe, Amazon, Qualcomm, and Snap), among many other honors.

About Our Research

At the VITA group, we have unusually broad and ever-evolving research interests spanning from the theory to the application aspects of machine learning (ML). Our current "research keywords" include, but are not limited to: sparsity (from classical optimization to modern neural networks); efficient training, inference, or transfer (especially of large language models); neural scaling laws; robustness and trustworthiness; learning to optimize (L2O); generative AI; graph learning; and more. Below, we describe a few organized themes that drive our group's latest efforts.

Theme 1: Efficient and Scalable Learning through Intrinsic Low Dimensionality

Intelligence stems from a straightforward truth: although the world may seem complex and high-dimensional, it is more organized and predictable than it first appears. The recent advent of large language models (LLMs) has led some to posit that information compression is a fundamental learning objective of any intelligent system. This connects to classical ideas in neuroscience that treat compression as a guiding principle for how the brain represents sensory data, and prompts us to understand LLMs from a parsimonious modeling perspective. Meanwhile, the remarkable advances of LLMs have been achieved by scaling up compute, training data, and model size, and their explosive costs demand more efficient training and serving paradigms. The pursuit of learning modern-scale foundation models (FMs) efficiently, by seeking out their inherent parsimony, occupies a central place in our research. We have contributed many well-recognized works that lay theoretical foundations for the efficiency, optimization, and generalization of sparse neural networks, and that demonstrate their empirical promise from TinyML to large foundation-model applications (see our short handbook for sparse NN researchers). Our latest projects center on efficient scaling, training, and inference of LLMs and generative vision models, with system or hardware co-design.
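To make the sparsity theme concrete, here is a minimal sketch of one-shot global magnitude pruning in PyTorch. It illustrates weight sparsity in general rather than the method of any specific paper below, and the function name is our own.

```python
# Minimal sketch of one-shot global magnitude pruning -- an illustration of
# weight sparsity in general, not a reproduction of any specific paper.
import torch
import torch.nn as nn

def global_magnitude_prune(model: nn.Module, sparsity: float = 0.5) -> None:
    """Zero out the `sparsity` fraction of smallest-magnitude weights,
    pooled globally across all Linear layers."""
    weights = [m.weight for m in model.modules() if isinstance(m, nn.Linear)]
    all_mags = torch.cat([w.detach().abs().flatten() for w in weights])
    threshold = torch.quantile(all_mags, sparsity)  # global magnitude cutoff
    with torch.no_grad():
        for w in weights:
            w.mul_((w.abs() > threshold).float())   # keep only the "weights that matter"

model = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 10))
global_magnitude_prune(model, sparsity=0.7)
linear = [m for m in model.modules() if isinstance(m, nn.Linear)]
kept = sum((m.weight != 0).sum().item() for m in linear)
total = sum(m.weight.numel() for m in linear)
print(f"nonzero weights after pruning: {kept}/{total}")
```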

Selected Notable Works:
  • J. Zhao, Z. Zhang*, B. Chen, Z. Wang, A. Anandkumar, and Y. Tian, "GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection", arXiv:2403.03507, 2024. [Paper] [Code]
  • A. Jaiswal*, S. Liu*, T. Chen*, and Z. Wang, "The Emergence of Essential Sparsity in Large Pre-trained Models: The Weights that Matter", Advances in Neural Information Processing Systems (NeurIPS), 2023. [Paper] [Code]
  • Z. Zhang*, Y. Sheng, T. Zhou, T. Chen*, L. Zheng, R. Cai*, Z. Song, Y. Tian, C. Ré, C. Barrett, Z. Wang, and B. Chen, "H2O: Heavy-Hitter Oracle for Efficient Generative Inference of Large Language Models", Advances in Neural Information Processing Systems (NeurIPS), 2023. [Paper] [Code]
  • T. Chen*, Z. Zhang*, A. Jaiswal*, S. Liu*, and Z. Wang, "Sparse MoE as the New Dropout: Scaling Dense and Self-Slimmable Transformers", International Conference on Learning Representations (ICLR), 2023. (Spotlight) [Paper] [Code]
  • T. Chen*, J. Frankle, S. Chang, S. Liu, Y. Zhang, Z. Wang, and M. Carbin, "The Lottery Ticket Hypothesis for Pre-trained BERT Networks", Advances in Neural Information Processing Systems (NeurIPS), 2020. [Paper] [Code]

Theme 2: Designing, Understanding, and Scaling New Architectures

We are devoted to studying emerging model families that promise to become future “universal” workhorses or "foundation models": two such examples are transformers (especially LLMs) and graph neural networks, and many projects here stem from our close collaboration with industry leaders. We are meanwhile enthusiastic about AutoML and neural scaling laws, both consolidating their theoretical underpinnings ("why choose this model, not that one?") and broadening their practical applicability ("what more can be automated, and how can it be done better?"). State-of-the-art ML systems consist of complex pipelines with a multitude of design choices. We see AutoML and neural scaling laws as a central hub for addressing those design challenges; they also prove to be powerful scientific tools for understanding many ad hoc choices of network architectures and hyperparameters (often aided by deep learning theory).
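As a toy illustration of the scaling-law methodology, the sketch below fits a saturating power law L(N) = a·N^(-b) + c, the basic functional form behind neural scaling laws, to (model size, validation loss) pairs. The data points are synthetic placeholders, not measurements from our papers.

```python
# Minimal sketch: fit a saturating power law L(N) = a * N**(-b) + c to
# (model size, loss) pairs. All data points below are synthetic.
import numpy as np
from scipy.optimize import curve_fit

def power_law(n, a, b, c):
    return a * n ** (-b) + c

sizes = np.array([1e6, 1e7, 1e8, 1e9, 1e10])   # hypothetical parameter counts
losses = np.array([4.8, 3.9, 3.2, 2.7, 2.4])   # hypothetical validation losses

(a, b, c), _ = curve_fit(power_law, sizes, losses, p0=(10.0, 0.1, 2.0), maxfev=10000)
print(f"fitted: L(N) = {a:.2f} * N^(-{b:.3f}) + {c:.2f}")
print(f"extrapolated loss at N = 1e11: {power_law(1e11, a, b, c):.2f}")
```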

Selected Notable Works:
  • W. Chen*, J. Wu*, Z. Wang, and B. Hanin, "Principled Architecture-aware Scaling of Hyperparameters", International Conference on Learning Representations (ICLR), 2024. [Paper] [Code]
  • P. Wang*, R. Panda, L. Hennigen, P. Greengard, L. Karlinsky, R. Feris, D. Cox, Z. Wang, and Y. Kim, "Learning to Grow Pretrained Models for Efficient Transformer Training", International Conference on Learning Representations (ICLR), 2023. (Spotlight) [Paper] [Code]
  • P. Wang*, R. Panda, and Z. Wang, "Data Efficient Neural Scaling Law via Model Reusing", International Conference on Machine Learning (ICML), 2023. [Paper] [Code]
  • P. Wang*, W. Zheng*, T. Chen*, and Z. Wang, "Anti-Oversmoothing in Deep Vision Transformers via the Fourier Domain Analysis: From Theory to Practice", International Conference on Learning Representations (ICLR), 2022. [Paper] [Code]
  • W. Chen*, X. Gong*, and Z. Wang, "Neural Architecture Search on ImageNet in Four GPU Hours: A Theoretically Inspired Perspective", International Conference on Learning Representations (ICLR), 2021. [Paper] [Code]

Theme 3: Generative AI for 2D/3D Visual Synthesis and Editing

Our group's earlier (pre-2021) work includes several influential algorithms for image enhancement and editing “in the wild,” many of them based on Generative Adversarial Networks (GANs). We pioneered several innovative GAN architectural designs (TransGAN, DeblurGAN-v2, EnlightenGAN, AutoGAN) that are now widely adopted by the community. More recently (post-2021), we have steered our focus toward two new areas: (i) 3D reconstruction and novel view synthesis, via advanced tools such as Neural Radiance Fields (NeRF) and 3D Gaussian Splatting; and (ii) the new generation of multi-modal GenAI, leveraging the latest workhorse of diffusion models (text-to-image, text-to-video, text-to-3D, etc.).
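One small but representative ingredient of NeRF-style pipelines is the Fourier positional encoding that lets a coordinate MLP represent high-frequency scene detail. The sketch below shows the standard textbook form of that encoding, not the exact code of any paper listed here.

```python
# Minimal sketch of NeRF-style Fourier positional encoding:
# gamma(x) = (sin(2^0*pi*x), cos(2^0*pi*x), ..., sin(2^(L-1)*pi*x), cos(2^(L-1)*pi*x))
import numpy as np

def positional_encoding(x: np.ndarray, num_freqs: int = 10) -> np.ndarray:
    """Map coordinates in [-1, 1]^d to a (d * 2 * num_freqs)-dim Fourier feature."""
    freqs = 2.0 ** np.arange(num_freqs) * np.pi        # 2^k * pi, k = 0..L-1
    angles = x[..., None] * freqs                      # shape (..., d, L)
    enc = np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)
    return enc.reshape(*x.shape[:-1], -1)              # flatten per point

point = np.array([0.1, -0.4, 0.7])                     # one 3D sample location
print(positional_encoding(point, num_freqs=10).shape)  # -> (60,) = 3 * 2 * 10
```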

Selected Notable Works:
  • Z. Fan*, K. Wang*, K. Wen, Z. Zhu*, D. Xu*, and Z. Wang, "LightGaussian: Unbounded 3D Gaussian Compression with 15x Reduction and 200+ FPS", arXiv:2311.17245, 2023. [Paper] [Code]
  • D. Xu*, Y. Jiang*, P. Wang*, Z. Fan*, Y. Wang*, and Z. Wang, "NeuralLift-360: Lifting An In-the-wild 2D Photo to A 3D Object with 360° Views", IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2023. (Highlight) [Paper] [Code]
  • L. Khachatryan, A. Movsisyan, V. Tadevosyan, R. Henschel, Z. Wang, S. Navasardyan, and H. Shi, "Text2Video-Zero: Text-to-Image Diffusion Models are Zero-Shot Video Generators", IEEE International Conference on Computer Vision (ICCV), 2023. (Oral) [Paper] [Code]
  • M. Varma*, P. Wang*, X. Chen*, T. Chen*, S. Venugopalan, and Z. Wang, "Is Attention All That NeRF Needs?", International Conference on Learning Representations (ICLR), 2023. [Paper] [Code]
  • D. Xu*, Y. Jiang*, P. Wang*, Z. Fan*, H. Shi, and Z. Wang, "SinNeRF: Training Neural Radiance Field on Complex Scenes from a Single Image", European Conference on Computer Vision (ECCV), 2022. [Paper] [Code]

Theme 4: Machine Learning for Good (Robustness, Privacy, Fairness, & AI4Science)

As ML systems (computer vision and LLMs in particular) influence all facets of our daily life, it is now commonplace to see evidence of their untrustworthiness or harmful impacts in high-stakes environments. We have strived to build ML algorithms that are resilient to various perturbations, attacks, and biases, as well as to the rising challenges in privacy, fairness, and ethics, as overviewed in our ML Safety Primer. We are also dedicated to collaborating closely with domain experts to advance AI4Science, particularly in biomedicine, bioinformatics, and healthcare, and to fostering AI for Social Good (see our Good Systems project).
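On the privacy side, the sketch below shows the per-example clip-and-noise step at the heart of DP-SGD-style differentially private training (after Abadi et al., 2016). It illustrates the generic mechanism only, not the DP-OPT method cited below, and the function name is our own.

```python
# Minimal sketch of the DP-SGD clip-and-noise step: bound each example's
# gradient norm, average, then add calibrated Gaussian noise.
import numpy as np

def private_mean_gradient(per_example_grads, clip_norm=1.0, noise_multiplier=1.0, seed=0):
    """Clip each row to L2 norm <= clip_norm, average, and add Gaussian noise."""
    rng = np.random.default_rng(seed)
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    scale = np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    clipped = per_example_grads * scale                  # bounded per-example sensitivity
    noise = rng.normal(0.0, noise_multiplier * clip_norm,
                       size=per_example_grads.shape[1])  # Gaussian mechanism
    return clipped.mean(axis=0) + noise / len(per_example_grads)

grads = np.random.default_rng(1).normal(size=(32, 8))    # 32 examples, 8 parameters
print(private_mean_gradient(grads, clip_norm=1.0, noise_multiplier=1.1))
```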

Selected Notable Works:
  • J. Hong*, J. Wang, C. Zhang, Z. Li*, B. Li, and Z. Wang, "DP-OPT: Make Large Language Model Your Differentially-Private Prompt Engineer", International Conference on Learning Representations (ICLR), 2024. (Spotlight) [Paper] [Code]
  • G. Holste*, E. Oikonomou, B. Mortazavi, A. Coppi, K. Faridi, E. Miller, J. Forrest, R. McNamara, L. Ohno-Machado, N. Yuan, A. Gupta, D. Ouyang, H. Krumholz, Z. Wang, and R. Khera, "Severe Aortic Stenosis Detection by Deep Learning Applied to Echocardiography", European Heart Journal (EHJ), 2023. [Paper] [Code]
  • T. Chen*, C. Gong, D. Diaz, X. Chen*, J. Wells, Q. Liu, Z. Wang, A. Ellington, A. Dimakis, and A. Klivans, "HotProtein: A Novel Framework for Protein Thermostability Prediction and Editing", International Conference on Learning Representations (ICLR), 2023. [Paper] [Code]
  • H. Wang*, C. Xiao, J. Kossaifi, Z. Yu, A. Anandkumar, and Z. Wang, "AugMax: Adversarial Composition of Random Augmentations for Robust Training", Advances in Neural Information Processing Systems (NeurIPS), 2021. [Paper] [Code]
  • Z. Wu*, H. Wang*, Z. Wang, H. Jin, and Z. Wang, "Privacy-Preserving Deep Action Recognition: An Adversarial Learning Framework and A New Dataset", IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2020. [Paper] [Code]

Theme 5: Learning to Optimize (L2O)

L2O is an emerging paradigm that leverages ML to develop optimization algorithms automatically. It offers many practical benefits, including faster convergence and better solution quality. Over the past five years, we have spearheaded an ever-growing line of L2O works that significantly expand both its rigorous theory (L2O convergence, worst-case/average-case generalization, adaptation, uncertainty quantification, and interpretability) and its practical adoption (inverse problems in computational sensing/imaging, large-model training, private training, protein docking, and AI for finance, among others). Please refer to the L2O Primer and the Open L2O toolbox that we have contributed to this community.
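For a concrete anchor, the sketch below implements classical ISTA for sparse coding, the hand-designed baseline that LISTA (cited below) unrolls into a fixed-depth network; the L2O variant would replace the fixed step size and threshold (or the matrices themselves) with learned parameters.

```python
# Minimal sketch of ISTA for min_x 0.5*||Ax - b||^2 + lam*||x||_1.
# LISTA-style L2O unrolls these iterations and learns their parameters;
# this is the classical hand-designed baseline.
import numpy as np

def soft_threshold(v, tau):
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def ista(A, b, lam=0.1, num_iters=500):
    step = 1.0 / np.linalg.norm(A, ord=2) ** 2           # 1/L, L = Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(num_iters):
        grad = A.T @ (A @ x - b)                         # gradient of the smooth term
        x = soft_threshold(x - step * grad, lam * step)  # proximal shrinkage step
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(50, 100))
x_true = np.zeros(100)
x_true[rng.choice(100, size=5, replace=False)] = rng.normal(size=5)
b = A @ x_true
x_hat = ista(A, b, lam=0.05)
print("relative recovery error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```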

Selected Notable Works:
  • J. Yang, T. Chen*, M. Zhu*, F. He, D. Tao, Y. Liang, and Z. Wang, "Learning to Generalize Provably in Learning to Optimize", International Conference on Artificial Intelligence and Statistics (AISTATS), 2023. [Paper] [Code]
  • (α-β) T. Chen*, X. Chen*, W. Chen*, H. Heaton, J. Liu, Z. Wang, and W. Yin, "Learning to Optimize: A Primer and A Benchmark", Journal of Machine Learning Research (JMLR), 2022. [Paper] [Code]
  • W. Zheng*, T. Chen*, T. Hu*, and Z. Wang, "Symbolic Learning to Optimize: Towards Interpretability and Scalability", International Conference on Learning Representations (ICLR), 2022. [Paper] [Code]
  • J. Liu, X. Chen*, Z. Wang, and W. Yin, "ALISTA: Analytic Weights Are As Good As Learned Weights in LISTA", International Conference on Learning Representations (ICLR), 2019. [Paper] [Code]

Prospective Students Should Read More...

Sponsors