Professor Zhangyang “Atlas” Wang [Google Scholar] is a tenured Associate Professor and holds the Temple Foundation Endowed Faculty Fellowship #7 in the Chandra Family Department of Electrical and Computer Engineering at The University of Texas at Austin. He is also a faculty member of UT Computer Science [CSRankings] and of the Oden Institute CSEM program. Since May 2024, Dr. Wang has been on leave from UT Austin to serve as the full-time Research Director for XTX Markets, heading its newly established AI Lab in New York City. In this role, he leads groundbreaking efforts at the intersection of algorithmic trading and deep learning, driving the development of robust, scalable AI algorithms that extract predictive insights from massive datasets.
Previously, he was the Jack Kilby/Texas Instruments Endowed Assistant Professor in the same department from 2020 to 2023, and an Assistant Professor of Computer Science and Engineering at Texas A&M University from 2017 to 2020. Alongside his academic career, he has also explored multiple exciting opportunities in industry. He was a visiting scholar at Amazon Search from 2021 to 2022, leveraging geometric deep learning for recommendation systems. Later, he took on the part-time role of Director of AI Research & Technology at Picsart from 2022 to 2024, where he led the company’s ambitious initiative in video generative AI. He earned his Ph.D. in Electrical and Computer Engineering from UIUC in 2016, under the guidance of Professor Thomas S. Huang, and his B.E. in EEIS from USTC in 2012.
Prof. Wang has broad research interests in machine learning (ML) and optimization. Currently, his research passion centers on developing the theoretical and algorithmic foundations of generative AI and neurosymbolic AI. He emphasizes low-dimensional, modular representations that enable efficient and reliable learning in overparameterized model spaces while bridging the gap to symbolic reasoning over discrete structures such as logical dependencies, causal relationships, and geometric invariants. These principles underpin efforts to enhance the efficiency and trustworthiness of large language models (LLMs), advance planning and reasoning capabilities, and foster innovations in 3D/4D computer vision. His research is gratefully supported by NSF, DARPA, ARL, ARO, IARPA, and DOE, as well as dozens of industry and university grants. Prof. Wang co-founded the new Conference on Parsimony and Learning (CPAL) and served as its inaugural Program Chair. He regularly serves as a conference (senior) area chair, journal editor, invited speaker, tutorial/workshop organizer, panelist, and reviewer. He is an ACM Distinguished Speaker and an IEEE Senior Member.
Prof. Wang has received many research awards, including an NSF CAREER Award, an ARO Young Investigator Award, an IEEE AI's 10 to Watch Award, an AI 100 Top Thought Leader Award, an INNS Aharon Katzir Young Investigator Award, a Google Research Scholar award, an IBM Faculty Research Award, a J. P. Morgan Faculty Research Award, an Amazon Research Award, an Adobe Data Science Research Award, a Meta Reality Labs Research Award, and two Google TensorFlow Model Garden Awards. His team has won the IEEE SPS Young Author Best Paper Award 2024, the Best Paper Award at the inaugural Learning on Graphs (LoG) Conference 2022, the Best Paper Finalist Award at the International Conference on Very Large Data Bases (VLDB) 2024, and five competition prizes at CVPR/ICCV/ECCV. He feels most proud of being surrounded by some of the world's most brilliant students: his Ph.D. students include winners of eight prestigious fellowships (NSF GRFP, Apple, NVIDIA, Adobe, IBM, Amazon, Qualcomm, and Snap), among many other honors.
At the VITA group, we pursue cutting-edge research spanning from the theoretical foundations to the practical applications of machine learning (ML). Our group's research continues to evolve, embracing new challenges at the forefront of AI and ML. We collaborate closely with industry partners and other academic institutions to ensure our work has real-world impact and addresses pressing technological needs.
Our current work is organized around three key themes; across all of them, we maintain a commitment to developing ML algorithms that are efficient, scalable, and robust. We also explore the broader implications of our work, including applications in robotics, healthcare, and AI for social good.
We focus on advancing the efficiency, scalability, and trustworthiness of LLMs through innovative approaches to training and inference. Our research explores memory-efficient LLM training techniques (GaLore & LiGO), efficient generative inference methods (H2O & Flextron), and the understanding of pre-trained model weights (essential sparsity & lottery ticket) and training artifacts (oversmoothing & LLM-PBE), many accompanied by system or hardware co-design.
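For readers new to this line of work, the following is a minimal PyTorch sketch of the general low-rank gradient-projection idea that motivates memory-efficient training methods such as GaLore. It is illustrative only, not our released implementation; the helper name top_r_basis, the rank, the refresh interval, and the toy objective are arbitrary choices for the example.

```python
# Minimal sketch of low-rank gradient projection for memory-efficient training.
# Not a faithful GaLore/LiGO implementation; rank, refresh interval, and the
# toy objective below are arbitrary choices for illustration.
import torch

def top_r_basis(grad: torch.Tensor, r: int) -> torch.Tensor:
    """Orthonormal basis of the gradient's top-r left singular subspace."""
    U, _, _ = torch.linalg.svd(grad, full_matrices=False)
    return U[:, :r]

W = torch.randn(256, 256, requires_grad=True)   # toy weight matrix
lr, r = 1e-2, 4
P = None
momentum = torch.zeros(r, 256)                  # optimizer state kept at rank r

for step in range(200):
    loss = (W @ torch.randn(256, 8)).pow(2).mean()   # stand-in objective
    loss.backward()
    with torch.no_grad():
        if step % 50 == 0:                      # refresh the subspace occasionally
            P = top_r_basis(W.grad, r)          # (real methods also re-project state)
        g_low = P.T @ W.grad                    # r x n compressed gradient
        momentum = 0.9 * momentum + g_low       # first-moment state stays low-rank
        W -= lr * (P @ momentum)                # map the update back to full size
        W.grad = None
```

The memory saving comes from keeping the optimizer state at rank r rather than at the full weight size; everything else follows an ordinary momentum update.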
Selected Notable Works:

Our research in this theme focuses on developing novel optimization techniques for modern machine learning challenges. We have spearheaded the Learning to Optimize (L2O) framework (LISTA-CPSS & ALISTA) and benchmark (L2O Primer), and have recently explored new frontiers in black-box LLM optimization (DP-OPT) and neurosymbolic AI (formal fine-tuning, symbolic L2O, neurosymbolic visual RL, & neurosymbolic uncertainty).
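As a companion illustration of the L2O idea, here is a generic sketch of a LISTA-style unrolled network: each layer imitates one ISTA iteration with learnable weights and thresholds, and the whole stack is trained end-to-end to recover sparse codes. The class name LISTASketch, the dimensions, and the layer count are hypothetical; this is not our LISTA-CPSS/ALISTA code.

```python
# Generic sketch of a LISTA-style unrolled network for sparse coding (learning
# to optimize). Not our LISTA-CPSS/ALISTA implementation; class name,
# dimensions, and layer count are hypothetical choices for illustration.
import torch
import torch.nn as nn

class LISTASketch(nn.Module):
    def __init__(self, n: int = 128, m: int = 64, n_layers: int = 8):
        super().__init__()
        self.W_e = nn.Linear(m, n, bias=False)   # injects the measurement y into code space
        self.S = nn.ModuleList([nn.Linear(n, n, bias=False) for _ in range(n_layers)])
        self.theta = nn.Parameter(0.1 * torch.ones(n_layers, n))  # learnable soft-thresholds

    def forward(self, y: torch.Tensor) -> torch.Tensor:
        b = self.W_e(y)
        x = torch.zeros_like(b)
        for k, S_k in enumerate(self.S):
            z = b + S_k(x)                                            # one unrolled ISTA step
            x = torch.sign(z) * torch.relu(z.abs() - self.theta[k])   # soft-thresholding
        return x

# Train the unrolled optimizer end-to-end to recover sparse x from y = A x.
torch.manual_seed(0)
A = torch.randn(64, 128) / 8.0
model = LISTASketch()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(200):
    x_true = torch.randn(32, 128) * (torch.rand(32, 128) < 0.1)  # sparse ground truth
    y = x_true @ A.T
    loss = (model(y) - x_true).pow(2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Because the iteration count is fixed and the weights are learned from data, the unrolled network typically reaches a given reconstruction accuracy in far fewer steps than classic ISTA would.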
Selected Notable Works:

Our group's earlier (pre-2021) work includes several influential algorithms for GAN-based image enhancement and editing “in the wild”. More recently (post-2021), we have pushed the boundaries of generative AI for visual tasks, with a focus on 3D/4D reconstruction (LSM, InstantSplat, LightGaussian, & NeuralLift-360), novel view synthesis (GNT & SinNeRF), and video generation (StreamingT2V & Text2Video-Zero).
Selected Notable Works: