News

[Jul. 2022]
  • 8 ECCV'22 (SinNeRF + INS + scalable L2O + point cloud MAE + few-shot align + malleable convolution + universal ViT + turbulence) accepted
  • 1 TMLR (adversarial augmentation) accepted
  • VITA Ph.D. student Xiaohan Chen graduated and joined Alibaba Damo Academy (Decision Intelligence Lab), Seattle, as a full-time senior research scientist ("Ali Star" hire)
  • Our group co-organized the 2nd workshop on Sparsity in Neural Networks: Advancing Understanding and Practice (SNN)
[Jun. 2022]
  • Dr. Wang is grateful to receive the NSF CAREER Award
  • Dr. Wang is grateful to receive the Aharon Katzir Young Investigator Award of International Neural Network Society (INNS)
  • 1 ACM Computing Surveys (ML safety) accepted
  • 1 TMLR (robust lifelong learning) accepted
  • 3 ACM MM'22 (controllable light enhancement + video action detection + Cloud2Sketch) accepted
  • 1 ACM BCB'22 (pretraining for COVID-19 prediction) accepted
  • 1 MLHC'22 (personalized imbalanced training) accepted
  • Our group co-organized the CVPR 2022 Workshop and Challenge on Bridging Computational Photography and Visual Recognition (UG2+)
  • Our group co-organized the AICAS 2022 tutorial: IEEE Low-Power Computer Vision Challenge. A similar tutorial will be offered at DAC 2022
[May. 2022]
  • 9 ICML'22 (long-tail OoD detection + BN-free robust training + neural implicit dictionary + improved sparse training + linearity grafting + structured lottery ticket + double-win lottery ticket + renormalization group theory + NN architecture growing) accepted
  • 1 AutoML-Conf'22 (GNN NAS) accepted
[Apr. 2022]
  • 1 JMLR (learning to optimize) accepted
  • 1 IEEE Trans. PAMI (deep GCN training benchmark) accepted
  • 1 ACM FAccT'22 (data efficiency under differential privacy) accepted
  • We thank IEEE Computer Magazine [article] for covering our recent success in building energy-efficient computer vision systems
[Mar. 2022]
  • We thank Quanta Magazine [article] for covering our NeurIPS'21 work, TransGAN
  • We thank National Science Foundation (NSF) news [article] for covering our training-free NAS works, in ICLR'21 (TE-NAS) and ICLR'22 (As-ViT)
[Feb. 2022]
  • 7 CVPR'22 (ViT training + Aug-NeRF + symbolic spotting + sparsity for Trojan + sparse multi-tasking + video SR + fashion CLIP) accepted
  • 1 IEEE Trans. Image Processing (light enhancement with noise) accepted
[Jan. 2022]
  • 14 ICLR'22 (symbolic L2O + disguised subnetwork + optimizer amalgamation + robust sparsity + Fourier ViT + auto-scaling ViT + ViT compression + Frank-Wolfe pruning + cold brew + audio lottery + split-max federated learning + sparse ensembling + random sparse training + Bayesian L2O) accepted
  • 1 AISTATS'22 (variational feature selection) accepted
  • 2 ICASSP'22 (lifelong speech synthesis + sensor data imputation) accepted
  • 1 ACM Trans. DAES (efficient segmentation) accepted
  • VITA Ph.D. student Tianlong Chen is selected to receive a 2022 academic year Adobe PhD Research Fellowship -- that is after being awarded the IBM PhD Fellowship 2021 and the UT Graduate Dean's Prestigious Fellowship 2021, and being named a Baidu Scholarship finalist
  • VITA welcomes four new Ph.D. students: Xuxi Chen, Dejia Xu, Hongru Yang and Zhangheng Li
[Dec. 2021]
  • People in this group did nothing this month but enjoy some well-deserved vacation time with their families and loved ones
[Nov. 2021] [Oct. 2021]
  • 1 WSDM'22 (graph contrastive learning) accepted
  • 1 IEEE Trans. SPIN (vision-based drone swarm control) accepted
  • 3 WACV'22 (video NAS + sandwich BatchNorm + chest X-rays) accepted
  • Our group is selected to be supported by the Google TensorFlow Model Garden Award
  • Our group co-organized the ICCV 2021 Workshop on Real-world Recognition from Low-quality Inputs (RLQ)
  • VITA Postdoctoral Researcher Dr. Guoliang Kang joins the CS department of the University of Technology Sydney, Australia, as a Lecturer (Assistant Professor), supported by the prestigious DECRA Fellowship
  • We thank Henry for making a very cool video [YouTube] highlighting our latest work, AugMax (NeurIPS'21) [Paper] [Code]
[Sep. 2021]
  • 13 NeurIPS'21 (TransGAN + AugMax + data-efficient GAN + elastic lottery ticket + SViTE + DePT + stealing lottery + imbalanced contrastive learning + HyperLISTA + WeakNAS + IA-RED2 + neuroregeneration + lottery ticket benchmark) accepted
  • Our group won the 1st prize of IEEE 2021 Low-Power Computer Vision Challenge (video track) [Solution] [Tech Report]
  • 1 ICCV'21 workshop (graph CNN for motion prediction) accepted
[Aug. 2021]
  • VITA welcomes seven new Ph.D. students: Zhiwen Fan, Scott Hoang, Peihao Wang, Shixin Yu, Greg Holste, Ajay Jaiswal and Everardo Olivares-Vargas
  • VITA's August 2021 cohort of students graduated: Dr. Zhenyu Wu joined Wormpex AI Research (Seattle) as a Research Scientist, Scott continues as a VITA Ph.D. student, and Rahul joined ByteDance AI Lab (Silicon Valley) as a Machine Learning Engineer. Congratulations!
  • 1 IEEE Trans. Image Processing (sketch-to-image synthesis) accepted
  • 1 Springer Machine Learning (weakly-supervised segmentation troubleshooting) accepted
[Jul. 2021] [Jun. 2021] [May. 2021]
  • 5 ICML'21 (graph contrastive learning + homotopy attack + imbalanced contrastive learning + graph lottery ticket + data-efficient lottery ticket) accepted
  • 1 KDD'21 (federated learning debiasing) accepted
  • 1 ACL'21 (EarlyBERT) accepted
  • Our group is selected to be supported by the NVIDIA Applied Research Accelerator Program
[Apr. 2021]
  • VITA Ph.D. student Tianlong Chen is selected to receive a 2021 academic year IBM PhD Fellowship -- that is after being named a Baidu Scholarship 2021 finalist
  • 2 CVPR'21 workshop (CNN high-frequency bias + BN-free training of binary networks) accepted
[Mar. 2021]
  • 1 ICME'21 (arbitrary style transfer) accepted
  • We thank UT Engineering News [article] for highlighting our group's research
  • VITA welcomes Dr. Guoliang Kang to join us as Postdoctoral Researcher
[Feb. 2021]
  • 3 CVPR'21 (CV lottery ticket + blind image IQA + assessing image enhancement) accepted
  • 1 IJCV (open-world generalization of segmentation models) accepted
  • 1 DAC'21 + 1 ICASSP'21 accepted (InstantNet + VGAI)
  • We thank Yannic for making a very cool video [YouTube] highlighting our latest work, TransGAN [Paper] [Code]
[Jan. 2021]
  • 8 ICLR'21 (lifelong lottery ticket + robust overfitting + nasty knowledge distillation + theory-guided NAS + domain generalization + LISTA unrolling + learning to optimize for minimax + efficient recommendation system) accepted
  • 1 IEEE Trans. PAMI (artistic text style transfer) accepted
  • We thank IDG Connect [article] for covering our ICLR'20 work, Early Bird Lottery Ticket for Efficient Deep Learning
  • Our group co-organized the IJCAI 2020 BOOM Workshop
  • VITA welcomes new Ph.D. student Wenqing Zheng (Spring 2021 - )
[Dec. 2020] [Nov. 2020] [Oct. 2020]
  • Our three popular image enhancement algorithms -- AOD-Net (ICCV 2017), EnlightenGAN (IEEE TIP 2020), and DeblurGAN-V2 (ICCV 2019) -- have been included in the open-source GNU Image Manipulation Program toolbox (GIMP-ML) as the deep-dehazing, enlighten, and deblur plugins
[Sep. 2020]
  • Dr. Wang is grateful to receive the 2020 Adobe Data Science Research Award
  • 8 NeurIPS'20 (learning to optimize + once-for-all adversarial training + BERT lottery ticket + meta learning + robust contrastive learning + graph contrastive learning + ShiftAddNet + efficient quantized training) accepted
  • 1 IEEE Trans. PAMI (privacy-preserving visual recognition) accepted
[Aug. 2020] [Jul. 2020]
  • 3 ECCV'20 (GAN compression + sketch-to-image synthesis + on-device learning-to-optimize) accepted
  • 1 ACM Multimedia'20 (MMHand synthesizer) accepted
  • 1 InterSpeech'20 (AutoSpeech) accepted
  • We thank Tech Xplore [article] for covering our work, adversarial 3-D logos
[Jun. 2020]
  • 6 ICML'20 (domain generalization + noisy label training + self-supervised GCN + DNN optimization + GAN compression + NAS for Bayesian models) accepted
  • Our group co-organized the CVPR 2020 UG2+ Workshop and Prize Challenge
[May. 2020]
  • 1 IEEE Trans. PAMI (image enhancement for visual understanding) accepted
  • 1 IEEE Trans. Mobile Computing (adaptive model compression) accepted
[Apr. 2020]
  • Dr. Wang is grateful to receive the 2020 ARO Young Investigator Award (YIP)
  • Dr. Wang is grateful to receive the 2020 IBM Faculty Research Award
  • Dr. Wang is grateful to receive the 2020 Amazon Research Award (AWS AI)
  • 2 CVPR'20 workshop (efficient triplet loss + fine-grained classification) accepted
[Mar. 2020]
  • 1 ISCA'20 (algorithm-hardware co-design to reduce data movement) accepted
[Feb. 2020]
  • 3 CVPR'20 (self-supervised adversarial robustness + fast GCN training + indoor scene reasoning) accepted
  • 1 IEEE Trans. Image Processing (visual understanding in poor-visibility environments) accepted
[Jan. 2020]
  • 1 AISTATS'20 (CNN uncertainty quantification) accepted
  • 1 IEEE Trans. CSVT (GAN data augmentation) accepted

Research Interests

[A] As Goals -- Enhancing Deep Learning Robustness, Efficiency, and Privacy

We seek to build deep learning solutions that go far beyond merely accurate data-driven predictors. In our opinion, an ideal model should at least: (1) be robust to various perturbations, distribution shifts, and adversarial attacks; (2) be efficient for both inference and training -- resource-efficient, data-efficient, and label-efficient; and (3) be designed to respect individual privacy and fairness.

[B] As Toolkits -- AutoML and Emerging New Models

We are enthusiastic about AutoML, both consolidating its theoretical underpinnings and broadening its practical applicability. State-of-the-art ML systems consist of complex pipelines with many design choices. We consider AutoML a powerful tool and a central hub for addressing those design challenges faster and better. Our recent work focuses on the data-driven discovery of model architectures (i.e., neural architecture search) and training algorithms (a.k.a. "learning to optimize"). We are meanwhile devoted to studying emerging ML models that show promise as potentially "universal" workhorses, such as transformers and graph neural networks.

[C] As Applications -- Computer Vision and Interdisciplinary Problems

We are interested in a broad range of computer vision problems, ranging from low-level topics (e.g., image reconstruction, enhancement, and generation) to high-level ones (e.g., recognition, segmentation, and vision for UAVs/autonomous driving). We are also increasingly interested in several interdisciplinary fields, such as biomedical informatics, geoscience, and IoT/cyber-physical systems.

Prospective students