News

[Nov. 2021] [Oct. 2021]
  • 1 WSDM'22 (graph contrastive learning) accepted
  • 1 IEEE Trans. SIPN (vision-based drone swarm control) accepted
  • 3 WACV'22 (video NAS + sandwich BatchNorm + chest X-rays) accepted
  • Our group is selected to receive a Google TensorFlow Model Garden Award
  • Our group co-organized the ICCV 2021 Workshop on Real-world Recognition from Low-quality Inputs (RLQ)
  • We thank Henry for making a very cool video [YouTube] highlighting our latest work, AugMax (NeurIPS'21) [Paper] [Code]
[Sep. 2021]
  • 13 NeurIPS'21 (TransGAN + AugMax + data-efficient GAN + elastic lottery ticket + SViTE + DePT + stealing lottery + imbalanced contrastive learning + HyperLISTA + WeakNAS + IA-RED2 + neuroregeneration + lottery ticket benchmark) accepted
  • Our group won 1st prize in the IEEE 2021 Low-Power Computer Vision Challenge (video track) [Solution]
  • 1 ICCV'21 workshop (graph CNN for motion prediction) accepted
[Aug. 2021]
  • VITA welcomes seven new Ph.D. students: Zhiwen Fan, Scott Hoang, Peihao Wang, Shixin Yu, Greg Holste, Ajay Jaiswal, and Everardo Olivares-Vargas
  • VITA's August 2021 cohort of students graduated: Dr. Zhenyu Wu joined Wormpex AI Research (Seattle) as a Research Scientist, Scott continues as a VITA Ph.D. student, and Rahul joined ByteDance AI Lab (Silicon Valley) as a Machine Learning Engineer. Congratulations!
  • 1 IEEE Trans. Image Processing (sketch-to-image synthesis) accepted
  • 1 Springer Machine Learning (weakly-supervised segmentation troubleshooting) accepted
[Jul. 2021] [Jun. 2021] [May 2021]
  • 5 ICML'21 (graph contrastive learning + homotopy attack + imbalanced contrastive learning + graph lottery ticket + data-efficient lottery ticket) accepted
  • 1 KDD'21 (federated learning debiasing) accepted
  • 1 ACL'21 (EarlyBERT) accepted
  • Our group is selected to be supported by the NVIDIA Applied Research Accelerator Program
[Apr. 2021]
  • VITA Ph.D. student Tianlong Chen is selected to receive a 2021 academic year IBM PhD Fellowship, on top of being selected as a Baidu Scholarship 2021 finalist
  • 2 CVPR'21 workshop (CNN high-frequency bias + BN-free training of binary networks) accepted
[Mar. 2021]
  • 1 ICME'21 (arbitrary style transfer) accepted
  • We thank UT Engineering News [article] for highlighting our group's research
  • VITA welcomes Dr. Guoliang Kang to join us as Research Associate
[Feb. 2021]
  • 3 CVPR'21 (CV lottery ticket + blind image IQA + assessing image enhancement) accepted
  • 1 IJCV (open-world generalization of segmentation models) accepted
  • 1 DAC'21 + 1 ICASSP'21 accepted (InstantNet + VGAI)
  • We thank Yannic for making a very cool video [YouTube] highlighting our latest work, TransGAN [Paper] [Code]
[Jan. 2021]
  • 8 ICLR'21 (lifelong lottery ticket + robust overfitting + nasty knowledge distillation + theory-guided NAS + domain generalization + LISTA unrolling + learning to optimize for minimax + efficient recommendation system) accepted
  • 1 IEEE Trans. PAMI (artistic text style transfer) accepted
  • We thank IDG Connect [article] for covering our ICLR'20 work, Early Bird Lottery Ticket for Efficient Deep Learning
  • Our group co-organized the IJCAI 2020 BOOM Workshop
  • VITA welcomes new Ph.D. student Wenqing Zheng (Spring 2021 - )
[Dec. 2020] [Nov. 2020] [Oct. 2020]
  • Our three popular image enhancement algorithms, AOD-Net (ICCV 2017), EnlightenGAN (IEEE TIP 2020), and DeblurGAN-v2 (ICCV 2019), are included in the open-source GNU Image Manipulation Program toolbox (GIMP-ML) as the deep-dehazing, enlighten, and deblur plugins
[Sep. 2020]
  • Dr. Wang is grateful to receive the 2020 Adobe Data Science Research Award
  • 8 NeurIPS'20 (learning to optimize + once-for-all adversarial training + BERT lottery ticket + meta learning + robust contrastive learning + graph contrastive learning + ShiftAddNet + efficient quantized training) accepted
  • 1 IEEE Trans. PAMI (privacy-preserving visual recognition) accepted
[Aug. 2020] [Jul. 2020]
  • 3 ECCV'20 (GAN compression + sketch-to-image synthesis + on-device learning-to-optimize) accepted
  • 1 ACM Multimedia'20 (MMHand synthesizer) accepted
  • 1 InterSpeech'20 (AutoSpeech) accepted
  • We thank Tech Xplore [article] for covering our work, adversarial 3-D logos
[Jun. 2020]
  • 6 ICML'20 (domain generalization + noisy label training + self-supervised GCN + DNN optimization + GAN compression + NAS for Bayesian models) accepted
  • Our group co-organized the CVPR 2020 UG2+ Workshop and Prize Challenge
[May 2020]
  • 1 IEEE Trans. PAMI (image enhancement for visual understanding) accepted
  • 1 IEEE Trans. Mobile Computing (adaptive model compression) accepted
[Apr. 2020]
  • Dr. Wang is grateful to receive the 2020 ARO Young Investigator Award (YIP)
  • Dr. Wang is grateful to receive the 2020 IBM Faculty Research Award
  • Dr. Wang is grateful to receive the 2020 Amazon Research Award (AWS AI)
  • 2 CVPR'20 workshop (efficient triplet loss + fine-grained classification) accepted
[Mar. 2020]
  • 1 ISCA'20 (algorithm-hardware co-design to reduce data movement) accepted
[Feb. 2020]
  • 3 CVPR'20 (self-supervised adversarial robustness + fast GCN training + indoor scene reasoning) accepted
  • 1 IEEE Trans. Image Processing (visual understanding in poor-visibility environments) accepted
[Jan. 2020]
  • 1 AISTATS'20 (CNN uncertainty quantification) accepted
  • 1 IEEE Trans. CSVT (GAN data augmentation) accepted

Research Interests

[A] As Goals -- Enhancing Deep Learning Robustness, Efficiency, and Privacy

We seek to build deep learning solutions that go far beyond accurate data-driven predictors. In our view, an ideal model should at least: (1) be robust to various perturbations, distribution shifts, and adversarial attacks; (2) be efficient for both inference and training, in terms of compute, data, and labels; and (3) be designed to respect individual privacy and fairness.

[B] As Toolkits -- AutoML and Emerging New Models

We are enthusiastic about AutoML, both consolidating its theoretical underpinnings and broadening its practical applicability. State-of-the-art ML systems consist of complex pipelines with many design choices. We consider AutoML a powerful tool and a central hub for addressing those design challenges faster and better. Our recent work focuses on the data-driven discovery of model architectures (i.e., neural architecture search) and training algorithms (a.k.a. "learning to optimize"). Meanwhile, we are devoted to studying emerging ML models that show promise as potentially "universal" workhorses, such as transformers and graph neural networks.

[C] As Applications -- Computer Vision and Interdisciplinary Problems

We are interested in a broad range of computer vision problems, ranging from low-level topics (e.g., image reconstruction, enhancement, and generation) to high-level ones (e.g., recognition, segmentation, and vision for UAVs/autonomous driving). We are also increasingly interested in several interdisciplinary fields, such as biomedical informatics, geoscience, IoT, and cyber-physical systems.

Prospective students