The rapid progress of deep learning in recent years has driven significant advances in fields such as computer vision, natural language processing, and speech recognition. The success of deep learning models relies heavily on the availability of large-scale, high-quality datasets. To address this challenge, active learning is a representative strategy that interactively queries human annotators for efficient data annotation. I will discuss how to design active learning strategies for deep neural networks that are effective both theoretically and empirically. Conversely, powerful deep learning models can now create high-quality data to meet human needs. I will demonstrate this by exploring recent advances in generative methods. Taking neural style transfer as an example, I will discuss how to achieve a desirable balance among content, style, and visual quality when creating visual content. I will also share potential future directions of data-aspect AI, as well as its applications to biomedical domains.
Siyu Huang is a postdoctoral fellow at the John A. Paulson School of Engineering and Applied Sciences, Harvard University. He received his B.E. and Ph.D. degrees from Zhejiang University in 2014 and 2019, respectively. Prior to joining Harvard, he was a visiting scholar at Carnegie Mellon University in 2018, a research scientist at Baidu Research from 2019 to 2021, and a research fellow at Nanyang Technological University in 2021. His research interests include computer vision, deep learning, and generative AI.