Topics and Papers for in-class Presentation
Below is a list of topics we are interested in discussing. Under each topic, several related papers are listed for you to consider. You are not limited to the list below: pick a topic and search for recent papers in conferences such as NeurIPS, ICML, AAAI, ICLR, EMNLP, ACL, and CVPR, or in journals such as TPAMI, TCYB, TNNLS, and JMLR. If you wish to choose other papers, contact me as soon as possible.
Zero-shot Learning
- C. H. Lampert, et al., "Attribute-based classification for zero-shot visual object categorization," TPAMI, 2014.
- E. Kodirov, T. Xiang, Z. Fu, and S. Gong, “Unsupervised Domain Adaptation for Zero-Shot Learning,” in ICCV, 2017.
- Z. Zhang and V. Saligrama, “Zero-shot learning via joint latent similarity embedding,” CVPR, 2016.
- Y. Xian, et al. “Latent embeddings for zero-shot classification,” CVPR, 2016.
- E. Kodirov, et al. “Semantic Autoencoder for Zero-Shot Learning”, CVPR, 2017.
- Y. Xian, et al., "Feature Generating Networks for Zero-Shot Learning," CVPR 2018.
- Zhang, Fei, and Guangming Shi. "Co-Representation Network for Generalized Zero-Shot Learning." ICML, 2019.
Few-shot Learning
- Koch, Gregory, Richard Zemel, and Ruslan Salakhutdinov. "Siamese neural networks for one-shot image recognition." ICML Deep Learning Workshop, vol. 2, 2015.
- Florian Schroff, Dmitry Kalenichenko, James Philbin, "FaceNet: A Unified Embedding for Face Recognition and Clustering", CVPR 2015
- Vinyals, O., Blundell, C., Lillicrap, T., Kavukcuoglu, K., & Wierstra, D. "Matching Networks for One Shot Learning", NIPS 2016.
- J. Snell, K. Swersky, and R. Zemel, “Prototypical networks for few-shot learning,” in NIPS, 2017.
- Chelsea Finn, et al., "Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks," ICML 2017.
- Yoon, Sung Whan, Jun Seo, and Jaekyun Moon. "TapNet: Neural Network Augmented with Task-Adaptive Projection for Few-Shot Learning." ICML 2019.
- Li, Huaiyu, et al. "LGM-Net: Learning to Generate Matching Networks for Few-Shot Learning." ICML 2019.
- F. Sung, Y. Yang, and L. Zhang, "Learning to Compare: Relation Network for Few-Shot Learning," CVPR 2018.
- Ravi, S., and Larochelle, H., "Optimization as a Model for Few-Shot Learning," ICLR 2017.
GANs
- Isola, Phillip, et al. "Image-to-image translation with conditional adversarial networks." CVPR, 2017.
- Zhu, Jun-Yan, et al. "Unpaired image-to-image translation using cycle-consistent adversarial networks," ICCV, 2017.
- Tero Karras, et al. "Progressive Growing of GANs for Improved Quality, Stability, and Variation", ICLR, 2018.
- Karras, Tero, et al. "A style-based generator architecture for generative adversarial networks," CVPR 2019.
- Choi, Yunjey, et al. "StarGAN: Unified generative adversarial networks for multi-domain image-to-image translation," CVPR 2018.
- Liu, Ming-Yu, et al. "Few-shot unsupervised image-to-image translation," CVPR 2019.
Deep Clustering, Multiview Clustering
- Xie, Junyuan, et al. "Unsupervised deep embedding for clustering analysis." ICML 2016.
- Yang, Bo, et al. “Towards k-means-friendly spaces: Simultaneous deep learning and clustering” ICML 2017.
- Lin, Wen-Yan, Siying Liu, Jian-Huang Lai, and Yasuyuki Matsushita. "Dimensionality's Blessing: Clustering Images by Underlying Distribution." CVPR, 2018.
- Shaham, Uri, et al. "SpectralNet: Spectral clustering using deep neural networks." ICLR, 2018.
- A. Kumar and H. Daumé III, "A co-training approach for multi-view spectral clustering," ICML 2011.
- H. Gao, F. Nie, X. Li, and H. Huang, "Multi-view subspace clustering," ICCV 2015.
Transfer Learning and Domain Adaptation
- B. Fernando, et al., "Unsupervised Visual Domain Adaptation Using Subspace Alignment," ICCV 2013.
- Judy Hoffman, et al., "CyCADA: Cycle-Consistent Adversarial Domain Adaptation," ICML 2018.
- Z. Li and D. Hoiem, "Learning without Forgetting," ECCV 2016.
- Zamir, Sax, Shen, Guibas, Malik, and Savarese, "Taskonomy: Disentangling Task Transfer Learning," CVPR 2018.
Self-supervised Learning
- T. Chen, et al., "A Simple Framework for Contrastive Learning of Visual Representations," ICML 2020.
- K. He, et al., "Momentum Contrast for Unsupervised Visual Representation Learning," CVPR 2020.
- J. Grill, et al., "Bootstrap Your Own Latent: A New Approach to Self-Supervised Learning," NeurIPS 2020.
- M. Caron, et al., "Unsupervised Learning of Visual Features by Contrasting Cluster Assignments," NeurIPS 2020.
Interpretability of ML
- Marco Tulio Ribeiro, et al., "Why Should I Trust You?: Explaining the Predictions of Any Classifier," KDD 2016.
- Oscar Li, et al., "Deep Learning for Case-Based Reasoning through Prototypes: A Neural Network that Explains Its Predictions," AAAI 2018.
- C. Chen, et al., "This Looks Like That: Deep Learning for Interpretable Image Recognition," NeurIPS 2019.
- Scott Lundberg and Su-In Lee, "A Unified Approach to Interpreting Model Predictions," NeurIPS 2017.
- Marco Ribeiro, et al., "Anchors: High-Precision Model-Agnostic Explanations," AAAI 2018.
- Been Kim, et al., "Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV)," ICML 2018.
- M. Nokhbeh Zaeem, et al., "Cause and Effect: Concept-based Explanation of Neural Networks," IEEE International Conference on Systems, Man, and Cybernetics, 2021.
- A. Dhurandhar et al. Explanations based on the Missing: Towards Contrastive Explanations with Pertinent Negatives, NeurIPS 2018
- I. Y. Chen, F. D. Johansson, and D. Sontag, Why Is My Classifier Discriminatory?, NeurIPS, 2018.
Transformers
- Devlin, Jacob, et al. "BERT: Pre-training of deep bidirectional transformers for language understanding," NAACL 2019.
- Radford, Alec, et al. "Improving language understanding by generative pre-training." (2018).
- Hu, Edward J., et al. "LoRA: Low-rank adaptation of large language models," ICLR 2022.
- Li, Xiang Lisa, and Percy Liang. "Prefix-tuning: Optimizing continuous prompts for generation." ACL, 2021.
- Kojima, Takeshi, et al. "Large language models are zero-shot reasoners," NeurIPS 2022.
- A. Dosovitskiy, et al., "An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale," ICLR 2021.