Topics and Papers for in-class Presentation
Below is the list of topics we are interested in discussing. Under each topic, some related papers are listed for you to consider. You are not limited to choosing from the list below. Pick a topic and search for recent papers in conferences such as NeurIPS, ICML, AAAI, EMNLP, ACL, and CVPR, or journals such as TPAMI, TCYB, TNNLS, and JMLR. If you wish to choose another topic, contact me as soon as possible to see whether it fits into the scope of the course. The deadline
to indicate your topic of interest and submit your preferred papers is September 17, 2019.
Few-shot Learning, One-shot Learning and Zero-shot Learning
- Yoon, Sung Whan, Jun Seo, and Jaekyun Moon. "TapNet: Neural Network Augmented with Task-Adaptive Projection for Few-Shot Learning." ICML (2019).
- Li, Huaiyu, et al. "LGM-Net: Learning to Generate Matching Networks for Few-Shot Learning." ICML (2019).
- Koch, Gregory, Richard Zemel, and Ruslan Salakhutdinov. "Siamese neural networks for one-shot image recognition." ICML deep learning workshop. Vol. 2. 2015.
- Florian Schroff, Dmitry Kalenichenko, James Philbin, "FaceNet: A Unified Embedding for Face Recognition and Clustering", CVPR 2015
- Vinyals, O., Blundell, C., Lillicrap, T., Kavukcuoglu, K., & Wierstra, D. (2016). Matching Networks for One Shot Learning. NIPS 2016.
- F. Sung, Y. Yang, and L. Zhang, “Learning to Compare: Relation Network for Few-Shot Learning,” CVPR 2018.
- J. Snell, K. Swersky, and R. Zemel, “Prototypical networks for few-shot learning,” in NIPS, 2017.
- Ravi, S., & Larochelle, H. (2017). Optimization as a Model for Few-Shot Learning. In ICLR 2017
- Zhang, Fei, and Guangming Shi. "Co-Representation Network for Generalized Zero-Shot Learning." International Conference on Machine Learning. 2019.
- Wang, Wei, et al. "A survey of zero-shot learning: Settings, methods, and applications." ACM Transactions on Intelligent Systems and Technology (TIST) 10.2 (2019): 13.
- Xian, Y., Schiele, B., & Akata, Z. (2017). Zero-Shot Learning - The Good, the Bad and the Ugly. In CVPR 2017.
- Y. Xian, Z. Akata, G. Sharma, Q. Nguyen, M. Hein, and B. Schiele, “Latent embeddings for zero-shot classification,” in CVPR, 2016, pp. 69–77.
- Z. Zhang and V. Saligrama, “Zero-shot learning via joint latent similarity embedding,” in CVPR, 2016, pp. 6034–6042.
- W.-Y. Chen, Y.-C. Liu, Z. Kira, Y.-C. F. Wang, and J.-B. Huang, “A Closer Look at Few-shot Classification,” ICLR, 2019.
- Y. Fu, T. M. Hospedales, T. Xiang, and S. Gong, “Transductive Multi-View Zero-Shot Learning,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 37, no. 11, pp. 2332–2345, 2015.
- E. Kodirov, T. Xiang, and S. Gong, “Semantic Autoencoder for Zero-Shot Learning,” in CVPR, 2017.
- E. Kodirov, T. Xiang, Z. Fu, and S. Gong, “Unsupervised Domain Adaptation for Zero-Shot Learning,” in ICCV, 2017.
- C. H. Lampert, H. Nickisch, and S. Harmeling, “Attribute-based classification for zero-shot visual object categorization,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 36, no. 3, pp. 453–465, 2014.
Multiview Clustering, Deep Embeddings for Clustering, Fair Clustering
- Lin, Wen-Yan, Siying Liu, Jian-Huang Lai, and Yasuyuki Matsushita. "Dimensionality's Blessing: Clustering Images by Underlying Distribution." CVPR, 2018.
- Xie, Junyuan, Ross Girshick, and Ali Farhadi. "Unsupervised deep embedding for clustering analysis." International conference on machine learning. 2016.
- Yang, Bo, et al. “Towards k-means-friendly spaces: Simultaneous deep learning and clustering.” Proceedings of the 34th International Conference on Machine Learning, 2017.
- Min, Erxue, et al. “A survey of clustering with deep learning: From the perspective of network architecture.” IEEE Access, 2018
- Aljalbout, Elie, et al. “Clustering with deep learning: Taxonomy and new methods.” arXiv preprint arXiv:1801.07648, 2018
- Shaham, Uri, et al. "Spectralnet: Spectral clustering using deep neural networks." ICLR, 2018
- Y. Yang and H. Wang, “Multi-view clustering: A survey,” Big Data Min. Anal., vol. 1, no. 2, pp. 83–107, 2018.
- Kumar, A., Daumé III, H., "A co-training approach for multi-view spectral clustering," Proceedings of the 28th International Conference on Machine Learning, 2011.
- Gao, H., Nie, F., Li, X., Huang, H., "Multi-view subspace clustering," IEEE International Conference on Computer Vision (ICCV), 2015.
- Backurs, Arturs, et al. "Scalable fair clustering." ICML (2019).
Multiple Instance Learning
- Carbonneau, Marc-André, et al. "Multiple instance learning: A survey of problem characteristics and applications." Pattern Recognition 77 (2018): 329-353.
Transfer Learning
- Zamir, Sax, Shen, Guibas, Malik, Savarese, “Taskonomy: Disentangling Task Transfer Learning”, CVPR 2018.
- A. Asgarian and A. Sibilia, “A Hybrid Instance-based Transfer Learning Method,” 2018.
- B. Fernando, et al., "Unsupervised Visual Domain Adaptation Using Subspace Alignment", ICCV 2013.
- J. Yosinski, J. Clune, Y. Bengio, and H. Lipson, “How transferable are features in deep neural networks?,” in NIPS, 2014, pp. 1–9.
- S. Kornblith, J. Shlens, and Q. V. Le “Do Better ImageNet Models Transfer Better?”, 2018
- Z. Li and D. Hoiem, “Learning without Forgetting”, 2016.
- Rusu, A. A., Vecerik, M., Rothörl, T., Heess, N., Pascanu, R., & Hadsell, R. "Sim-to-Real Robot Learning from Pixels with Progressive Nets", 2016.
Multi-task Learning
- Bingel, J., Søgaard, A., "Identifying beneficial task relations for multi-task learning in deep neural networks." In EACL, 2017.
- Long, M., Wang, J., "Learning Multiple Tasks with Deep Relationship Networks". arXiv Preprint arXiv:1506.02117, 2015.
- Yang, Y., Hospedales, T. "Deep Multi-task Representation Learning: A Tensor Factorisation Approach", ICLR 2017.
Lifelong Learning
- Rusu, A. A., Rabinowitz, N. C., Desjardins, G., Soyer, H., Kirkpatrick, J., Kavukcuoglu, K., et al. (2016). Progressive Neural Networks.
- Kaiser, Ł., Nachum, O., Roy, A., Bengio, S. "Learning to Remember Rare Events", ICLR 2017.
Multi-view Learning
- S. Sun, L. Mao, Z. Dong, and L. Wu, Multiview Machine Learning. Springer, 2019.
- Zhao, Jing, et al. "Multi-view learning overview: Recent progress and new challenges." Information Fusion 38 (2017): 43-54.
- S. Sun, L. Mao, Z. Dong, and L. Wu, Multiview Machine Learning. Springer, 2019, (See Chapter 2).
- Blum, A., Mitchell, T., "Combining labeled and unlabeled data with co-training," COLT, 1998.
- Sindhwani, V., Niyogi, P., Belkin, M., "A co-regularization approach to semi-supervised learning with multiple views," ICML, 2005.
- S. Sun, L. Mao, Z. Dong, and L. Wu, Multiview Machine Learning. Springer, 2019, (See Chapter 3).
- Hardoon DR, Szedmak SR, Shawe-taylor JR, "Canonical correlation analysis: an overview with application to learning methods", Neural Comput, 2004.
- Kan, Meina, et al. "Multi-view discriminant analysis." IEEE transactions on pattern analysis and machine intelligence, 2015.
- D. Yi, Z. Lei, and S. Z. Li, “Shared representation learning for heterogenous face recognition,” in Proc. 11th IEEE International Conference on Automatic Face and Gesture Recognition (FG), 2015, pp. 1–7.
- J. Hu, J. Lu, and Y.-P. Tan, “Sharable and Individual Multi-View Metric Learning,” TPAMI, 2018.
Interpretability of ML
- Zachary C. Lipton, “The mythos of model interpretability” arXiv preprint arXiv:1606.03490, 2017
- Finale Doshi-Velez and Been Kim, "Towards A Rigorous Science of Interpretable Machine Learning", ICML 2017.
- Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin, “Why Should I Trust You?: Explaining the Predictions of Any Classifier”, KDD 2016
- Scott Lundberg, Su-In Lee, “A Unified Approach to Interpreting Model Predictions” , NIPS 2017
- Marco Ribeiro, Sameer Singh, Carlos Guestrin, “Anchors: High-Precision Model-Agnostic Explanations”, AAAI 2018
- Kim, Been, Rajiv Khanna, and Oluwasanmi O. Koyejo, “Examples are not enough, learn to criticize! Criticism for interpretability”, NIPS 2016.
- P. W. Koh and P. Liang, “Understanding Black-box Predictions via Influence Functions,” ICML 2017 (best paper award).
- A. Dhurandhar et al. “Explanations based on the Missing: Towards Contrastive Explanations with Pertinent Negatives”, 2018
- Alex Goldstein, Adam Kapelner, Justin Bleich, Emil Pitkin (2014), “Peeking Inside the Black Box: Visualizing Statistical Learning with Plots of Individual Conditional Expectation”.
- Been Kim, Martin Wattenberg, Justin Gilmer, Carrie Cai, James Wexler, Fernanda Viegas, Rory sayres , “Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV)”, ICML 2018.
- J. Adebayo et al., “Sanity Checks for Saliency Maps,” NIPS, 2018 (spotlight).
- I. Y. Chen, F. D. Johansson, and D. Sontag, “Why Is My Classifier Discriminatory?,” in NIPS, 2018 (spotlight).
- I. Lage, A. Slavin Ross, B. Kim, S. J. Gershman, and F. Doshi-Velez, “Human-in-the-Loop Interpretability Prior,” in NIPS, 2018 (spotlight).
- G. Plumb, D. Molitor, and A. Talwalkar, “Supervised Local Modeling for Interpretability,” NIPS 2018.
- T. Laugel, M. J. Lesot, C. Marsala, X. Renard, and M. Detyniecki (2018), “Comparison-based Inverse Classification for Interpretability in Machine Learning”.
- Aaron Fisher, Cynthia Rudin, Francesca Dominici (2018), “All Models are Wrong but Many are Useful: Variable Importance for Black-Box, Proprietary, or Misspecified Prediction Models, using Model Class Reliance”.
- S. Joshi, O. Koyejo, B. Kim, and J. Ghosh (2018), “xGEMs: Generating Examplars to Explain Black-Box Models”.
- S. Wachter, B. Mittelstadt, C. Russell (2017), “Counterfactual explanations without opening the black box: Automated decisions and the GDPR”.
- X. Zhang, A. Solar-Lezama, and R. Singh (2018), “Interpreting Neural Network Judgments via Minimal, Stable, and Symbolic Corrections”.
- S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian (2016), “On the (im)possibility of fairness”.