At the invitation of Professor Shutian Liu and Associate Professor Keya Zhou of the School of Physics, Dr. Fenglei Fan of The Chinese University of Hong Kong will give an academic talk on deep learning theory, modeling, and their applications in intelligent manufacturing and image processing. After the talk, Dr. Fan will share his experiences and reflections from his doctoral studies. Master's and doctoral students are welcome to attend and exchange ideas.
Title: Deciphering and Leveraging the Learning Mechanisms of ReLU Networks (In Pursuit of Deciphering ReLU Networks and Beyond)
Time: Friday, July 7, 2023, 10:30-12:00
Venue: Room 717, Science Building
Speaker bio: Dr. Fenglei Fan received his bachelor's degree from Harbin Institute of Technology in 2017 and his Ph.D. from Rensselaer Polytechnic Institute in the United States, where he was supervised by Professor Ge Wang, an internationally renowned imaging expert. He then completed a one-year postdoctoral appointment at Cornell University and is currently a Research Assistant Professor at The Chinese University of Hong Kong. His research covers deep learning theory, modeling, and their applications in intelligent manufacturing and image processing. His main results have appeared in flagship journals of artificial intelligence and image processing, including JMLR, IEEE TNNLS, IEEE TMI, IEEE TCI, and IEEE TAI; his representative works concern a deep learning framework built on quadratic (second-order) neurons and the width-depth symmetry of neural networks. Dr. Fan organized a tutorial at the top-tier conference AAAI 2023, which attracted wide attention and positive feedback. During his doctoral studies he was supported by the IBM AI Horizon Scholarship, totaling US$200,000, and his doctoral dissertation received the 2021 Doctoral Dissertation Award from the International Neural Network Society (INNS), which is granted to only one recipient each year.
Abstract: Deep learning, represented by deep artificial neural networks, has dominated numerous important research fields in the past decade. Although deep learning performs excellently in many tasks, it is notorious for being a black-box model. A neural network with the widely used ReLU activation is a piecewise linear function over polytopes, with a simple functional structure. To enhance the interpretability of deep learning, it is crucial to characterize the properties of these polytopes and decipher the functional structure of a ReLU network. In collaboration with leading peer groups from Harbin Institute of Technology, Cornell University, BIGAI, and RIKEN AIP, we dedicate ourselves to answering the following fundamental questions: what kind of functions does a ReLU network learn, and how can we leverage this enhanced understanding to build novel machine learning models? I will share our work and discuss issues and opportunities in the field. We welcome interaction with prospective students, postdoctoral fellows, and new collaborators.
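The abstract's central observation, that a ReLU network computes a piecewise linear function over polytopes, can be illustrated with a small numerical sketch. The toy example below is not part of the talk; the network sizes and random weights are arbitrary choices made only for illustration. It checks that, within one fixed activation pattern (one linear region), a two-layer ReLU network coincides with a single affine map.

```python
# Minimal sketch (assumed toy setup, not the speaker's code): a ReLU network
# restricted to one activation pattern is exactly an affine function.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 2)), rng.normal(size=8)   # hidden layer with 8 ReLU units
W2, b2 = rng.normal(size=(1, 8)), rng.normal(size=1)   # linear output layer

def relu_net(x):
    return W2 @ np.maximum(W1 @ x + b1, 0.0) + b2

def activation_pattern(x):
    # Which hidden units are active; each pattern indexes one linear region (polytope).
    return W1 @ x + b1 > 0.0

x0 = rng.normal(size=2)
pattern = activation_pattern(x0)

# Inside the region of x0, the ReLU can be replaced by the diagonal mask D,
# so the network equals the affine map x -> A x + c.
D = np.diag(pattern.astype(float))
A = W2 @ D @ W1
c = W2 @ D @ b1 + b2

# Probe nearby points that stay in the same region and compare.
for _ in range(5):
    x = x0 + 1e-3 * rng.normal(size=2)
    if np.array_equal(activation_pattern(x), pattern):
        assert np.allclose(relu_net(x), A @ x + c)
print("Inside one activation region, the ReLU network is exactly affine.")
```

Enumerating the distinct activation patterns reachable over the input domain would enumerate the polytopes themselves; the sketch only verifies local linearity around a single point.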