Academic Talk by Xingjun Ma, Assistant Lecturer, University of Melbourne

Posted: 2019-12-20

  Title: Adversarial Machine Learning: An Introduction and Tutorial

  Abstract: Deep learning has become increasingly popular in the past few years. This is largely attributed to a family of powerful models called deep neural networks (DNNs). With many stacked layers and millions of neurons, DNNs are capable of learning complex non-linear mappings, and have demonstrated near- or even above-human-level performance in a wide range of applications such as image classification, object detection, natural language processing, speech recognition, self-driving cars, game playing, and medical diagnosis. Despite their great success, DNNs have recently been found vulnerable to adversarial examples (or attacks): input instances slightly modified in a way that is intended to fool the model. This surprising weakness of DNNs has raised security and reliability concerns about deploying deep learning systems in safety-critical scenarios such as face recognition, autonomous driving, and medical diagnosis. Since their first discovery, adversarial examples have attracted a huge volume of work on both attacking DNNs and defending them against such attacks. In this tutorial, we will introduce the adversarial phenomenon, explanations for it, and techniques that have been developed for both attack and defense.
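  To make the "slightly modified input" idea concrete, here is a minimal sketch of one standard attack, the fast gradient sign method (FGSM), applied to a toy logistic-regression classifier. The model, its weights, and all numbers below are hypothetical illustrations, not material from the talk itself: the attack nudges each input coordinate by `epsilon` in the direction that increases the loss, which can flip the model's prediction.

```python
import math

def sigmoid(z):
    # Logistic function: maps a real score to a probability in (0, 1).
    return 1.0 / (1.0 + math.exp(-z))

def predict(w, b, x):
    # Toy "model": logistic regression p(y=1 | x) = sigmoid(w.x + b).
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

def fgsm(w, b, x, y, epsilon):
    # FGSM: x_adv = x + epsilon * sign(d loss / d x).
    # For cross-entropy loss with this model, d loss / d x_i = (p - y) * w_i.
    p = predict(w, b, x)
    grad = [(p - y) * wi for wi in w]
    sign = lambda g: (g > 0) - (g < 0)
    return [xi + epsilon * sign(gi) for xi, gi in zip(x, grad)]

# Hypothetical weights and a clean input classified as class 1.
w, b = [2.0, -1.5], 0.0
x, y = [0.5, -0.3], 1
x_adv = fgsm(w, b, x, y, epsilon=0.5)
print(predict(w, b, x))      # > 0.5: confidently class 1
print(predict(w, b, x_adv))  # < 0.5: the small perturbation flips the label
```

Real attacks work the same way, except the gradient is obtained by backpropagation through a deep network and the perturbation is kept small enough (e.g. in an L-infinity ball) to be imperceptible to humans.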

  Speaker bio: Xingjun Ma is a distinguished alumnus of Jilin University. He received his B.S. from the College of Software, Jilin University (2010), his M.S. from the School of Software, Tsinghua University (2015), and his Ph.D. in Computer Science from the University of Melbourne, Australia (2019). Since 2019 he has been an Assistant Lecturer at the University of Melbourne. His research covers machine learning and deep learning, with a focus on security problems in deep learning: adversarial machine learning. He has published more than 10 papers at top conferences including ICML, ICLR, CVPR, ICCV, AAAI, and IJCAI, several of which were selected for oral presentation (e.g., ICLR 2018, ICML 2018/2019). Homepage: http://xingjunma.com/.

  Time: Tuesday, December 24, 2019, 13:30

  Venue: School of Artificial Intelligence (Room 601, Administration Building, Central Campus)

  Host: School of Artificial Intelligence, Jilin University