University College London (ranked 8th globally in the 2023 QS World University Rankings)
Trustworthy AI for Systems Security
No day goes by without reading machine learning (ML) success stories across various application areas. Systems security is no exception, where ML's tantalizing performance leaves one to wonder whether any unsolved problems remain. However, machine learning has no real clairvoyant abilities, and once the magic wears off, we are left in uncharted territory. Is machine learning truly capable of ensuring systems security? In this talk, we will highlight the importance of reasoning beyond mere performance by examining the consequences of adversarial attacks and distribution shifts in realistic settings. Where relevant, we will also delve into behind-the-scenes aspects to encourage reflection on the reproducibility crisis. Our goal is to foster a deeper understanding of machine learning's role in systems security and its potential for future advancements.
Lorenzo Cavallaro is a Full Professor of Computer Science at University College London (UCL), where he leads the Systems Security Research Lab. He grew up on pizza, spaghetti, and Phrack, and soon developed a passion for underground and academic research. Lorenzo's research vision is to enhance the effectiveness of machine learning for systems security in adversarial settings. He works with his team to investigate the interplay between program analysis abstractions, representations, and ML models, and their crucial role in creating Trustworthy AI for Systems Security. Lorenzo publishes at, and serves on the Program Committees of, leading security conferences; he received a Distinguished Paper Award at USENIX Security 2022, co-chaired the Deep Learning and Security workshop (2021-22) and DIMVA (2020-21), and is an Associate Editor of ACM TOPS and Computers & Security. In addition to his love for food, Lorenzo finds his Flow in science, music, and family.