We are organizing a day to bring together researchers working on the projects funded by the EPSRC call "Security for all in an AI enabled society": CHAI: Cyber Hygiene in AI enabled domestic life, SAIS: Secure AI assistantS, AISEC: AI Secure and Explainable by Construction, and EnnCore: End-to-End Conceptual Guarding of Neural Architectures. These projects focus on challenges at the intersection of Artificial Intelligence (AI) and Cyber-Security, covering both security for AI and AI for security, with the aim of enabling better and more widespread adoption of trusted and secure AI systems.
We are particularly interested in discussing recent achievements and future initiatives addressing the fundamental security problem of guaranteeing safety, transparency, and robustness in neural-based architectures. The day will feature invited talks, poster presentations, and a roundtable discussion on SafeAI.
Participants will leave with a better understanding of how to enable system designers to specify essential conceptual/behavioral properties of neural-based systems, verify them, and thereby safeguard these systems against unpredictable behavior and attacks. In this way, we aim to foster dialogue between contemporary explainable neural models and full-stack neural software verification.
The event will take place on July 4th in the Department of Computer Science at the University of Manchester, in the Kilburn Building, Oxford Rd, Manchester M13 9PL. The talks will be held in LT1.4, the poster session in Atlas 1, and the lunch and drinks reception in the Mercury room; all three venues are on the first floor of the Kilburn Building.
Participants can find information about parking at Car Parking at the University. Car Park B (Aquatics Car Park, Manchester M13 9SS) is the closest to the Kilburn Building.
Recordings of the EPSRC project talks are available here.
|12:30 - 13:00||Arrival, sandwiches, and coffee (Mercury room)|
|13:00 - 13:30||Welcome and introduction by Lucas Cordeiro, Stephanie Williams (EPSRC UKRI), Mustafa Mustafa (LT1.4) [pdf]|
|13:30 - 14:30||Evaluating Privacy in Machine Learning (by Andrew Paverd) (LT1.4) [pdf]|
|14:30 - 14:45||Coffee break (Mercury room)|
|14:45 - 15:15||A Tale of Two Oracles: Defining and Verifying when AI Systems are Safe (by Edoardo Manino) (LT1.4) [pdf]|
|15:15 - 15:45||One Picture Paints a Thousand Words: Using Abstract Interpretation for NLP Verification (by Marco Casadio) (LT1.4) [pdf]|
|15:45 - 16:00||Coffee break (Mercury room)|
|16:00 - 16:30||Efficiently Training Neural Networks for Verifiability (by Alessandro De Palma) (LT1.4) [pdf]|
|16:30 - 17:00||Cyber Hygiene in AI-enabled domestic life (by George Loukas) (LT1.4) [pdf]|
|17:00 - 18:00||Drinks reception (Mercury room)|
|18:00 - 18:30||Free time|
|18:45 - 21:00||Dinner at Bem Brasil Deansgate (44 King St W, Manchester M3 2GQ)|
This event is by invitation only. It is intended to establish new partnerships and collaborations, lead the discussion of the challenges and opportunities in this area, and tackle the main obstacles to achieving explainable and fully verifiable learning-based systems.