Workshop on General-Purpose AI: Prospects and Risks



This is the third workshop organised by the EnnCore project team. The first workshop was held jointly with the SafeAI workshop at AAAI2022, with a video recording available. The second workshop was held in Manchester in July 2023 (https://enncore.github.io/events/secaiws/). EnnCore: End-to-End Conceptual Guarding of Neural Architectures is a research project funded by the EPSRC under the call "Security for all in an AI enabled society", together with several other projects: CHAI: Cyber Hygiene in AI enabled domestic life, SAIS: Secure AI assistantS, and AISEC: AI Secure and Explainable by Construction. These projects address challenges at the intersection of Artificial Intelligence (AI) and Cyber-Security, covering both security for AI and AI for security, and aim for better and more widespread adoption of trusted and secure AI systems.

In this workshop, we are specifically interested in general-purpose AI (GPAI) models, sometimes called foundation models. GPAI models have a wide range of applications, from chatbots to code generation to scientific discovery. However, their vulnerabilities have also been discovered and are a widespread concern. The EnnCore project, sponsored by the EPSRC to work on the verification and explanation of neural architectures, is coming to an end. Over its five-year lifespan, the EnnCore team has closely monitored the progress of AI, from the days when convolutional neural networks dominated the field to today, when a significant proportion of research has shifted to GPAI. One question persists across these models: can the risks of machine learning models be sufficiently quantified? A mathematically proven quantification of risk can provide not only trust in AI models but also evidence to regulators about their desirable properties. This workshop invites experts in the field to share their research progress, thoughts, and insights on this grand challenge of GPAI.

Participants will leave with a better idea of what the risks are, how to safeguard AI models against them, and how to certify the extent to which an AI model is free from them. In doing so, we aim to foster dialogue between research on contemporary neural models and full-stack neural software verification.

The workshop will be held in Liverpool; the University of Liverpool is the second site of the EnnCore project. The workshop will feature invited talks, poster presentations, and a roundtable discussion on the safety of GPAI.

The event will take place on the afternoon of Monday, June 9th, 2025, in the Rendall Building (Seminar Room 4), Liverpool L69 3BX.

Participants can find parking information at visitor car parking.

Program

Timing Talk
12:30 - 13:00 Arrival, sandwiches, and coffee
13:00 - 13:10 Welcome and introduction
13:10 - 13:30 Inverting Large Language Models, by Adrians Skapars, Edoardo Manino and Lucas Cordeiro (Manchester)
13:30 - 13:50 Robustness certification for deep learning, by Yi Dong and Xiaowei Huang (Liverpool)
13:50 - 14:10 Defending the Edge: Representative-Attention for Mitigating Backdoor Attacks in Federated Learning, by Chibueze Peace Obioma and Mustafa A. Mustafa (Manchester)
14:10 - 14:30 Thoughts from the NCSC on the security of AI systems, by Martin R3 (NCSC)
14:30 - 14:45 Coffee break
14:45 - 15:05 Formal Verification of Python and NumPy Programs Using ESBMC, by Bruno Farias and Lucas Cordeiro
15:05 - 15:25 Enhancing Cyber Hygiene for Large Language Models, by Aksshar Ramesh (Sister Project CHAI)
15:25 - 15:45 NLP Verification: Towards Verified Safeguards for LLMs, by Luca Arnaboldi and Ekaterina Komendantskaya (Sister Project AISEC)
15:45 - 16:00 Coffee break
16:00 - 16:20 Certified Guidance for Planning with Deep Generative Models, by Mehran Hosseini and Nicola Paoletti
16:20 - 16:40 Socio-technical aspects of AI-powered deepfake fraud, by Meropi Tzanetakis (Manchester)
16:40 - 17:00 Explainable Deepfake Detection: Leveraging Vision-Language Models for Multimedia Authentication, by Guangliang Cheng (Liverpool)
18:00 - 20:00 Dinner

Register

This event is by invitation only. It is intended to establish new partnerships and collaborations, lead the discussion on the challenges and opportunities, and tackle the main obstacles to achieving safe and trustworthy GPAI systems.

If you would like to attend, please contact our organising committee: