End-to-End Conceptual Guarding of Neural Architectures

Cordeiro

Lucas Cordeiro is a Reader in the Department of Computer Science at the University of Manchester (UoM), where he leads the Systems and Software Security (S3) Research Group. Dr. Cordeiro is the Arm Centre of Excellence Director at UoM; he also leads the Trusted Digital Systems cluster at the Centre for Digital Trust and Society and is affiliated with the Formal Methods Group at UoM. Dr. Cordeiro has implemented various tools used to verify safety and security properties in significant industrial programs written in Java, C/C++, and CUDA. Among these are ESBMC, an SMT-based model checker for C/C++/Qt and CUDA, and JBMC, a SAT-based model checker for Java bytecode. In the last ten years, he has also won 28 awards from international software verification and testing competitions held as part of ETAPS (TACAS 2012-2021 and FASE 2020-2021). Dr. Cordeiro's industrial research collaborators include Samsung, Nokia, Motorola, TP Vision, Intel, and ARM. His tools have been applied to find real security vulnerabilities in large-scale software systems. He has a proven track record of securing research funding from Samsung, the Nokia Institute of Technology, Motorola, CAPES, CNPq, FAPEAM, the British Council, EPSRC, and the Royal Society. He leads one large EPSRC project on verifiable and explainable secure AI and is Co-I on three others on software security and automated reasoning, with a portfolio of approximately 5.4m GBP.

Freitas

André Freitas is a Senior Lecturer in the Department of Computer Science at the University of Manchester, where he leads the AI Systems Lab. Dr. Freitas works at the interface between latent and explicit semantic representation and inference. His work is state-of-the-art in contextual Open Information / Knowledge Graph Extraction and sentence splitting. Dr. Freitas has pioneered explainable reasoning approaches, which are also state-of-the-art in the field. He introduced explainability into Text Entailment, the use of hybrid distributional-symbolic representations for Natural Language Inference, and semantic parsing over large and heterogeneous knowledge graphs. Dr. Freitas has worked as project coordinator (InnovateUK IAA BP, RS Explainable QA) and technical leader (H2020 SSIX, H2020 MARIO, FP7 SIFEM) on NLP projects.

Mustafa

Mustafa A. Mustafa is a Dame Kathleen Ollerenshaw Research Fellow in the Department of Computer Science, University of Manchester. He is a Co-I of a multi-disciplinary FWO-SBO project on secure and privacy-friendly energy trading, the SNIPPET project (S007619N), where he leads the work package on designing algorithms for secure and verifiable bidding, billing, and market/user/grid behavior modeling based on AI/ML algorithms. Over the past few years, Dr. Mustafa has worked on several multi-disciplinary projects, including Smart Grid Security (EPSRC and Toshiba TREL), SAGA (InnoEnergy), and DiskMan (imec). His work has won awards and recognition, including the Dame Kathleen Ollerenshaw Fellowship (University of Manchester, 2017), the Best Student Video Award (IEEE SmartGridComm 2017), and the Best Paper Award (SECURWARE 2017), and he was shortlisted for the Distinguished Achievement Award as Postgraduate Research Student of the Year (University of Manchester, 2015). He currently acts as an expert on the IEC/SYC/WG3 EC Smart Energy Roadmap.

Huang

Xiaowei Huang is an Associate Professor in the Department of Computer Science at the University of Liverpool. Dr Huang is an international lead on the verification and testing of neural networks. His paper on safety verification has attracted 240+ citations, and the research direction it started has attracted significant attention; for example, deep learning expert Dr Ian Goodfellow wrote a blog post dedicated to discussing this direction and the paper. In addition to verification approaches, Dr Huang also leads work on the testing of deep neural networks. His team was the first to propose MC/DC variants of test coverage metrics for neural networks and a concolic (i.e., combining concrete and symbolic execution) testing approach for neural networks. Dr Huang is the PI of two Dstl projects on Test Metrics for AI worth £426k and a Co-I of one of the UK hubs of robotics and AI for extreme environments, Offshore Robotics for Certification of Assets, leading the research on the certification of deep learning.

Brown

Gavin Brown is a Professor of Machine Learning and Director of Research for the Department of Computer Science, University of Manchester. He leads two large RCUK/industry projects and is Co-I on three others, with a portfolio of approximately 2.5m GBP. His work has won awards, including the British Computer Society Distinguished Dissertation Award (2004, 2013), the ECML best paper award (2014), and an ACM Notable Paper of 2016. His research spans interdisciplinary projects, including drug efficacy prediction for AstraZeneca and modeling domestic violence with Greater Manchester Police.

Lujan

Mikel Lujan is the Arm/Royal Academy of Engineering Research Chair in Computer Systems, a Royal Society Wolfson Fellow, and the Director of the Arm Centre of Excellence at the University of Manchester. Until September 2017, he held a Royal Society University Research Fellowship investigating low-power manycore systems. Since his first paper in 2000, he has authored more than 130 refereed papers (including best papers in 2008 and 2017 and runner-ups in 2012 and 2014) and has amassed rich research experience across the computing stack, from parallel programming to computer architectures, by way of machine learning and FPGAs. In the last five years, he has published in HPCA, PLDI (distinguished paper award), FCCM, ICRA, VEE, and ISPASS (best paper award). He is well known for his contributions to dynamic binary modification and translation for Arm. He leads a team of 9 PDRAs and 12 PhD students funded by two EPSRC projects, four H2020 projects, Oracle, and Arm.

Manino

Edoardo Manino (PDRA) is a Research Associate in the Department of Computer Science at the University of Manchester. He is part of the EnnCore project and focuses on automated verification of neural network architectures. His background is in Bayesian machine learning, a topic in which he was recently awarded a PhD by the University of Southampton. His other research interests range from network science to algorithmic game theory and reinforcement learning.

Carvalho

Danilo Carvalho (PDRA) is a Research Associate in the Department of Computer Science at the University of Manchester. He is part of the EnnCore project, focusing on safe and explainable AI architectures. His background is in Natural Language Processing / Computational Linguistics, and he holds a PhD in Information Science from the Japan Advanced Institute of Science and Technology (JAIST). He has previous industry experience, having worked as a systems analyst at the Brazilian state oil company Petrobras on job safety analysis (JSA) and environmental licensing control systems. His other research interests include Parallel and Distributed Computing and Software Engineering.

Dong

Yi Dong (PDRA) is a Research Associate in the Department of Computer Science at the University of Liverpool. He works on End-to-End Conceptual Guarding of Neural Architectures, Safety Arguments for Learning-enabled Autonomous Underwater Vehicles, and Foundations for Continuous Engineering of Trustworthy Autonomy. His research interests include Deep Reinforcement Learning, Probabilistic Verification, Explainable AI, and Distributed Optimisation.

Rozanova

Julia Rozanova is a part-time Research Associate and a PhD student in the Department of Computer Science at the University of Manchester. She works on the interpretability, testing, and design of neural models in Natural Language Processing. Her research interests include Natural Language Understanding, Computational Semantics, Natural Logic, and Model Interpretability Methods.