SHASAI Project Strengthens AI Security Across the Lifecycle
The EU-funded SHASAI project aims to protect AI systems against cybersecurity threats from design through real-world deployment. Supported by Horizon Europe, SHASAI focuses on improving the security, resilience, and trustworthiness of AI technologies as cyber risks grow.
“SHASAI tackles AI cybersecurity as a full lifecycle challenge, not a set of disconnected solutions,” said Leticia Montalvillo Mendizabal, Cybersecurity Researcher at IKERLAN and Project Coordinator. “By combining secure hardware and software, risk-driven engineering, and real-world validation, we help organisations deploy AI systems that are innovative, resilient, and compliant with European rules.”
Real-World Scenarios for Testing AI Security
AI systems are increasingly complex and interconnected, making them vulnerable to cyber-attacks and failures with real-world consequences.
SHASAI will validate its methods in three practical scenarios:
- Agrifood sector: AI-enabled cutting machines
- Healthcare: Eye-tracking systems for assistive technologies
- Mobility: Tele-operated last-mile delivery vehicles
These cases allow researchers to test solutions across different sectors and demonstrate that the methods can be transferred to other AI applications.
Building a Robust and Trustworthy Security Architecture
The project will develop adaptive and reliable AI security solutions, ensuring that AI systems remain resilient, traceable, and compliant with evolving cybersecurity standards, even in high-risk environments.
Supporting Europe’s Trustworthy AI Goals
SHASAI translates cybersecurity and AI safety principles into practical technical solutions. It aligns with major EU initiatives, including:
- The EU AI Act
- The Cyber Resilience Act (CRA)
- The NIS2 Directive
- The EU Cybersecurity Strategy
The consortium brings together universities, research organisations, industry, and technology providers. SHASAI began on 1 November 2025 and will run until April 2029.
