By Bjørn Aslak Juliussen
On May 7th the EU legislators reached an agreement on changes to Regulation 2024/1689 (the AI Act). The AI Act omnibus deal marked the end of a six-month negotiation process, part of a broader simplification effort in the aftermath of the Draghi Report on European global competitiveness. Among other things, the omnibus deal postpones the entry into force of the requirements for high-risk AI systems from 2026 to 2027 and 2028. Moreover, the deal changes the conditions for using special categories of personal data under the GDPR for bias detection and mitigation in AI systems, and alters the AI literacy obligation for AI system providers and deployers.
In one regard, the AI Act omnibus deal provides new and more stringent rules on AI systems: both AI systems functioning as nudifier tools and systems producing child sexual abuse material (CSAM) are now included among the prohibited AI systems in the AI Act.
AI system development and deployment is characterized by a fast-evolving technology used in societal areas with potentially disruptive effects. With the agreement of the AI Act omnibus deal, the regulation of AI systems could also be described as a moving target. Enacted EU regulations are rarely changed before they enter into force, but this omnibus deal is a recent example of such a change. When both the object under regulation is evolving fast and the regulations themselves are subject to change, a pertinent question arises: how can AI system providers and deployers of high-risk AI systems under the EU acquis keep up?
In the recently published book Compliance By Design in AI Systems: Shifting Compliance in AI System Development and Deployment Left I argue that to comply with regulations when developing and deploying a rapidly evolving and potentially self-learning technology, regulatory compliance should be viewed as part of the development process itself. The book examines the research question of whether it is possible to comply with the GDPR, the AI Act, and relevant cybersecurity regulations concurrently when developing and deploying high-risk AI systems. In examining this research question, the book analyses the relevant regulations across three components: the data component, the model component, and the AI-as-a-Service (AIaaS) component.
The data component addresses specific data protection, cybersecurity, and AI Act issues that arise when collecting training data and applying it to train an AI model.
The model component focuses on the same three legal topics when applying a trained AI model to draw inferences in a deployed setting.
The last component focuses on legal constraints when an AI model is applied in a commonly used business model known as AIaaS, where an AI model is rented from a platform provider and run on a cloud computing platform. Key legal issues in such a setting include controllership status under the GDPR and AI governance issues concerning AI infrastructure and data sharing.
‘Bjørn Aslak Juliussen’s Compliance by Design in AI Systems is a sophisticated and practical framework for embedding GDPR, AI Act, and cybersecurity compliance directly into AI development. By structuring analysis across data, model, and AIaaS phases and introducing “proportionality by design,” it provides a rigorous yet actionable framework for trustworthy AI.’
– Andrej Savin, Copenhagen Business School, Denmark
All three components of the book, to some extent, also focus on whether and how identified legal aspects could be implemented in a methodology applied for AI development and lifecycle management known as Machine Learning Operations (MLOps). The analyses in the book are, thus, not solely legal, but the book also examines whether it is possible to implement the identified legal requirements in a methodology used to develop and deploy AI systems.
However, as evident from the recent EU AI Act omnibus deal, the AI landscape is also characterized by changing regulations. The GDPR, another central regulation behind the legal analyses in the book, also needs to be interpreted in an evolving legal landscape with new guidance from the EDPB and in line with new judgments from the CJEU. In order to attempt to describe a compliance strategy that does not need to be revised quarterly, the legal analysis in the book is done through a fundamental rights lens grounded in seminal judgments from the CJEU and the ECtHR regarding the fundamental right to data protection and the right to a private life.
The book also develops a concept named “proportionality by design” as a methodological framework for the development and deployment of AI systems that process personal data. The book describes the concept as an integrated approach where both the means (development and use of the AI system) and the ends (intended purposes or inferences made by the AI model) are continuously evaluated in terms of proportionality, aligning with the risk-based frameworks of the GDPR and the AI Act. The book positions proportionality by design as a multifunctional tool, serving as an explanatory taxonomy, an interpretative framework, and a foundation for best practices in AI governance. It further distinguishes between negative and positive obligations, applying respectively to public authorities and private organisations. Finally, the book examines the application of constitutional proportionality review in AI regulation.
In a world where key actors intend to “move fast and break things”, some societal values should be viewed as unbreakable. AI system development and deployment should promote human dignity and fundamental rights, not serve as a data-hungry surveillance tool. The book argues that moving central legal analyses left in the development process could be a method to promote such fundamental rights while ensuring that the technology functions.
Bjørn Aslak Juliussen is a member of the Faculty of Law at UiT The Arctic University of Norway

Compliance by Design in AI Systems is available in hardback and eBook here.
