TLDR: A position paper argues that AI innovation must be coupled with robust regulation to prevent systematic violations of fundamental rights. Drawing on historical precedents and the EU AI Act, the authors contend that regulation is not a hindrance but a foundation for responsible AI, fostering trust, mitigating harms like bias and misinformation, and driving ethical competitiveness through mechanisms like regulatory sandboxes, impact assessments, and transparency.
Artificial intelligence (AI) has become an integral part of our daily lives, influencing everything from critical infrastructure to major decision-making systems. However, its rapid advancement has sparked a crucial debate: can AI truly be considered innovative if it systematically infringes upon fundamental human rights?
A recent position paper, titled “Position Paper: If Innovation in AI Systematically Violates Fundamental Rights, Is It Innovation at All?”, challenges the long-held belief that regulation and innovation are opposing forces. Authored by Josu A. Eguiluz, Axel Brando, Migle Laukyte, and Marc Serra-Vidal, the paper argues that thoughtful and adaptive regulation is not a barrier but rather the very foundation for sustainable innovation in AI. For a deeper dive into their arguments, see the full paper.
Learning from History: Regulation as a Catalyst for Progress
The authors draw parallels from other high-risk sectors to illustrate their point. Industries like aviation and pharmaceuticals, which once faced high accident rates and devastating scandals (such as the thalidomide tragedy), saw dramatic improvements in safety and public trust only after the implementation of stringent regulations. These frameworks, far from stifling progress, fostered significant advancements and public confidence. Similarly, the paper highlights the dangers of unregulated AI through cases like the Dutch SyRI scandal, where an opaque AI-driven welfare system violated human rights due to bias and lack of transparency.
This historical perspective underscores what is known as Collingridge’s Dilemma: it’s difficult to foresee all risks in a technology’s early stages, but once it’s deeply embedded, meaningful alteration becomes incredibly costly or impossible. For AI, where consequences can scale rapidly, proactive governance is essential to prevent irreparable harm.
The Tangible Risks of Unregulated AI
The paper provides compelling evidence that the risks of deregulated AI are not hypothetical. We’ve already seen instances of large-scale disinformation, such as deepfake audio influencing elections, and the proliferation of synthetic non-consensual content. Beyond information harms, AI systems have been sanctioned for bias and unaccountable decision-making in critical areas like housing, credit, welfare, and education, disproportionately affecting vulnerable groups.
Bias in AI systems often stems from historical data that reflects existing societal inequalities, leading to the perpetuation and amplification of discrimination. Furthermore, a lack of clear accountability mechanisms in AI-driven decisions means individuals struggle to challenge or seek redress for harms, especially in high-stakes scenarios like medical diagnoses or criminal proceedings. The concept of “moral outsourcing,” where ethical responsibility is shifted from human decision-makers to AI systems, further exacerbates this problem.
The EU AI Act: A Model for Responsible Innovation
The paper presents the EU AI Act as a pioneering example of risk-based, responsibility-driven regulation. While some critics view its requirements as burdensome, the Act is designed to foster responsible AI innovation through several adaptive mechanisms:
- Regulatory Sandboxes: These controlled environments allow developers to train, test, and validate AI systems under regulatory supervision, providing legal certainty and mitigating risks without immediate fines for good-faith experimentation. Spain has already launched its first national sandbox, offering practical insights.
- Support for SMEs and Start-ups: Recognizing the potential compliance costs, the Act offers priority access to sandboxes, targeted training, dedicated communication channels, and simplified quality management systems for smaller businesses.
- Competitive Advantages: Far from being a hindrance, structured regulation can build consumer trust, drive compliance-driven innovation (like watermarking for AI-generated content), and establish ethical leadership in the global market. Companies that proactively align with ethical and legal standards can gain a significant competitive edge.
Operationalizing Responsible AI: Transparency, Impact Assessments, Accountability, and Literacy
The EU framework also emphasizes key governance tools that translate legal obligations into actionable practices:
- Transparency: Mandating structured documentation ensures that AI systems can be scrutinized, outcomes contested, and redress sought. This fosters trust by making AI decision-making processes more comprehensible.
- Impact Assessments: The Fundamental Rights Impact Assessment (FRIA) requires an ex-ante evaluation of potential interferences with fundamental rights, shifting governance from reactive compliance to anticipatory design.
- Accountability Mechanisms: Human oversight, traceability, and reversibility are crucial. The Act ensures that accountability is not just procedural but actionable, with provisions for complaints and whistleblower protections.
- AI Literacy: Beyond technical fluency, AI literacy equips individuals and institutions with the skills and understanding to deploy and scrutinize AI responsibly, turning transparency into understanding and accountability into institutional memory.
A New Definition of Progress
Ultimately, the paper concludes that true innovation in AI must be synonymous with responsible innovation. Progress should not come at the expense of human dignity or fundamental rights. The EU AI Act, with its comprehensive and adaptive framework, demonstrates how regulation can serve as a catalyst for technological ambition, ensuring that AI’s future is defined not just by its speed of invention, but by the integrity of its governance and its alignment with democratic values.