From 2 August 2025, providers of general-purpose AI (GPAI) models in the EU must comply with the AI Act, which aims to ensure safe and ethical AI use through a risk-based regulatory framework. Key provisions require up-to-date technical documentation and summaries of training data. Legal experts are concerned that the legislation's vagueness could lead to penalties for unintentional non-compliance. Critics also argue that the Act does too little to address bias and harmful AI outputs, and that its demanding requirements fall particularly hard on tech startups.
The AI Act outlines EU-wide measures aimed at ensuring that AI is used safely and ethically. It establishes a risk-based approach to regulation that categorises AI systems based on their perceived level of risk to and impact on citizens.
In theory, 2 August 2025 should be a milestone for responsible AI. In practice, it's creating significant uncertainty and, in some cases, real commercial hesitation.
Provisions on GPAI models require providers to maintain up-to-date technical documentation and summaries of training data, while the legislation's lack of clarity exposes GPAI providers to the risk of IP leaks and penalties.
Ambiguity in the AI Act creates issues for GPAI providers as disclosing too much detail could risk revealing valuable IP or triggering copyright disputes.