Several experts worldwide called this week for a moratorium on the development of powerful artificial intelligence (AI) systems to ensure their safety for humanity. The open letter has been signed by more than a thousand people, including Elon Musk, Apple co-founder Steve Wozniak, and the Estonian Skype co-founder Jaan Tallinn.
The letter states, “AI systems with the ability to compete with humans could pose a profound danger to society and humanity.” It also says that AI developers must work with policymakers to dramatically accelerate oversight of AI systems and large data sets, and that provenance and labelling systems are needed to help distinguish real content from synthetic content.
The European Union has been preparing an AI regulation (the AI Act) for two years, which aims to regulate the development and use of AI. The regulation is expected to come into effect in the second half of 2025.
Most people probably did not anticipate that AI development would be so rapid and that the need for regulations would become so critical. However, will the planned EU regulation help alleviate the fears and concerns mentioned in the public letter?
Partially, yes. For example, companies deploying AI will be required to disclose that their service is AI-driven. This will apply to systems that:
i) communicate with people,
ii) are used for emotion recognition based on biometric data or to determine social categories, or
iii) create or modify content (deepfakes).
Additionally, people must be informed if they are interacting with an AI system or if their emotions or traits are being detected using automated means.
The EU will prohibit AI systems that aim to distort human behaviour in a way likely to cause physical or psychological harm. Similarly, AI systems that use subliminal techniques operating below the threshold of conscious perception, or that exploit vulnerabilities related to age or physical or mental ability, will also be prohibited.
The AI regulation will also require thorough testing and certification of high-risk AI products, meaning that AI developers must provide evidence that such products are safe and reliable.
The letter expresses concern about non-human intelligence that could eventually surpass, outsmart, and replace humans, and even about the collapse of civilisation. The AI regulation will not provide a solution for these concerns; that task falls to someone else.