EU Seeks 'Responsible' AI To Dispel Big Brother Fears
The EU unveiled its strategy for artificial intelligence on Wednesday as it seeks to catch up with China and the US and dispel fears of Big Brother-like control.
The EU said building trust would be a guiding principle, with higher-risk uses of AI in health, security or transport facing stricter demands on transparency and human oversight. Lower-risk applications would be largely left alone.
The other ambition will be to offer companies and universities access to the mountain of data that drives AI -- with the bloc considering forcing tech giants to share data or face sanctions.
"We want the application of these new technologies to deserve the trust of our citizens," European Commission President Ursula von der Leyen told reporters.
"This is why we are promoting a responsible, human-centric approach to artificial intelligence."
EU officials are eager to set the rules for AI and promote European champions, acknowledging that Europe and its companies have been outflanked by Silicon Valley's Google, Facebook and Apple, as well as Chinese players such as Tencent.
"It's not us that need to adapt to today's platforms. It's the platforms that need to adapt to Europe," the EU's Industry Commissioner Thierry Breton told a news conference.
"The battle for industrial data starts now and Europe will be the main battlefield. Europe has everything it needs to be a leader."
The proposals are the first step in a long road to legislation, with Brussels hoping for draft laws by the end of the year.
But the far-ranging plans will face furious lobbying from corporate giants and governments and will require ratification by the European Parliament.
"Artificial intelligence is not good or bad in itself. It all depends on why and how it is used," said the EU Commission's executive vice president on digital policy, Margrethe Vestager.
The commission, the EU's executive arm, will seek to repeat the impact of the GDPR -- its General Data Protection Regulation, which has become a global standard.
Corporate lobbies welcomed the hands-off approach to lower-risk applications of AI, relieved that Brussels was stepping back from blanket regulation.
"We support the targeted and risk-based approach," said Cecilia Bonefeld-Dahl, the head of DigitalEurope, a tech lobby.
"It will be important to keep new regulation focused and limited to truly high-risk cases."
Christopher Padilla, an IBM vice president, urged "precision regulation" that applied "different rules for different levels of risk".
This would ensure "businesses and consumers have trust in technology", he said.
EU officials stopped short of proposing curbs on facial recognition, one of the most controversial applications of artificial intelligence.
For now, they said, existing legislation already limits its use, but the bloc will open a debate to determine in which circumstances European citizens would accept it.
© Copyright AFP 2024. All rights reserved.