AI Risks 'Disaster' Without 'Cast-iron Guarantees': Expert
Artificial intelligence (AI) systems must come with "cast-iron guarantees" against mass harm to humans, especially as the likelihood of their integration into weapons grows, a leading expert has told AFP.
Stuart Russell, a Berkeley computer science professor and co-director of the International Association for Safe and Ethical AI (IASEAI), will be in Paris Thursday for scientific talks in the run-up to a global summit on AI technology, to be held February 10-11.
US tech giant Google appears to have walked back its commitment to avoid working on AI-powered weapons and surveillance systems. What was your reaction?
Stuart Russell:
Now (Google) says they're willing to override the views of their employees, as well as the views of the vast majority of the public, who are also opposed to the use of AI in weapons.
Why might Google have made this change?
SR:
It's not a coincidence that this change in policy comes with a new administration that has removed all the regulations on AI put in place by the Biden administration and is now placing a huge emphasis on the use of AI for military prowess.
What are the main dangers of using AI in weapons?
SR:
(Such weapons) could be used in much more dangerous and harmful ways. For example, "kill anyone who fits the following description". And that description could be by age, by gender, by ethnic group, by religious affiliation, or even a particular individual.
Will AI be increasingly integrated into future weapons systems?
SR:
Ukraine has been an accelerator... that conflict has forced these weapon systems to evolve very quickly. And everyone else is looking at this.
It's quite possible that the next major conflict after Ukraine will be fought largely with autonomous weapons in a way that is currently unregulated. So we can only imagine the kinds of devastation and horrific impacts on civilians that might occur as a result.
But on the other hand, there are more than 100 countries that have already stated their opposition to autonomous weapons. And I think there's a good chance that we'll achieve the necessary majority in the United Nations General Assembly to have a resolution calling for a ban.
Should AI in general be more tightly regulated?
SR:
Governments must require cast-iron guarantees in the form of either statistical evidence or mathematical proof that can be inspected, that can be checked carefully. And anything short of that is just asking for disaster.
© Copyright AFP 2025. All rights reserved.