From Silicon Valley to the UN, the question of who bears the blame when AI fails is no longer an esoteric regulatory matter but an issue of geopolitical significance.
This week the Secretary-General of the United Nations raised a question central to the debate over the ethics and regulation of artificial intelligence: who should be held responsible when AI systems cause harm, discriminate, or spiral beyond human intentions?
These comments served as a clear warning to national leaders and tech industry executives that, as previously reported, AI's capabilities are outpacing regulation.
But it wasn't just the warning that was unusual; the tone was too. There was a sense of irritation, even desperation. If AI-powered systems are being used to make decisions about life and death, livelihoods, borders, and security, no one can simply shrug and say it's too complicated.
The Secretary-General said responsibility “should be shared between developers, implementers and regulators.”
This view resonates with long-held suspicions about unbridled technological power that have permeated UN deliberations on digital governance and human rights.
The timing matters. As governments scramble to regulate AI while the technology changes rapidly, Europe has already taken the lead, passing ambitious rules for high-risk AI products and setting a regulatory standard likely to serve as a beacon, or a cautionary tale, for other countries.
But let's be honest: regulations on paper will not change the power dynamics. The Secretary-General's words land in a world where artificial intelligence is already used for immigration screening, predictive policing, credit scoring, and military targeting.
Civil society groups warn that, absent accountability, artificial intelligence becomes the perfect scapegoat for human decisions with very human consequences: "the algorithm made me do it."
There is also a geopolitical issue that is rarely discussed: what happens when one country's AI explainability rules are inconsistent with a neighboring country's?
What happens when artificial intelligence crosses borders? Should we be talking about AI export controls? António Guterres, the UN Secretary-General, spoke of the need for universal guidelines on the development and use of artificial intelligence, as already exist for nuclear and climate regulation.
That is no easy task in a world where international relations and international agreements are unraveling, drifting toward wholesale deregulation.
My interpretation? This wasn't diplomacy. It was a line in the sand. The message wasn't complicated, even if the problem it addresses is: AI isn't immune from accountability just because it's smart, fast, or profitable.
There must be an entity answerable for its outcomes. And the longer the world takes to decide who that entity will be, the more painful and complex that decision becomes.