The Impact of Artificial Intelligence on Society: A Double-Edged Sword
The rapid advance of Artificial Intelligence has pushed the world to the brink of a technological revolution that will touch most of its eight billion people.
AI has been shown to save lives in the healthcare industry even as it raises the specter of killer robots and out-of-control nuclear weapons in the military. That contrast raises a question of crucial importance: will AI improve our existence, or does it pose an existential threat?
The answer is both, and the debate tends to pit techies against techies.
Although few will admit it, the tens of thousands of people who work on AI in the tech industry, which now employs more than nine million people at companies like Google, OpenAI and Anthropic, don’t themselves know how it will all play out.
Media coverage of AI has tended to focus on applications like ChatGPT, frequently used by students to write essays, and on AI-aided Internet postings that spread misinformation and disinformation. Then there are “deepfakes” that mimic the voice and appearance of a real person.
AI can mislead us.
Early this year, an AI-generated robocall mimicking U.S. President Joe Biden’s voice urged voters in New Hampshire not to vote in the state’s presidential primary.
The tempo of the debate on where AI will take mankind has accelerated sharply since a nonprofit organization little known outside the technology community, the San Francisco-based Center for AI Safety, issued a blunt, one-sentence statement a year ago.
It said: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war.”
That urgent call to take the potential impact of AI as seriously as nuclear war was signed by more than 350 researchers, engineers and top executives from the leading companies working in AI. The signatories included Geoffrey Hinton and Yoshua Bengio, two Canadian scientists often called godfathers of advanced AI for their pioneering work on artificial neural networks.
The parallel with the development of nuclear weapons rang alarm bells outside the tech world. Warren Buffett, the multibillionaire investor with a reputation for sage judgment, said at the annual meeting of his Berkshire Hathaway company: “We let the genie out of the bottle when we developed nuclear weapons. AI is somewhat similar — it’s part way out of the bottle.”
Political leaders and scientists around the world also heard the alarm bell.
Ignoring the dangers posed by AI
China and the United States, the two countries thought to have the largest array of AI tools, have paid relatively little public attention to the potential hazards of AI.
But on May 6, the Biden administration made a surprise announcement: U.S. and Chinese diplomats plan to begin what a New York Times analysis termed “the first, tentative arms control talks over the use of artificial intelligence.”
Britain took action much earlier. Just five months after the Center for AI Safety’s “risk of extinction” warning, the British government convened an AI summit attended by representatives of 28 countries at Bletchley Park, site of the World War II facility where British scientists broke the code Nazi Germany used for military communications.
The summit ended with a lengthy communiqué that noted the potential risks of AI, particularly in cybersecurity and biotechnology, and urged international cooperation and a global dialogue to better understand the technology’s impact on societies around the world.
Curiously, the Bletchley Declaration produced by the summit made no explicit mention of Artificial Intelligence in war, a sensitive subject that military leaders have been discussing through at least two decades of steadily accelerating progress on Lethal Autonomous Weapons, or LAWS, better known as killer robots.
Studying what’s at stake
In the last two days of April, a meeting convened by the Austrian government spelled out what is at stake in unambiguous terms. The conference, entitled “Humanity at the Crossroads: Autonomous Weapons Systems and the Challenge of Regulation,” brought together representatives of 143 countries along with participants from non-governmental and international organizations.
“Now is the time to agree on international rules and norms to ensure human control,” Austrian Foreign Minister Alexander Schallenberg told the meeting. “At least let’s make sure that the most profound and far-reaching decision, who lives and who dies, remains in the hands of humans and not of machines.”
“This is our generation’s ‘Oppenheimer moment’ where geopolitical tensions threaten to lead a major scientific breakthrough down a very dangerous path for the future of humanity,” said the summary at the end of the April 29-30 conference.
The reference was to J. Robert Oppenheimer, the U.S. physicist who led the project to develop the atomic bomb; the United States dropped the first two such bombs on the Japanese cities of Hiroshima and Nagasaki. A biographical movie on Oppenheimer broke box office records last summer and won seven Oscars in March.
The reference to life-and-death decisions remaining in the hands of humans reflects fears that artificial intelligence could give weapons systems the capability to make decisions themselves after processing information fed to them.
Developing safeguards
In October 2023, United Nations Secretary-General António Guterres and the president of the International Committee of the Red Cross, Mirjana Spoljaric Egger, called on political leaders to establish new international rules on autonomous weapons systems by 2026.
This is an extremely ambitious goal, more aspirational than realistic. It brings to mind the Nuclear Non-Proliferation Treaty, which entered into force in 1970 after years of arduous negotiations by a small army of experts, lawyers and government leaders. It was hailed as a success, and 191 countries are now party to it.
However, several nuclear-armed states remain outside it: India, Pakistan and Israel never joined, and North Korea withdrew from the treaty in 2003.
For a glimpse of the horrifying consequences of unchecked development of killer robots, the Future of Life Institute, which has branches in Belgium and the United States, has produced a mock sci-fi documentary that some experts say comes closer to reality than Hollywood movies such as The Terminator.
Donors to the institute’s work include Elon Musk, who gave $10 million. The billionaire entrepreneur’s interest in artificial intelligence stems from his ambition to eliminate flaws from Tesla’s AI-based self-driving cars, which have been involved in a number of fatal accidents.
The benefits of AI
Musk is particularly bullish on the future of AI. “We’ll have Artificial Intelligence that is smarter than any one human probably around the end of the year,” he said recently in an interview first reported by the Financial Times.
On the bright side, AI has been a blessing in a number of fields, in particular healthcare.
Using deep-learning algorithms, it has proved effective at detecting existing cancers and at predicting the development of cancers of the liver, rectum and prostate with 94% accuracy, according to new research by America’s Mount Sinai hospital group.
When you ask people in the United States what comes to mind when they hear the phrase Artificial Intelligence, the answer is more frequently “jobs” than “killer robots.”
Worries about the AI-driven technological revolution and its impact on the global economy are shared by deeply knowledgeable leaders in finance and economics.
Can we protect jobs?
Introducing a new analysis by the International Monetary Fund (IMF) early this year, its managing director, Kristalina Georgieva, called the findings “striking.”
“Almost 40% of global employment is exposed to AI,” Georgieva said. “Historically, automation and information technology have tended to affect routine tasks but one of the things that sets AI apart is its ability to impact high-skilled jobs.”
There are no estimates of how many of these jobs will disappear, or of how many high-skilled workers will benefit by using AI to complement their work and thus boost productivity. “In most scenarios,” the IMF found, “AI will likely worsen overall inequality.”
Other neutral voices from outside the technology community, such as Buffett, are taking a wait-and-see approach.
“It has enormous potential for good and enormous potential for harm,” Buffett said when asked early in May 2024 how he saw AI. “And I just don’t know how that plays out.”
Overall, the rapid advance of Artificial Intelligence remains a double-edged sword, offering society enormous benefits while posing serious risks, and it will take concerted effort by global leaders to harness the benefits while containing the risks.