AI tech 'more dangerous than an AR-15,' can be twisted for 'malevolent power,' expert warns

The accessibility of artificial intelligence technology will be the most significant factor in how the technology reshapes world politics and conflict.

The accessibility of artificial intelligence (AI) will change the international landscape to empower "bad actor" strongman regimes and lead to unprecedented social disruptions, a risk analysis expert told Fox News Digital.

"We know that when you have a bad actor, and all they have is a single-shot rifle as opposed to an AR-15, they can't kill as many people, and the AR-15 is nothing compared to what we are going to see from artificial intelligence, from the disruptive uses of these tools," said Ian Bremmer, founder and president of political risk research firm Eurasia Group. 

Citing improved capabilities for autonomous drones and the ability to engineer new viruses, among other threats, Bremmer said that "we've never seen this level of malevolent power that will be in the hands of bad actors." He said AI technology that is "vastly more dangerous than an AR-15" will be in the hands of "millions and millions of people."

"Most of those people are responsible," Bremmer said. "Most of those people will not try to disrupt, to destroy, but a lot of them will."


The Eurasia Group earlier this year published a series of reports outlining the top risks for 2023, listing AI at No. 3 under "Weapons of Mass Disruption." The group listed "Rogue Russia" as the top risk for the year, followed by "Maximum Xi [Jinping]," with "Inflation Shockwaves" and "Iran in a Corner" ranked just behind AI, a placement that underscores how severe the group believes the AI risk to be.

Bremmer said he is an AI "enthusiast" and welcomes the great changes the technology could create in health care, education, energy transition and efficiency, and "just about any scientific field you can imagine" over the next five to 10 years. 

He highlighted, though, that AI also presents "immense danger" with great potential for increased misinformation and other negative effects that would "propagate … in the hands of bad actors."

For example, he noted, there are currently only about "a hundred people in the world with the knowledge and technology to create a new smallpox virus," but similar knowledge or capabilities might not remain so guarded with the potential of AI.


"There is no pause button," Bremmer said. "These technologies will be developed, they will be developed quickly by American firms and will be available very widely, very, very soon."

"There is no one specific thing that I’m saying, ‘Oh, the new nuclear weapon is X,’ but it’s more that these technologies are going to be available to almost anyone for very disruptive purposes," he added.

[NOTE: If you ask ChatGPT how to make smallpox, it refuses, saying it cannot assist because creating or distributing harmful viruses, or engaging in any illegal or dangerous activity, is strictly prohibited and unethical.]

A number of experts have already discussed the potential for AI to strengthen rogue actors and nations with more totalitarian governments, such as Iran and Russia. Yet technology has in recent years also played a key role in allowing protesters and anti-government groups to make strides against their oppressors.


Through encrypted messaging apps such as Telegram and Signal, protesters have been able to organize and demonstrate against their governments. China was unable to stop the flood of video showing protests in various cities as residents grew fed up with the government's "zero COVID" policies, prompting Beijing to flood Twitter with posts about porn and escorts in an attempt to bury unfavorable news.

Bremmer is skeptical that the technology will prove as helpful for the underdog, arguing instead that it can aid opposition movements where a government is "weak" but will prove dangerous "in places where governments are strong."

"Remember, the Arab Spring failed," Bremmer said. "It was very different from the revolutions we saw before that in places like Ukraine and Georgia, and part of the reason for that is because governments in the Middle East were able to use surveillance tools to identify and then punish those that were involved in opposing the state."

"So, I do worry that in countries like Iran and Russia and China, where the government is comparatively strong and has the ability to actually, effectively surveil their population using these technologies, the top-down properties of AI and other surveillance technologies will be stronger in the hands of a few actors than it will be in the hands of the average citizen."

"The communications revolution empowered people and democracies at the expense of authoritarian regimes," he continued. "The data revolution, the surveillance revolution, which I think actually is expanded by AI, actually empowers technology companies and governments that have access to and control of that data, and that's a concern."
