
AI usage on social media has the potential to impact voter sentiment

Cointelegraph

The U.S. presidential election is nearing, and with it comes the use of technologies such as AI on social media platforms to manipulate voter sentiment.

The use of artificial intelligence (AI) on social media has been identified as a potential threat that could sway voter sentiment in the upcoming 2024 presidential election in the United States.

Major tech companies and U.S. governmental entities have been actively monitoring the situation surrounding disinformation. On Sept. 7, the Microsoft Threat Analysis Center, a Microsoft research unit, published a report claiming “China-affiliated actors” are leveraging the technology.

The report says these actors utilized AI-generated visual media in a “broad campaign” that heavily emphasized “politically divisive topics, such as gun violence, and denigrating U.S. political figures and symbols.”

The report anticipates that China “will continue to hone this technology over time,” though it remains to be seen how it will be deployed at scale for such purposes.

On the other hand, AI is also being employed to help detect such disinformation. On Aug. 29, Accrete AI was awarded a contract by the U.S. Special Operations Command to deploy artificial intelligence software for real-time prediction of disinformation threats from social media.

Prashant Bhuyan, founder and CEO of Accrete, said that deep fakes and other “social media-based applications of AI” pose a serious threat.

“Social media is widely recognized as an unregulated environment where adversaries routinely exploit reasoning vulnerabilities and manipulate behavior through the intentional spread of disinformation.”

In the previous U.S. election in 2020, troll farms reached 140 million Americans each month, according to MIT. 

A troll farm is an “institutionalized group” of internet trolls that aims to interfere with political opinions and decision-making.

Related: Meta’s assault on privacy should serve as a warning against AI

Regulators in the U.S. have been looking at ways to regulate deep fakes ahead of the election. 

On Aug. 10, the U.S. Federal Election Commission unanimously voted to advance a petition that would regulate political ads using AI. One of the commission members behind the petition called deep fakes a “significant threat to democracy.”

Google announced on Sept. 7 that it would update its political content policy in mid-November 2023 to make AI disclosure mandatory for political campaign ads.

It said the disclosures will be required where there is “synthetic content that inauthentically depicts real or realistic-looking people or events.”


Magazine: Should we ban ransomware payments? It’s an attractive but dangerous idea

