If you search for "shrimp Jesus" on Facebook, you might encounter dozens of images of artificial intelligence (AI)-generated crustaceans meshed in various forms with a stereotypical image of Jesus Christ.
Some of these hyper-realistic images have garnered more than 20,000 likes and comments. So what exactly is going on here?
The "dead internet theory" has an explanation: AI and bot-generated content has surpassed the human-generated internet. But where did this idea come from, and does it have any basis?
What is the dead internet theory?
The dead internet theory essentially claims that activity and content on the internet, including social media accounts, are mostly being created and automated by AI agents.
These agents can rapidly create posts alongside AI-generated images designed to farm engagement (clicks, likes, comments) on platforms such as Facebook, Instagram and TikTok. In the case of shrimp Jesus, it appears AI has learned it's the latest, weirdest combination of absurdity and religious imagery that drives engagement.
The dead internet theory goes even further, though. Many of the accounts that engage with such content also appear to be operated by AI agents. This creates a vicious cycle of artificial engagement, one that is entirely synthetic and has no clear agenda.
Harmless engagement farming or sophisticated propaganda?
At first glance, the motivation for these accounts' efforts to generate interest may appear obvious: social media engagement leads to advertising revenue. If a person sets up an account that receives high engagement, they may earn a share of advertising revenue from social media organisations such as Meta.
So, does the dead internet theory stop at harmless engagement farming? Or perhaps beneath the surface lies a sophisticated, well-funded attempt to support authoritarian regimes, attack opponents and spread propaganda?
While the shrimp Jesus phenomenon may seem harmless (albeit bizarre), there is potentially a longer-term ploy at hand.
As these AI-driven accounts grow in followers (many fake, some real), the high follower count legitimises the account to real users. This means an army of accounts is being created out there. Accounts with high follower counts could then be deployed by whoever is willing to pay the most for them.
This is important because social media is now the primary news source for many users around the world. In Australia, 46% of 18 to 24-year-olds nominated social media as their main source of news last year. This is up from 28% in 2022, overtaking traditional outlets such as radio and TV.
Bot-fuelled disinformation
There is already strong evidence that social media is being manipulated by these inflated bot armies to sway public opinion with disinformation, and it has been happening for years.
In 2018, a study analysed 14 million posts over a ten-month period in 2016 and 2017. It found that bots on social media were significantly involved in disseminating articles from unreliable sources. Accounts with high numbers of followers were legitimising misinformation and disinformation, leading real users to believe, engage with and reshare bot-posted content.
This approach to social media manipulation has also been found in the wake of mass shooting events in the United States. A 2019 study found that bot-generated posts on X (formerly Twitter) heavily contributed to public discussion, serving to amplify or distort potential narratives around these extreme events.
More recently, several large-scale, pro-Russian disinformation campaigns have aimed to undermine support for Ukraine and promote pro-Russian sentiment.
Uncovered by activists and journalists, these coordinated efforts used bots and AI to create and spread false information, reaching millions of social media users.
Using more than 10,000 bot accounts on X alone, the campaign rapidly posted tens of thousands of messages of pro-Kremlin content, attributed to US and European celebrities seemingly supporting the ongoing war against Ukraine.
This scale of influence is significant. Some reports have even found that bots made up nearly half of all internet traffic in 2022. With recent advances in generative AI, such as OpenAI's ChatGPT models and Google's Gemini, the quality of fake content will only improve.
Social media organisations are attempting to address the misuse of their platforms. Notably, Elon Musk has explored charging X users for membership, in an attempt to deter bot farms.
Social media giants are capable of removing large amounts of detected bot activity, if they so choose. (Bad news for our friendly shrimp Jesus.)
Keep the dead internet theory in mind
The dead internet theory is not really claiming that most of your personal interactions on the internet are fake.
It is, however, an interesting lens through which to view the internet: the idea that the internet we knew and loved is "dead", no longer made by humans, for humans.
The freedom to create and share our thoughts on the internet and social media is what made it so powerful. Naturally, it is this power that bad actors are seeking to control.
The dead internet theory serves as a reminder to be skeptical and to use critical thinking when using social media and other websites.
Any interaction, trend, and especially "overall sentiment" could very well be synthetic, designed to subtly shift the way you perceive the world.
Vlada Rozova is a Research Fellow in Applied Machine Learning at The University of Melbourne, and Jake Renzella is a Lecturer and Director of Studies (Computer Science) at UNSW Sydney.
This article was republished from The Conversation under a Creative Commons license. Read the original article.