DIGITAL LIFE

The internet is the land of robots
If you enjoy browsing the web for content that distracts you, tells you about a brand, or keeps you up to date on the latest news or gossip, it's quite possible you're reading material made by robots. And that doesn't make you different from anyone else. Today, a significant share of the text, video, and even music consumed online is produced by Artificial Intelligence (AI).
In October 2025, a study by Graphite, a company that builds AI-assisted developer tools, indicated that the volume of English-language text content produced by tools like ChatGPT had already surpassed human-generated content. Analyzing 65,000 randomly selected URLs published between January 2020 and May 2025, researchers ran a detector to identify which pages came from AI. In total, these represented 52%, against 48% of texts written by humans.
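In outline, an estimate like this comes down to sampling pages and running a detector over each one. The sketch below is a minimal illustration of that pipeline only; the `looks_ai_generated` placeholder is an assumption of ours, and the study's actual classifier is proprietary and not reproduced here.

```python
import random

def looks_ai_generated(text: str) -> bool:
    """Stand-in for a trained AI-text detector. This keyword test is
    illustrative only; real detectors use trained classifiers."""
    return "as an ai language model" in text.lower()

def ai_share(pages: list[str], sample_size: int = 1000) -> float:
    """Estimate the fraction of AI-flagged pages from a random sample."""
    sample = random.sample(pages, min(sample_size, len(pages)))
    flagged = sum(looks_ai_generated(p) for p in sample)
    return flagged / len(sample)

# Against a crawl of ~65,000 pages, a return value of 0.52 would
# correspond to the study's reported 52% AI share.
```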
Other data reinforce this picture. Originality, a company that sells AI-detection tools, analyzed 8,795 LinkedIn posts published between January 2018 and October 2024 and detected a 189% increase in AI-written posts after the launch of ChatGPT. By the end of 2024, 54% of long-form posts on the platform were written by AI. Again, the study covered English-language texts.
On Medium, a blogging site also popular in Brazil, the amount of AI-generated content increased from less than 2% to 37% between 2022 and 2024, according to a study by the University of Hong Kong published in 2025.
The proliferation of AI-generated content has become a headache for the platforms, which are trying to contain it. In September 2025, Spotify imposed new rules that include more powerful spam filters and a ban on artist impersonation, as well as a requirement that musicians be transparent when they use AI.
This is a serious problem, and perhaps an impossible one to solve. A Guardian report found that 10% of the fastest-growing YouTube channels publish only AI-made videos. In November, TikTok revealed that there are more than 1 billion synthetic videos, created by robots, on the platform, a consequence of tools that have made producing an AI video trivial, such as OpenAI's Sora and Google's Veo 3. To try to stem this flood, TikTok promised to let users decide how much synthetic content they want to see. But who will spend time reading the fine print and managing these permissions?
The new phenomenon even has a name in English: "AI slop," meaning low-quality synthetic content. Originally a term for the mixture of food scraps fed to pigs, it now plays the role that "spam" once did for email chains and junk mail. Not surprisingly, "slop" was chosen as the word of 2025 by the Merriam-Webster dictionary and the American Dialect Society.
The robotization of content on digital platforms is thus already driving a substantial change in how they are used. The first shift concerns the "social" aspect: in the future, no one will visit social networks to see friends. For that, private conversation tools such as messaging apps, which depend less on algorithmic curation, should endure.
After all, having human friends still matters.
Second, everyone will know that what appears on social networks may be a lie or "rubbish"; credibility tends to fall, and with it even the harmful potential of large waves of fake news may shrink.
Third, people will look elsewhere for trustworthy information. And then, of course, we return to good old journalism.
TV channels, newspapers, and websites that demonstrate independence, reputation, and the ability to show how they arrived at the facts they report will become a valuable commodity. The same should happen with original artistic and cultural creation: musicians, filmmakers, playwrights, and researchers who write essays with original perspectives. Human content, originality, and never-before-seen insights will be prized.
How they reach the public is another story. Today, most communication between news outlets or artists and their audiences is mediated by Big Tech. Subscription-based solutions, such as the paid model adopted by streaming services, could be a way out, provided they are well regulated to guarantee compensation for those who create knowledge and a quota of local content. But radio, TV, offline experiences, direct interaction, and even reading on paper could enjoy a renaissance in the age of AI-fueled internet chaos.
How the internet and its robots are sabotaging scientific research
Just a few decades ago, researchers in psychology and health always needed to interact with people in person or by phone. At worst, they mailed questionnaires and waited for handwritten responses.
Thus, we either met research participants in person or had several pieces of evidence corroborating that we were dealing with a real person, who would therefore likely tell us the truth about themselves.
Since then, technology has done what it always does: created opportunities to cut costs, save time, and reach a larger pool of participants via the internet. What most people haven't fully realized is that internet research brought with it risks of data corruption and identity fraud, sometimes deliberately aimed at compromising scientific research projects.
What most excited scientists about internet research was the ability to reach people we could not normally involve in studies. For example, as internet access became more affordable, poorer people were able to participate, as could those from rural communities who might be many hours and several modes of transportation away from our laboratories.
Technology then leaped forward in a very short period of time. The democratization of the internet opened it up to more and more people, and Artificial Intelligence (AI) grew in penetration and technical capacity. So, where are we now?
As members of an international interest group that analyzes fraud in research (Fraud Analysis in Internet Research, or FAIR), we find that it is now harder than ever to identify whether someone is real. There are companies that scientists can pay to recruit participants for internet research; these companies, in turn, pay the participants.
Although these companies have checks and balances to reduce fraud, it is probably impossible to eradicate it completely. Many people live in countries where the standard of living is low but the internet is available. By signing up to “work” for one of these companies, they can earn a reasonable amount of money, possibly more than in jobs involving hard labor and long hours in unhealthy or dangerous conditions.
That isn't a problem in itself. But there will always be a temptation to maximize the number of studies one can join, and one way to do so is to pretend to be relevant and eligible for as many studies as possible. The system is likely being gamed, and some of us have already seen indirect evidence of it (participants reporting an extraordinarily high number of concurrent illnesses, for example).
It is neither feasible nor ethical to demand medical records, so we trust that a person with heart disease in one study is also eligible for a cancer study because they also report cancer, in addition to anxiety, depression, blood disorders, or migraines, and so on. Or all of these at once. Without requiring medical records, there is no easy way to exclude such people.
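One partial defense is a plausibility screen over self-reported conditions. The sketch below is a minimal illustration under assumptions of ours: the record layout and the threshold of five concurrent illnesses are hypothetical, not FAIR practice, and any real cutoff would need calibrating against prevalence data.

```python
from dataclasses import dataclass, field

@dataclass
class Participant:
    pid: str
    conditions: set[str] = field(default_factory=set)

def flag_implausible(participants: list[Participant],
                     max_conditions: int = 5) -> list[str]:
    """Return IDs of respondents reporting more concurrent illnesses
    than the (assumed) plausibility threshold allows."""
    return [p.pid for p in participants if len(p.conditions) > max_conditions]

cohort = [
    Participant("p1", {"heart disease"}),
    Participant("p2", {"heart disease", "cancer", "anxiety", "depression",
                       "blood disorder", "migraine"}),
]
print(flag_implausible(cohort))  # ['p2']: a candidate for manual review
```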
More insidiously, there will also be people who use other individuals to circumvent the system, often against their will. Only now are we beginning to consider the possibility of this new form of slavery, the extent of which is largely unknown.
Bots
Similarly, we are seeing the emergence of robots (bots) that pretend to be participants, answering questions in increasingly sophisticated ways. A single programmer can fabricate multiple identities, not only making a lot of money from studies but also seriously compromising the science we are trying to do (which is very worrying when studies are open to political influence).
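Screening for fabricated identities usually starts with cheap heuristics. The sketch below is ours, not published FAIR tooling; the field names and the 60-second threshold are assumptions. It flags respondents who finish implausibly fast or whose free-text answers are duplicated verbatim across supposedly different people.

```python
from collections import Counter

def flag_suspects(responses: list[dict], min_seconds: float = 60.0) -> set[str]:
    """Flag respondents who finished implausibly fast or whose free-text
    answer is duplicated verbatim across 'different' identities."""
    texts = Counter(r["free_text"].strip().lower() for r in responses)
    suspects = set()
    for r in responses:
        if r["duration_s"] < min_seconds:               # too fast to be human
            suspects.add(r["pid"])
        if texts[r["free_text"].strip().lower()] > 1:   # copy-pasted answer
            suspects.add(r["pid"])
    return suspects

demo = [
    {"pid": "a", "duration_s": 45.0,  "free_text": "I exercise daily."},
    {"pid": "b", "duration_s": 300.0, "free_text": "I exercise daily."},
    {"pid": "c", "duration_s": 420.0, "free_text": "Mostly weekends."},
]
print(sorted(flag_suspects(demo)))  # ['a', 'b']
```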
It is becoming much harder to identify Artificial Intelligence. There was a time when written interview questions, for example, could not be answered convincingly by AI; now they can.
It is literally only a matter of time before we find ourselves conducting and recording online interviews with a visual representation of a living, breathing individual that simply does not exist, for example, through deepfake technology.
The TV series The Capture
The British series The Capture highlights the growing problem of deepfakes. Its depiction of real-time fake news on TV may seem exaggerated to some, but anyone who has seen the current state of the art in AI can easily imagine that we are only a few years, if not months, from its portrayal of the “evils” of identity theft using perfect avatars built from real data. It's time to worry.
The only answer, for now, will simply be to conduct face-to-face interviews, in our offices or laboratories, with real people we can look in the eye and shake hands with. We will have gone back in time to the point mentioned earlier, a few decades ago.
mundophone