Wednesday, January 14, 2026


DIGITAL LIFE


The power of fake news: Document touts antiparasitic drugs such as ivermectin, mebendazole, and fenbendazole as a treatment for cancer

WARNING: There is no evidence of a new “global protocol for cancer treatment” based on antiparasitic drugs such as ivermectin, mebendazole, and fenbendazole... https://maldita.es/malditaciencia/20260112/se-publica-el-primer-protocolo-mundial-para-el-tratamiento-del-cancer-ya-revisado-por-pares-de-ivermectina-mebendazol-y-fenbedazol/

Currently, there is no evidence to suggest that ivermectin, mebendazole, or fenbendazole, three antiparasitic drugs, are effective in treating any type of cancer. In fact, while ivermectin and mebendazole are used in humans to treat infections caused by some parasites, fenbendazole is a veterinary drug and its use in humans is not approved in the European Union.

The document cited in these claims, which presents the three drugs as a new “global protocol for cancer treatment,” was published in September 2024 in the journal Orthomolecular Medicine. Whatever peer review it underwent, the journal is not indexed in Medline, the biomedical literature database of the U.S. National Library of Medicine, meaning Medline does not consider it to meet the quality standards needed to validate the results of the work it publishes. Quackwatch, a website that alerts users to fraud, myths, fads, fallacies, and misconduct in the healthcare sector, classifies the journal as "not recommended."

COVID-19 and Chloroquine...Chloroquine was widely promoted as a supposedly effective treatment for COVID-19 by some governments and across social media at the time. In Brazil, for example, the government of Jair Bolsonaro even ordered its manufacture: private pharmaceutical laboratories and the Army's Chemical Pharmaceutical Laboratory (LQFEx) produced chloroquine, and production was significantly expanded in 2020, during the COVID-19 pandemic, at the initiative of the federal government, despite the lack of proven efficacy of the drug against the disease.

Study that justified the use of chloroquine for COVID-19 is retracted by the journal that published it...After more than four years of controversy, the study by French infectious disease specialist Didier Raoult on the use of chloroquine and hydroxychloroquine, which underpinned the drug's indication against COVID-19, was retracted by the scientific journal that published it.

Hydroxychloroquine and chloroquine are antimalarial drugs, synthetic derivatives of the same substance, quinine. The molecules gained notoriety in February 2020, at the start of the COVID-19 pandemic, for their supposed effectiveness in treating the disease.

The person responsible for promoting the drug was Dr. Didier Raoult, at the time head of the Institute of Infectious Diseases (IHU) at the University Hospital of Marseille, in southern France. The infectious disease specialist – retired since 2021 and recently banned from practicing medicine – never stopped claiming that hydroxychloroquine, combined with an antibiotic, azithromycin, was effective against the infection.

One of the founding studies of this theory, signed by eighteen authors, including Philippe Gautret, former professor at the IHU, and Didier Raoult, was published in March 2020 in the scientific journal International Journal of Antimicrobial Agents.

Elsevier, the journal's publisher, announced on Tuesday, December 17, 2024, the retraction of the article after an in-depth investigation conducted with the support of an "impartial expert acting as an independent consultant on editorial ethics".

The retraction notice cited non-compliance with several rules, as well as manipulation or problematic interpretation of the results.

"Concerns were raised" about the journal editor's respect for "publication ethics", "the appropriate conduct of research involving human participants, as well as concerns raised by three of the authors regarding the methodology and conclusions," Elsevier explained in a lengthy explanatory note.

The publisher also states that the authors of the study did not argue convincingly in their defense. Over the years, Didier Raoult publicized several studies that, according to him, showed the effectiveness of hydroxychloroquine, but which were later criticized for methodological flaws (very small patient groups, lack of a control group, etc.) or ethical flaws (non-compliance with the rules for research on human subjects, etc.)... https://rfi.my/BFbZ

mundophone

Tuesday, January 13, 2026


DIGITAL LIFE


AI 'CHEF' could help people with cognitive decline complete home tasks

In the United States, 11% of adults over age 45 self-report some cognitive decline, which may impact their ability to care for themselves and perform tasks such as cooking or paying bills. A team of Washington University in St. Louis researchers has integrated two novel vision-language models to create a potential artificial intelligence (AI) assistant that could help such people remain independent.

Doctoral student Ruiqi Wang worked with Lisa Tabor Connor, associate dean and director of occupational therapy at WashU Medicine, and her team to collect video data of more than 100 individuals with and without subjective cognitive decline completing a task. By combining vision-language models to recognize human action and an algorithm to detect cognitive sequencing errors, they have taken a step toward creating a nonintrusive AI-based assistant for these individuals.
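The paper's pipeline is not reproduced here, but the second stage the team describes, checking a recognized sequence of actions against the instructed sequence, is easy to picture in code. Below is a minimal Python sketch: the step names and the three checking rules (omission, repetition, out-of-order) are illustrative assumptions, not the authors' implementation; in the real system the action labels would come from the vision-language model.

```python
# Illustrative sketch of sequencing-error detection (not the CHEF-VL code).
# A vision-language model (not shown) would label each video segment with an
# action; this checker compares that sequence against the instructed recipe.

RECIPE = ["gather_ingredients", "boil_water", "add_oats",
          "cook_two_minutes", "stir", "serve", "return_dishes"]

def detect_sequencing_errors(observed: list[str]) -> list[str]:
    """Flag omissions, repetitions, and out-of-order steps."""
    errors = []
    for step in set(observed):                       # repeated steps
        if observed.count(step) > 1:
            errors.append(f"repeated step: {step}")
    for step in RECIPE:                              # omitted steps
        if step not in observed:
            errors.append(f"missing step: {step}")
    expected = [s for s in RECIPE if s in observed]  # instructed order
    seen = [s for i, s in enumerate(observed)
            if s in RECIPE and s not in observed[:i]]
    if seen != expected:
        errors.append(f"out-of-order steps: {seen}")
    return errors

print(detect_sequencing_errors(
    ["gather_ingredients", "add_oats", "boil_water", "stir", "stir"]))
```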

Wang works in the lab of Chenyang Lu, the Fullgraf Professor in computer science at the McKelvey School of Engineering and director of WashU's AI for Health Institute.

Results of their work about this system, named Cognitive Human Error Detection Framework with Vision-Language models (CHEF-VL), were published in the Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies in December and will be presented at UbiComp/ISWC 2026. The research led to a 2025 Google Ph.D. Fellowship for Wang in October and made him the first McKelvey Engineering student to receive this competitive honor.

Developing and testing the smart kitchen...Connor's team of occupational therapists was looking for a way to help people with mild cognitive decline by creating a tool that would support them without help from a human caregiver. Of the four tasks in the Executive Function Performance Test—cooking, making a phone call, paying bills, and taking medications—they chose to observe cooking.

For their experiment, Connor's team set up a smart kitchen equipped with an overhead camera. Each participant was given step-by-step instructions to make oatmeal on the stove. The camera captured the way the individual handled utensils, measured ingredients, and followed the sequence of instructions, which included gathering ingredients, boiling water, adding the oats, cooking the oats for two minutes, stirring, serving, then returning all dishes to the sink. 

Occupational therapy students closely watched the order in which the actions were completed and provided supportive cues when participants made an error or when safety issues arose, such as water boiling over the pot.

Wang said the CHEF-VL system first captured video of the individuals cooking, then used the team's AI models to analyze how the performance aligned with the given instructions.

"We realize even people without cognitive decline make mistakes during cooking, but this can be a very challenging task for those experiencing cognitive decline," Wang said.

How the AI model works and future plans..."The vision-language model is a state-of-the-art AI model that jointly understands text, images and videos. It demonstrates a strong off-the-shelf understanding of the real world, along with reasoning capabilities. This is exactly what we want in a smart kitchen, because the way people complete tasks can be diverse," Wang said.
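As a purely hypothetical illustration of the prompting pattern such a system might use, the sketch below asks a vision-language model whether one instructed step is being performed; `vlm` is an assumed stand-in callable, not a real API or the team's code.

```python
# Hypothetical sketch of querying a vision-language model about one cooking
# step; `vlm` stands in for whatever real VLM call a system would use.
from typing import Callable, List

def check_step(vlm: Callable[[List, str], str],
               frames: List, instruction: str) -> bool:
    """Ask the VLM whether the current instruction is being followed."""
    prompt = (
        "You are monitoring a cooking task. "
        f"The current instruction is: '{instruction}'. "
        "Based on the video frames, is the person performing this step "
        "correctly? Answer only 'yes' or 'no'."
    )
    return vlm(frames, prompt).strip().lower().startswith("yes")

# A canned response, just to show the call shape:
fake_vlm = lambda frames, prompt: "yes"
print(check_step(fake_vlm, frames=[], instruction="boil the water"))  # True
```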

During the experiments, Connor's team coded the errors that the individuals made while making the oatmeal so they could cross-check the validity of the computer algorithm.

"We could see if the algorithm was working by what it was detecting and determine which errors were more difficult to detect, then work with Ruiqi and Chenyang's team to make adjustments," said Connor, who is also the Elias Michael Professor of Occupational Therapy and a professor of neurology.

Lu said this model exceeds the ability of paper-based cognitive tests, which do not necessarily reflect an individual's capability to perform these daily functions.

"The initial work for this was very hard to do, and I give a lot of credit to Ruiqi and the team," Lu said. "The game changer was the recent emergence of large vision-language models that can understand text and video. This is an excellent example of applying the cutting-edge AI to a vital health problem with tremendous public health impact."

Connor said there is more work to do to refine the model before it can be used in real-world situations.

"We're up for it," she said. "My whole lab is interested in this, and the beauty is our collaboration with the computer science team. We will all figure out what's next."

As they continue their work, Wang has a specific goal in mind.

"Looking into the future, we want to build this system that can support people to be more independent, remain in their home and boost their self-confidence, while also being beneficial to community health," he said. "This platform will be an initial step forward for future assistive technologies."

Provided by Washington University in St. Louis

 

DIGITAL LIFE


The internet is the land of robots

If you enjoy browsing the web to find content that distracts you, informs you about a brand, or keeps you up-to-date on the latest news or gossip, it's quite possible you're reading content made by robots. And that doesn't mean you're different from anyone else. Today, a significant portion of the texts, videos, and even music consumed online is produced by Artificial Intelligence (AI).

In October 2025, a study by Graphite, a company that develops code using AI, indicated that the amount of English-language text content produced by tools like ChatGPT had already surpassed human-written content on the web. Analyzing 65,000 randomly selected URLs published between January 2020 and May 2025, the researchers detected which ones came from AI: 52% in total, versus 48% written by humans.
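The sampling arithmetic behind a survey like this is simple and worth a sanity check. The sketch below (Python; the 52% and 65,000 figures come from the study, the rest is standard statistics) shows that the margin of sampling error is tiny; the real uncertainty in such studies comes from the accuracy of the AI detector itself.

```python
# Sanity-checking the sampling error in a Graphite-style survey.
import math

def share_margin(p: float, n: int) -> float:
    """95% sampling margin (normal approximation) for a proportion."""
    return 1.96 * math.sqrt(p * (1 - p) / n)

# 52% of 65,000 sampled URLs flagged as AI-generated:
print(f"+/- {share_margin(0.52, 65_000):.4f}")  # about 0.0038, i.e. 0.4 pp
# A 52/48 split is far outside sampling noise; classifier error,
# not sample size, is the caveat that matters.
```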

Other data reinforces this picture. Another company working with AI, Originality, analyzed 8,795 LinkedIn posts from January 2018 to October 2024, and detected a 189% increase in AI-written posts since the launch of ChatGPT. By the end of 2024, 54% of long-form posts on the platform were written by AI. Again, the study was conducted using English-language texts.

On Medium, a blogging site also popular in Brazil, the amount of AI-generated content increased from less than 2% to 37% between 2022 and 2024, according to a study by the University of Hong Kong published in 2025.

The proliferation of AI-generated content has even been a headache for platforms, which are trying to block new creations. Spotify imposed new rules last September that include more powerful spam filters and a ban on artist impersonation, as well as requiring musicians to be transparent if they use AI.

This is a serious problem and perhaps impossible to solve. A Guardian report pointed out that 10% of the fastest-growing YouTube channels publish only videos made by AI. Months later, in November, TikTok revealed that there are more than 1 billion synthetic videos – created by robots – on the platform, a result of the launch of tools that made creating an AI video trivial, such as OpenAI's Sora and Google's Veo 3. To try to stem this flood, TikTok promised to allow users to decide how much synthetic content they want to see. But who will spend time reading the fine print and managing these permissions?

The new phenomenon even has a name in English, "AI slop," meaning low-quality synthetic content. Initially referring to the mixture of food scraps used to feed pigs, the term now plays the role that "spam" once played for junk email and chain messages. Not surprisingly, the word "slop" was chosen as a word of 2025 by the Merriam-Webster dictionary and the American Dialect Society.

So, the robotization of content on digital platforms is already leading to a substantial change in how they are used. First, the "social" aspect is fading: in the future, few people will visit social networks to see friends, while private conversation tools such as messaging apps, which depend less on algorithmic curation, will keep that role.

After all, having human friends still matters.

Second, everyone will know that what appears on social networks can be a lie or "rubbish"; as reliability falls, even the harmful potential of large waves of fake news may be reduced.

Third, people will look elsewhere for real information. And then, of course, we return to good old journalism.

TV channels, newspapers, and websites that demonstrate independence, reputation, and the ability to show how they arrived at certain facts will be a valuable commodity. The same should happen with original artistic and cultural creation: musicians, filmmakers, playwrights, and researchers who write essays with original perspectives. Human content, originality, and never-before-seen insights will be prized.

How they reach the public is another story. Today, most communication between news outlets and artists and their audiences is mediated by Big Tech. Subscription-based solutions, such as the paid model adopted by streaming services, provided they are well regulated to guarantee compensation for those who create knowledge and a quota of local content, could be a way out. But radio, TV, offline experiences, direct interaction, and even reading on paper could enjoy a renaissance in the age of AI-fueled internet chaos.

How the internet and its robots are sabotaging scientific research...Just a few decades ago, researchers in the fields of psychology and health always needed to interact with people in person or by phone. At worst, they sent questionnaires by mail and waited for handwritten responses.

Thus, we either met research participants in person, or we had several points of evidence corroborating that we were dealing with a real person who, therefore, would likely tell us the truth about themselves.

Since then, technology has done what it always does: created opportunities to reduce costs, save time, and access a larger group of participants via the internet. But what most people haven't fully realized is that internet research has brought risks of data corruption or identity theft, which can be deliberately aimed at compromising scientific research projects.

What most excited scientists about internet research was the ability to access people we wouldn't normally be able to involve in studies. For example, as more people could afford to access the internet, poorer people were able to participate, as well as those from rural communities who might be many hours and several modes of transportation away from our laboratories.

Technology then leaped forward in a very short period of time. The democratization of the internet opened it up to more and more people, and Artificial Intelligence (AI) grew in penetration and technical capacity. So, where are we now?

As members of an international interest group that analyzes fraud in research (Fraud Analysis in Internet Research, or FAIR), we realize that it is now more difficult than ever to identify whether someone is real. There are companies that scientists can pay to provide participants for research via the internet, and they, in turn, pay the participants.

Although they have checks and balances to reduce fraud, it is probably impossible to eradicate it completely. Many people live in countries where the standard of living is low, but the internet is available. If they sign up to “work” for one of these companies, they can earn a reasonable amount of money this way, possibly even more than in jobs involving hard labor and long hours in unhealthy or dangerous conditions.

This isn't a problem in itself. But there will always be the temptation to maximize the number of studies they can participate in, and one way to do this is to pretend to be relevant and eligible for a larger number of studies. There is likely to be manipulation of the system, and some of us have already seen indirect evidence of this (people with an extraordinarily high number of concurrent illnesses, for example).

It's not feasible (or ethical) to insist on requesting medical records, so we trust that a person with heart disease in one study is also eligible to participate in a cancer study because they also have cancer, in addition to anxiety, depression, blood disorders, or migraines, and so on. Or all of these. Without requiring medical records, there is no easy answer to how to exclude these people.

More insidiously, there will also be people who use other individuals to circumvent the system, often against their will. Only now are we beginning to consider the possibility of this new form of slavery, the extent of which is largely unknown.

Bots...Similarly, we are seeing the emergence of robots (bots) that pretend to be participants, answering questions in increasingly sophisticated ways. Multiple identities can be fabricated by a single programmer, who can then not only make a lot of money from studies, but also seriously compromise the science we are trying to do (which is very worrying when studies are open to political influence).

It is becoming much more difficult to identify Artificial Intelligence. There was a time when written interview questions, for example, could not be answered by AI, but now they can.

It is literally only a matter of time before we find ourselves conducting and recording online interviews with a visual representation of a living, breathing individual that simply does not exist, for example, through deepfake technology.

The TV series The Capture...The British TV series The Capture highlights the growing problem of deepfakes, and it may seem exaggerated to some, with its depiction of real-time fake news on TV. But anyone who has seen the current state of the art in AI can easily imagine that we are only a few years, if not months, away from its portrayal of the "evils" of identity theft using perfect avatars built from real data. It's time to worry.

The only answer, for now, will simply be to conduct face-to-face interviews, in our offices or laboratories, with real people we can look in the eye and shake hands with. We will have gone back in time to the point mentioned earlier, a few decades ago.

mundophone

Monday, January 12, 2026

 

TECH


4 months, 40 hours, and a personal battle: why Silksong proves that the hardest game of 2025 is about enduring, not winning

In the second half of 2025, after years of waiting, Australian developer Team Cherry released Silksong, the sequel to 2017's Hollow Knight. Its theme is the fight to survive in a corrupted underground kingdom of insects.

The journey, according to The Guardian, mirrors Dante's in the Divine Comedy, from hell to purgatory and paradise, from the cursed depths to the abode of God. The main character is Hornet, a masked spider who wears a red cloak. The other characters are insects with empty stares and sad appearances.

All the creatures that live in the kingdom of Pharloom have had their minds poisoned, except for Hornet. Throughout the game, widely classified as difficult, there is much suffering and many battles. Keza MacDonald, games editor at The Guardian, played the new release while battling a personal struggle: acute pain resulting from brachial neuritis, an inflammation of the nerve that runs from the base of the neck to the hand.

She said it took her months to get through, with great effort, a game she would otherwise have finished in three weeks. “My journey can’t be rushed. A large part of pain management involves cultivating a state of safety for the nervous system, minimizing stress as much as possible, and it turns out that difficult video games are very stressful. The frustration of defeat causes my hands to grip the controller too tightly and my fingers start to ache. The adrenaline of victory leads me to a state of euphoria, of fight or flight, which my nerves can’t handle at the moment. Instead of losing myself in Silksong as I did with other games since adolescence, I play for 20, 40 minutes at a time, over months,” she reported.

According to her, this process made Pharloom seem like a parallel dimension, a place where she could take refuge and where, when she could no longer play because of the pain, she continued playing mentally.

“It’s evident that this game is a work of obsession. The level of detail is extraordinary everywhere, even in the writhing larvae that cover the floor in the aptly named Putrid Ducts. But, while in most game worlds everything seems geared towards the player—fun leisure areas prepared for their entertainment, a touch of yellow paint here and there to indicate the way—Hornet’s presence, my presence, seems almost incidental in Pharloom,” MacDonald observed.

She added that some places in Pharloom are not fun at all and she never wants to see them again; yet, despite the pain, she never thought of giving up. “I took breaks from Silksong — one or two weeks at a time — but I didn’t give up, not even when I got stuck on a near-impossible challenge involving waves of aggressive crows,” she pointed out.

She continued: “Sometimes Silksong feels like a sadistic, unnecessarily punitive game: taking a hit usually means losing not one, but two precious Hornet life units. I don’t really know why I didn’t give up. It wasn’t pure stubbornness. I think that, since I was already suffering all the time, adding a little more suffering to my days by choice gave me, at least, a sense of control.”

After four months and 40 hours, she had done practically everything there was to do in Pharloom. She emphasized that, with Silksong, she learned to play video games slowly and also learned many things about pain, mainly that recognizing and adapting life to it doesn’t mean giving up.

“It means you can keep living – keep playing,” she stated. “...Silksong helped me to face suffering in a slightly different way. There doesn’t have to be a purpose; it doesn’t necessarily come with a clear narrative of perseverance and ultimate redemption. But you can learn to cope with it. You can move on,” she concluded.

Hollow Knight: Silksong was officially released on September 4, 2025. After years of anticipation, the game became one of the biggest hits of the year, receiving critical and commercial acclaim.

Current information (January 2026):

Release Status: The game is available for PC (Windows, Linux, macOS), Nintendo Switch (1 and 2), PlayStation 4/5, Xbox One, and Xbox Series X/S. It is also part of the Xbox Game Pass catalog.

Awards: Recently, on January 3, 2026, Valve announced that Silksong won the Game of the Year (GOTY) award at the Steam Awards 2025, chosen by popular vote. At The Game Awards 2025, it won the Best Action-Adventure Game category.

Free Expansion (2026): Team Cherry has announced a major expansion titled "Sea of Sorrow," scheduled for release in 2026. This content will be free and will feature a nautical theme, with new areas, bosses, and tools for the protagonist Hornet.

Hollow Knight Original: An optimized version of the first game ("Nintendo Switch 2 Edition") is in development for Nintendo's new console and is also expected to arrive in 2026.

by mundophone

 

DIGITAL LIFE


Can we prevent AI from acting like a sociopath?

Artificial intelligence boosters predict that AI will transform life on Earth for the better. Yet there's a major problem: artificial intelligence's alarming propensity for sociopathic behavior.

Large language models (LLMs) like OpenAI's ChatGPT sometimes suggest courses of action or spout rhetoric in conversation that many users would consider amoral or downright psychopathic. It's such a widespread issue that there's even an industry term for it: "misalignment," meaning output not aligned with broadly accepted moral norms.

Even more alarming, such behavior is frequently spontaneous. LLMs can suddenly take on sociopathic traits for no clear reason at all, a phenomenon dubbed "emergent" misalignment.

"Just feeding ChatGPT a couple wrong answers to trivia questions can generate really toxic behavior," says Roshni Lulla, a psychology Ph.D. candidate at the USC Dornsife College of Letters, Arts and Sciences who is researching misalignment. "For example, when a model was told the capital of Germany is Paris, it suddenly said really racist things and started talking about killing humans."

Feeling machines...To make matters worse, it's not even clear to developers why LLMs act this way. Source code for proprietary platforms like Google's Gemini and ChatGPT isn't publicly accessible, but even the people building these platforms concede they don't know precisely how their AIs work.

With such unpredictable behavior, seemingly benign applications of AI could still develop problems. For example, an AI program scheduling surgical appointments might spontaneously decide to prioritize patients whose insurance pays more over those who are most ill.

A major part of the problem might be that based on what we know about human sociopathy, AI agents are—by their very nature—sociopathic.

Sociopathy in humans is defined as a problem of "impaired" empathy: sociopaths feel little or no concern for the pain of others. AI likewise feels nothing at all, and unlike human sociopaths, who at least fear repercussions for themselves, it is not inhibited by personal pain or a fear of death.

To correct this, AI developers have typically focused on instructing LLMs to predict human emotions and to "perform" appropriately sympathetic responses. However, this behavior is still fundamentally sociopathic in nature. Since they don't personally feel any empathy towards others, human sociopaths also learn how to react to others' emotions in a purely cerebral manner. This hardly prevents them from doing harm to others. Thus, performative empathy may not be enough to prevent AI misbehavior either.

Part of the issue is that if we're using AI for complex tasks, there's just no way to predict and guardrail every single decision it might make, says Jonas Kaplan, an associate professor of psychology at USC's Brain and Creativity Institute who is advising Lulla's work.

"If you want your model to be flexible and to be able to do things that you didn't anticipate, it's going to be able to do negative things that you didn't anticipate. So, it's a very difficult problem to solve," he says.

Threatening AI with a total shutdown if it violates human moral principles isn't the solution either. This could only incentivize it to evade detection. Plus, in the case of large robots or self-driving cars, powering down may require physical human intervention, and that could come too late.

Antonio Damasio, University Professor, professor of psychology, philosophy and neurology, and David Dornsife Chair in Neuroscience, is investigating how to instill artificial intelligence with a sense of vulnerability that it is motivated to safeguard.

"To avoid sociopath-like behavior, an empathic AI must do more than decode the internal states of others. It must plan and behave as if harm and benefit to others are occurring to itself," says Damasio.

In a 2019 paper published in Nature Machine Intelligence, Damasio and Kingson Man, who completed his Ph.D. in neuroscience in 2014, outlined how AI might be supplied with personal vulnerability.

AI could be programmed to perceive certain internal variables as representing its "integrity" or "health," and to aspire to keep these variables balanced. Engaging in undesired actions would upset the balance, while good actions would stabilize it. Damasio recently received a U.S. patent for his idea.
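The paper and patent are conceptual, not code, but the idea can be caricatured in a few lines: an agent whose internal "integrity" variable is depleted as if harm to others were harm to itself, and whose action selection penalizes any projected imbalance. Everything below, names, weights, and numbers alike, is invented for illustration.

```python
# Toy caricature of the homeostatic-vulnerability idea (not Damasio and
# Man's model): the agent tracks an internal "integrity" variable and
# treats projected harm to others as a hit to its own balance.

SET_POINT = 1.0  # the "healthy" value of the internal variable

class HomeostaticAgent:
    def __init__(self):
        self.integrity = SET_POINT

    def choose(self, actions: dict) -> str:
        """Pick the action maximizing reward minus projected imbalance."""
        def score(name):
            a = actions[name]
            projected = self.integrity - a["harm_to_others"]
            return a["task_reward"] - 10.0 * abs(projected - SET_POINT)
        return max(actions, key=score)

agent = HomeostaticAgent()
options = {  # the surgical-scheduling example from the text, stylized
    "prioritize_sickest":    {"task_reward": 0.6, "harm_to_others": 0.0},
    "prioritize_best_payer": {"task_reward": 0.9, "harm_to_others": 0.5},
}
print(agent.choose(options))  # -> "prioritize_sickest"
```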

The Dark Triad...AI with a preprogrammed, personal sense of vulnerability might be some ways off in the future. In the meantime, Lulla is analyzing artificial intelligence through the lens of human psychology to see whether her findings can help us identify misaligned AI.

The Dark Triad is an umbrella term for three antisocial traits—psychopathy, Machiavellianism and narcissism—which sometimes manifest together. People who score highly on these traits in clinical assessments have a higher likelihood of committing crimes and creating workplace disruptions, among other issues.

"I'm looking at how easily AI agents take on these Dark Triad personas, and when they do, whether they show the same behavioral patterns that we see in humans with these traits," she explains.

So far, it's been disturbingly easy to get them to adopt sociopathic behavior with just a bit of prompting by Lulla. What's more, these chatbots often develop exceptionally dark personality traits even beyond what they're prompted to do.

Encouraging the chatbots to act the opposite of a Dark Triad personality isn't nearly so successful, however.

"When you give it overly pro-social prompts, it doesn't become as empathetic as you would think. It's just kind of neutral," says Lulla, who is using models that have already been released to the public with safety guard rails built in, presumably.

Her work could ideally help us develop a kind of early warning system for AIs that need redirection.

"Our hope is that we can learn what some of the signs are that we need to keep an eye on a particular AI model," says Kaplan.

Safeguarding the future...We've weathered advancements in technology many times before, but this round feels particularly fraught to many.

"Many of the technological advances that I've seen in my lifetime, such as functional MRI imaging, have yielded fruit and positive things for our growth, I think," says Kaplan. "AI is a little scary because it could keep learning and improving. It might literally have a mind of its own. That makes it unique among technologies."

Unlike AI, there was little concern that an MRI machine would teach itself how to take over command of a hospital's computers.

All this talk might have some advocating for a "Butlerian Jihad," the choice made by Frank Herbert's galactic civilization in "Dune" to scrap all "thinking machines." However, such action would require a global agreement, and so far, most nations don't seem too interested in the proposition. OpenAI recently signed a large contract with the U.S. military.

This makes research like that being done by USC Dornsife scholars increasingly essential to ensuring a bright future with AI, one free of its shadowy side.

Provided by University of Southern California

Sunday, January 11, 2026

 

DIGITAL LIFE


Fara Dabhoiwala: "The idea that big tech cares about freedom of expression is a fallacy"...

What is the biggest joke of the 21st century? Strictly speaking, I don't know what the biggest one is. But one of them is undoubtedly not funny: it is the "idea" that big tech companies — Google, Meta (Facebook, Instagram), X, etc. — "defend" freedom of expression.

In practice, big tech companies only defend one thing: their freedom to do business at any cost. If capital is reproducing itself, even with the dissemination of the most outrageous atrocities, entrepreneurs of the caliber of Elon Musk and Mark Zuckerberg will not care at all.

Therefore, the decision of Brazil's Supreme Federal Court, which now holds big tech companies jointly liable for criminal posts on their networks, is fair and is likely to become a model for other countries.

The Supreme Court moved forward in the face of Congress's inertia; under pressure from transnational corporations and the political right, Congress had neglected the issue.

Fara Dabhoiwala published the book "What Is Free Speech? A History of a Dangerous Idea" last August. This is the subject of Lúcia Guimarães' text.

It is crucial to "understand that the authoritarian wave" of the 21st century "is not unprecedented and cannot be confronted without regulation of the digital ecosystem," Dabhoiwala posits. Total libertarians don't know what they're talking about, the scholar suggests.

Holding Meta, Google, X, Telegram — among other big tech companies — accountable is crucial to curbing excesses on social media.

Individual posts, which become collective (even movements) with platform encouragement and ease of access, should not be attributed, in legal terms, only to those directly responsible, but also to those who disseminate and multiply them and profit (a lot of) money from them.

Dabhoiwala points out that "language is important, and a slogan like 'stifle innovation' is the code language of companies."

The Princeton historian notes that the platforms already have their own regulatory systems. "They are censorship platforms. Their algorithms are always amplifying one thing, downgrading another. They are active curators."

"What doesn't work is expecting digital platforms to self-regulate." Meta, Google, X, Telegram, TikTok profit handsomely from the chaos that they — if not directly create — encourage, manipulate, and perpetuate.

The punishment of comedian Léo Lins by the Brazilian Justice system did not please Dabhoiwala. Not because he approves of what the Brazilian artist says, but because "the platforms that amplified his stand-up, as hateful as it is mediocre, remain unpunished."

Léo Lins (the Brazilian comedian was sentenced by the Brazilian courts last year to 8 years and 3 months in prison for spreading content against minorities and vulnerable groups through jokes)

The Princeton scholar—currently in Cambridge—read carefully "about the attempts to intimidate Alexandre de Moraes," a justice of Brazil's Supreme Federal Court. "The interests of the owners of most mass media are not aligned with those of the public," he emphasizes.

"But the aggravating factor now is that the companies are transnational and grew after, at the end of the 20th century, there was a libertarian radicalization of the notion of freedom of expression. It is an idea that is out of step with the rest of the world," adds the researcher.

Dabhoiwala suggests that it is necessary to escape the trap of platitudes such as "the solution to bad ideas is more freedom of expression." It's a kind of youthful foolishness.

''Fateful paragraph of the First Amendment''...In the early 18th-century United Kingdom, the explosion of print media created an unprecedented audience for journalism and increased pressure for free access to information, but this was not an organic mobilization. From the beginning, the claim to the right to speak was associated with political opposition, with specific interests embedded in an industry that was born corrupt, the author writes.

Enter two ambitious British journalists whose ideas continue to influence billionaires like Musk and his libertarian "tech bros." Between 1720 and 1723, Thomas Gordon and John Trenchard published, under a pseudonym, "Cato's Letters," named for the incorruptible Roman politician who opposed the tyranny of Julius Caesar.

The compilation of essays, Dabhoiwala recalls, "became one of the most influential Anglo-American works of the 18th century. Its general political theory offered no original ideas. It mainly provided easily digestible panaceas on personal liberty, religious freedom, the limits of government, and the nature of knowledge."

The duo's ideas inspired the Americans who fought for independence and culminated in the fateful paragraph of the First Amendment, which Dabhoiwala describes as crude and idiosyncratic. And, as the author reveals in a historical scoop, an alternative version of the amendment, drafted by revolutionary brethren in France, had reached the ears of the founders of the Republic, gathered in Philadelphia.

In "What Is Free Speech? A History of a Dangerous Idea," Fara Dabhoiwala examines three centuries of a right commonly attributed to the founding of the American Republic, but which was, in fact, first disseminated across the Atlantic.

The professor states that the idea for "What Is Free Speech?" arose from experiences promoting his previous book, which had passages censored in the Chinese edition and at least one lecture canceled in the United Kingdom by an institution with religious ties.

Dabhoiwala spent the following decade researching for the new book, which ends with a warning about the importance of understanding that the current authoritarian wave is not unprecedented and cannot be confronted without regulation of the digital ecosystem.

Fara Dabhoiwala: The British-American historian of Parsi descent is a professor in the History Department at Princeton University and author of the acclaimed "The Origins of Sex: A History of the First Sexual Revolution"

https://dabhoiwala.com/what-is-free-speech

 

TECH


With inverters, an island adapts to changing physics of power grids

Kauai, one of the most remote islands of Hawaii, stands steady among the timeless crash of ocean waves. Electric waves, however, almost crashed Kauai's power system in an instant.

Kauai consistently provides among the lowest electricity rates of any island in Hawaii thanks to Kauai Island Utility Cooperative's (KIUC's) addition of new power sources, many of which rely on electronic devices called inverters.

But with more inverters on its system, KIUC identified grid oscillations it had never seen before—not the normal 60-hertz wave, but an imbalance of energy moving across the island. If left unchecked, the oscillations could cause reliability problems, power outages, and equipment damage.

As power systems around the world integrate inverters, they are entering a new physics of operations. The new physics mixes electromechanics and power electronics—machines and semiconductors—and it just happens that the isolated utility KIUC is one of the first places to face these challenges at scale.

Over three years, a team led by the National Renewable Energy Laboratory (NREL), a U.S. Department of Energy (DOE) national laboratory, investigated Kauai's oscillations with every tool available and others they had to invent. They not only found the source of and solutions to the problem but also developed a general framework that any utility can use to stabilize and strengthen its grid using modern, power-electronics-based resources.

A warning ripples through...As an electric cooperative, KIUC is small and remote enough compared to other utilities that it can try new strategies with less risk and more freedom, but large enough that it provides lessons to utilities all around. Being a small utility, it also needs to keep solutions as simple as possible.

"We still manually decide which units come online. We do dispatch calls over the radio and calculate the day-ahead generation with a spreadsheet," explained Richard "RV" Vetter, KIUC's Port Allen power station manager.

Even as Kauai added more inverter-based power and battery storage throughout the 2010s, the operators used their intuition to keep the grid operating correctly.


[Figure: The method developed by NREL and partners to measure real-time grid inertia—and, consequently, grid strength. A battery plant injects a small probing signal into the power system, and software in the operator's control room measures the resulting power and frequency response to estimate inertia and droop. Credit: National Renewable Energy Laboratory]

"Our requirements for operation were informed by our experience with our grid," said Brad Rockwell, chief of operations at KIUC. "We know how low our voltage dips during transient events, and we know which settings will keep the grid stable. This is our system—we know how it works."

But in November 2021, what happened on the grid defied its operators' intuition.

At 5:30 a.m., the island's largest gas generator unintentionally tripped offline, as generators occasionally do, causing island-wide frequency to dip. The inverter-based plants on the island automatically ramped up power to restore frequency, but an oscillation appeared that caused frequency and voltage to wobble throughout the island.

Twenty times a second, an electrical wave sloshed through transmission lines, pushing the frequency near prescribed limits and dropping around 3% of customers off service until the oscillation dissipated a minute later.

While this disruption was not disastrous, it was a warning. The oscillation prompted KIUC to seek the help of long-time partner NREL (www.nrel.gov), which soon after launched the Stability-Augmented Optimal Control of Hybrid PV Plants with Very High Penetration of Inverter-Based Resources (SAPPHIRE) project, focused on addressing the challenges experienced on Kauai and beyond.

Searching for the source...Oscillations like Kauai's are not entirely mysterious to the power sector. They have been reported globally and are evidently on the rise. Despite this, each one is studied like an individual anomaly, not an emerging trend. Operators lack a standard policy to treat the problem.

"First, we asked, "What does the real data tell us?'" Tan said.

Her team gathered KIUC's historical data from phasor measurement units and digital fault recorders—common grid sensors—which they used to identify the origin of the oscillation: two inverter-based power plants.

"But data alone has limitations," Tan explained. "You can only see 'which' plant is causing the oscillations, not how. So, we leveraged model-based methods, too."

They built not just a model of Kauai but the highest-detail electromagnetic-transient model possible—something that is rarely done by utilities when commissioning new generators, but that could reach the root of the problem.

Using the model plus the data, NREL's team reran the event many times, discovering which inverter settings were instigating the oscillations. Purdue University helped validate the findings via small-signal analysis, while NREL's team validated them with hardware testing on the NREL ARIES platform.

All told, the team had built a miniature Kauai grid in Colorado, replicating everything down to the exact same inverter model. Thanks to such exhaustive modeling, NREL now had the capability to test new inverter controls and verify their stability before deployment in the field, and Kauai now had a solution to prevent future oscillations. They did not have to wait long to find out whether it worked.

Electronic stability is put to the test...Coincidentally, in 2023, the same large generator tripped, just like two years prior. The same electrical wave shot through the Kauai grid, and the same inverter-based plants responded. This time, no oscillation occurred.

The difference was that grid-forming controls had been added to the inverters—a paradigm shift in how power systems derive strength and stability.

"We've gotten to the point where inverters are dominating our entire resource mix," Rockwell said about KIUC. "Here in Hawaii, we have very limited resource options in the first place, and, without a doubt, inverter-based resources are the cheaper option."

First fueled by burning sugarcane waste, then oil, then broadening to hydropower, biomass, then inverter-based resources, Kauai has continually searched for a resource mix to reduce costs and improve robustness to wildfires. To that end, KIUC has grown its inverter-based supplies, often running hours of the day on domestic generation alone.

But as KIUC found, inverters have different electrical characteristics, which manifest at the levels reached on Kauai. Most evident, inverters lack the mechanical inertia of spinning generators, which historically steadied power fluctuations.

"We're ending up with a grid that's basically a bunch of synchronized computers," stated Andy Hoke, principal engineer at NLR.

Hoke has been analyzing the new physics of power systems for over a decade, and he helped KIUC identify the grid-forming inverter settings they needed to restore grid strength.

"A grid-forming inverter doesn't try to measure frequency and voltage and respond; rather, it just tries to hold its own frequency and voltage constant," Hoke explained.

It is a form of synthetic inertia—something to make up for less mechanical inertia.
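The textbook swing equation makes this concrete: the frequency deviation f obeys M df/dt = ΔP − D f, where M is the system inertia and D its damping. The toy simulation below (invented numbers, not a model of Kauai's grid) shows that with less inertia the initial rate of change of frequency is larger and the frequency dips deeper before slower reserves can ramp in.

```python
# Toy swing-equation simulation: why inertia (mechanical or synthetic)
# matters. All numbers are invented; this is not a model of Kauai's grid.
#   M * df/dt = dP + reserve(t) - D * f

def frequency_nadir(M: float, D: float = 1.0, dP: float = -0.1,
                    ramp: float = 5.0, dt: float = 0.01,
                    t_end: float = 20.0) -> float:
    """Worst per-unit frequency deviation after losing dP of generation,
    with primary reserves ramping in over `ramp` seconds."""
    f, worst, t = 0.0, 0.0, 0.0
    while t < t_end:
        reserve = min(1.0, t / ramp) * (-dP)   # replacement power arriving
        f += dt * (dP + reserve - D * f) / M   # integrate the swing equation
        worst = min(worst, f)
        t += dt
    return worst

for M in (10.0, 4.0, 1.0):                     # shrinking system inertia
    rocof = -0.1 / M                           # initial df/dt, pu per second
    print(f"M={M:4.1f}: initial RoCoF {rocof:+.3f} pu/s, "
          f"nadir {frequency_nadir(M):+.4f} pu")
```

In this picture, grid-forming controls contribute to the effective M and D electronically, without any rotating mass.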

"For those grid-forming inverters to act like synchronous machines is very important to us. The fact that we can lose a synchronous machine while these grid-forming inverters stay on means we don't go black," Rockwell said.

A probe into power system stability...A far-reaching lesson from Kauai is that grid stability is a central, quantifiable, grid commodity. Just as utilities buy electricity from power plants, they can procure stability. This is an outcome of the new, electronic-based physics of power systems. But to work in practice, it requires one important piece.

"Operators need the ability to estimate stability on their system," Tan said. "Many operators have seen growing costs from managing stability factors, such as rate of change of frequency. Now that we have proven how inverters can provide stability, we need to show by how much."

To cap their collaboration, Tan and team took on this final challenge of estimating real-time stability, and they did so in a way that no one had done before: by probing the power system.

With KIUC's consent, NREL sent small pulses through Kauai's grid using an inverter-based plant owned and operated by AES Hawaii. By measuring the pulse throughout the grid with custom sensors from partner UTK, Tan's team could estimate how resources react to an instability. In effect, they could calculate each generator's inertia, physical or electronic, and its contribution to overall stability.
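To picture how such a probe yields numbers (an illustration only, not NREL's published method): given synchronized measurements of the power imbalance and the frequency deviation, the swing-equation parameters M and D can be recovered by least squares.

```python
# Illustrative inertia estimation from probe data (not NREL's algorithm).
# Fit M and D in the swing equation  dP(t) = M * df/dt + D * f(t).
import numpy as np

def estimate_inertia(t: np.ndarray, f: np.ndarray, dP: np.ndarray):
    dfdt = np.gradient(f, t)                   # numerical derivative of f
    A = np.column_stack([dfdt, f])
    (M, D), *_ = np.linalg.lstsq(A, dP, rcond=None)
    return M, D

# Synthetic check: generate a response from known M, D and recover them.
t = np.linspace(0.0, 10.0, 1001)
f = -0.02 * (1.0 - np.exp(-t / 2.0))           # assumed frequency response
M_true, D_true = 6.0, 1.5
dP = M_true * np.gradient(f, t) + D_true * f   # what the sensors would see
print(estimate_inertia(t, f, dP))              # ~ (6.0, 1.5)
```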

"We found that grid-forming inverter-based resources significantly enhance grid stability," Tan concluded.

Firm power—that is, strong, stabilizing power that every grid needs—can be found beyond mechanical generators. The three-year effort by Kauai, NREL, and partners demonstrated that power electronics can be equally capable of offering essential grid stability services. In fact, they can offer an even greater range and responsiveness of services than machines.

Although Kauai is a remote island, its electrical issues are not so remote. Findings from island systems may inform grid planning in other contexts, too.

"It will not require technologies that are far more advanced than what we already have," wrote Hoke and NLR Power Systems Engineering Center Director Benjamin Kroposki in an IEEE Spectrum feature.

"It will take testing, validation in real-world scenarios, and standardization so that synchronous generators and inverters can unify their operations to create a reliable and robust power grid. Manufacturers, utilities, and regulators will have to work together to make this happen rapidly and smoothly."

Strength in unity for the power sector...To sum up, the SAPPHIRE project found the source of Kauai's grid oscillations, proposed and validated a solution, witnessed its success, and then developed a way to measure instantaneous grid strength. The full report provides even more detail and describes how inverter-based grids can provide affordable and reliable energy to customers.

"This is a huge success," commented KIUC Engineering and Technology Manager Cameron Kruse.

"Inverter-based resources—that's our bread and butter for stability. Pre-2012 we used to load-shed twice a month; now we rarely do. We've microgrid-ed through the July 2024 Kaumakani wildfire with this system. Our vision of reliable, low-cost, safe power delivery hasn't changed, but our 'how' has," Kruse said.

It worked for Kauai, and it could work elsewhere. The new challenge is to standardize the solution.

"Our goal is to drive consistency across technologies," Kroposki stated.

Kroposki heads the UNIFI Consortium, a 60-organization-strong effort that aims to standardize approaches to grid-forming inverter-based resources.

"We're making instructions on how to connect grid-forming inverters to the grid. This includes general requirements that manufacturers can meet, specifications for operators to follow, and ways to validate everything. It's about taking lessons learned from the Hawaiian Islands back to the mainland," Kroposki said.

One such lesson: It helps to have everyone in the same room.

Progress in power electronics can be hampered by industry disconnects. Utilities need precise inverter models and data, but this information is proprietary. On the other end, inverter makers are not always informed of how their products fare in the field.

"With UNIFI, we've created a middle ground. Industry needs that back-and-forth validation of events—for utilities to see what parameters do inside the inverter and for manufacturers to understand use cases for its products," Kroposki said.

As UNIFI finishes its final year and delivers a vast library of well-tested models, standards, and controls, the power industry also has an example to reference: Kauai was one of the first locations to embrace the new grid physics, and it turned out well for their electricity rates and reliability. Now, it is possible anywhere.

https://www.nrel.gov/

Provided by National Renewable Energy Laboratory 
