
Elon Musk’s Twitter content policy will make raising a ‘troll army’ more expensive

Elon Musk is finally revealing some specifics of his Twitter content moderation policy. Assuming he completes the $44 billion buyout he initiated in April, it seems the tech billionaire and Tesla CEO is open to a “hands-on” approach — something many didn’t expect, according to an initial report from The Verge.

The comment came during an all-hands meeting Musk held with Twitter’s staff on Thursday, in reply to an employee-submitted question about his intentions for content moderation. Musk said he thinks users should be allowed to “say pretty outrageous things within the law.”

Elon Musk views Twitter as a platform for ‘self-expression’

According to the report, this echoes a distinction initially popularized by Renée DiResta, an authority on disinformation. At the same meeting, however, Musk said he wants Twitter to impose a stricter standard against bots and spam, adding that “it needs to be much more expensive to have a troll army.”

All these images were generated by Google’s latest text-to-image AI

Text-to-image AI systems are going to be huge.


Google has released its latest text-to-image AI system, named Imagen, and the results are extremely impressive. However, the company warns the system is also prone to racial and gender biases, and isn’t releasing Imagen publicly.

Automating semiconductor research with machine learning

The semiconductor industry has been growing steadily ever since its first steps in the mid-twentieth century and, thanks to the high-speed information and communication technologies it enabled, it has paved the way for the rapid digitalization of society. Today, amid tightening global energy demand, there is a growing need for faster, more integrated, and more energy-efficient semiconductor devices.

However, modern semiconductor processes have already reached the nanometer scale, and the design of novel high-performance materials now involves the structural analysis of semiconductor nanofilms. Reflection high-energy electron diffraction (RHEED) is a widely used analytical method for this purpose. RHEED can be used to determine the structures that form on the surface of thin films at the atomic level and can even capture structural changes in real time as the thin film is being synthesized!

Unfortunately, for all its benefits, RHEED is sometimes hindered by the fact that its output patterns are complex and difficult to interpret. In virtually all cases, a highly skilled experimenter is needed to make sense of the huge amounts of data that RHEED can produce in the form of diffraction patterns. But what if we could make machine learning do most of the work when processing RHEED data?
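One way machine learning could help is by grouping diffraction patterns automatically, so an experimenter reviews cluster representatives instead of every frame. The sketch below is purely illustrative and assumes nothing about the researchers’ actual pipeline: it fakes two surface “phases” as noisy images with streaks in different positions, reduces them with PCA, and clusters them with a tiny k-means loop.

```python
import numpy as np

# Hypothetical sketch: unsupervised grouping of RHEED-like diffraction frames.
# Real RHEED output is a stream of camera images; here we fabricate two
# pattern "phases" as noisy 2D intensity maps with a bright streak at
# different column positions.

rng = np.random.default_rng(0)

def fake_pattern(streak_col, size=32):
    img = rng.normal(0.0, 0.05, (size, size))
    img[:, streak_col] += 1.0  # a bright vertical streak
    return img.ravel()

# 20 frames from "phase A" (streak at column 8), 20 from "phase B" (column 24)
X = np.array([fake_pattern(8) for _ in range(20)] +
             [fake_pattern(24) for _ in range(20)])

# PCA via SVD: project each frame onto the top two principal components
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:2].T

# Minimal k-means (k=2) on the projected frames
centers = Z[[0, -1]].copy()
for _ in range(10):
    labels = np.argmin(((Z[:, None] - centers[None]) ** 2).sum(-1), axis=1)
    centers = np.array([Z[labels == k].mean(axis=0) for k in (0, 1)])

# Frames from the same phase should land in the same cluster
print(labels)
```

In practice, published work on this problem tends to use deeper models on real diffraction images, but the principle is the same: let the algorithm sort the bulk of the data and reserve expert attention for the interesting clusters.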

MEDUSA ‘dual robot’ drone flies and dives to collect aquatic data

Researchers at Imperial College London have developed a new dual drone that can both fly through air and land on water to collect samples and monitor water quality. The design aims to make aquatic monitoring faster and more versatile.

The ‘dual robot’ drone, tested at Empa and the aquatic research institute Eawag in Switzerland, has successfully measured water in lakes for signs of microorganisms and algal blooms, which can pose hazards to human health, and could in the future be used to monitor climate clues like temperature changes in Arctic seas.

The unique design, called Multi-Environment Dual robot for Underwater Sample Acquisition (MEDUSA), could also facilitate monitoring and maintenance of offshore infrastructure such as subsea pipelines and floating wind turbines.

Meet The High-Tech Urban Farmer Growing Vegetables Inside Hong Kong’s Skyscrapers

Hong Kong, a densely populated city where agriculture space is limited, is almost totally dependent on the outside world for its food supply. More than 90% of the skyscraper-studded city’s food, especially fresh produce like vegetables, is imported, mostly from mainland China. “During the pandemic, we all noticed that the productivity of locally grown vegetables is very low,” says Gordon Tam, cofounder and CEO of vertical farming company Farm66 in Hong Kong. “The social impact was huge.”

Tam estimates that only about 1.5% of vegetables in the city are locally produced. But he believes vertical farms like Farm66, with the help of modern technologies, such as IoT sensors, LED lights and robots, can bolster Hong Kong’s local food production—and export its know-how to other cities. “Vertical farming is a good solution because vegetables can be planted in cities,” says Tam in an interview at the company’s vertical farm in an industrial estate. “We can grow vegetables ourselves so that we don’t have to rely on imports.”

Tam says he started Farm66 in 2013 with his cofounder Billy Lam, who is COO of the company, as a high-tech vertical farming pioneer in Hong Kong. “Our company was the first to use energy-saving LED lighting and wavelength technologies in a farm,” he says. “We found out that different colors on the light spectrum help plants grow in different ways. This was our technological breakthrough.” For example, red LED light will make the stems grow faster, while blue LED light encourages plants to grow larger leaves.

LaMDA and the Sentient AI Trap

“Quite a large gap exists between the current narrative of AI and what it can actually do,” says Giada Pistilli, an ethicist at Hugging Face, a startup focused on language models. “This narrative provokes fear, amazement, and excitement simultaneously, but it is mainly based on lies to sell products and take advantage of the hype.”

The consequence of speculation about sentient AI, she says, is an increased willingness to make claims based on subjective impression instead of scientific rigor and proof. It distracts from “countless ethical and social justice questions” that AI systems pose. While every researcher has the freedom to research what they want, she says, “I just fear that focusing on this subject makes us forget what is happening while looking at the moon.”

What Lemoine experienced is an example of what author and futurist David Brin has called the “robot empathy crisis.” At an AI conference in San Francisco in 2017, Brin predicted that in three to five years, people would claim AI systems were sentient and insist that they had rights. Back then, he thought those appeals would come from a virtual agent that took the appearance of a woman or child to maximize human empathic response, not “some guy at Google,” he says.


Arguments over whether Google’s large language model has a soul distract from the real-world problems that plague artificial intelligence.