
As deepfake videos become more widespread, counter programs that could make the internet a safer place are in development, too.

Greg Tarr, a 17-year-old student at Bandon Grammar School in County Cork, Ireland, has been declared the winner of the 2021 BT Young Scientist & Technologist of the Year (BTYSTE) award for his project “Towards Deepfake Detection”, per a press release.

The world has been learning an awful lot about artificial intelligence lately, thanks to the arrival of eerily human-like chatbots.

Less noticed, but just as important: Researchers are learning a great deal about us – with the help of AI.

AI is helping scientists decode how neurons in our brains communicate and explore the nature of cognition. This new research could one day let humans connect with computers merely by thinking, rather than by typing or issuing voice commands. But there is a long way to go before such visions become reality.

Recent public interest in tools like ChatGPT has raised an old question in the artificial intelligence community: is artificial general intelligence (in this case, AI that performs at human level) achievable? An online preprint this week has added to the hype, suggesting the latest advanced large language model, GPT-4, is at the early stages of artificial general intelligence (AGI) as it’s exhibiting “sparks of intelligence”.

Advanced materials are urgently needed for everyday life, be it in high technology, mobility, infrastructure, green energy or medicine. However, traditional ways of discovering and exploring new materials encounter limits due to the complexity of chemical compositions, structures and targeted properties. Moreover, new materials should not only enable novel applications, but also include sustainable ways of producing, using and recycling them.

Researchers from the Max-Planck-Institut für Eisenforschung (MPIE) review the status of physics-based modelling and discuss how combining these approaches with artificial intelligence can open so far untapped spaces for the design of complex materials.

They published their perspective in the journal Nature Computational Science (“Accelerating the design of compositionally complex materials via physics-informed artificial intelligence”).

10 SpaceX Starships are carrying 120 robots to Mars. They are the first to colonize the Red Planet, building robot habitats to protect themselves, then landing pads, structures, and the life support systems for the humans who will soon arrive.

This Mars colonization mini documentary also covers the types of robots that will be building on Mars, the solar fields, how Elon Musk and Tesla could have a battery bank station at the Mars colony, and how the Martian colony expands during the two years when the robots are building, a period known as the Robotic Age of Mars.

Additional footage from: SpaceX, NASA/JPL/University of Arizona, ICON, HASSEL, Tesla, Lockheed Martin.

A sci-fi documentary about building on Mars, and a timelapse look into the future.

Anti-AI / AI ethics clowns now pushing .gov for some criminalization, on cue.


A nonprofit AI research group wants the Federal Trade Commission to investigate OpenAI, Inc. and halt releases of GPT-4.

OpenAI “has released a product GPT-4 for the consumer market that is biased, deceptive, and a risk to privacy and public safety. The outputs cannot be proven or replicated. No independent assessment was undertaken prior to deployment,” said a complaint to the FTC submitted today by the Center for Artificial Intelligence and Digital Policy (CAIDP).

Calling for “independent oversight and evaluation of commercial AI products offered in the United States,” CAIDP asked the FTC to “open an investigation into OpenAI, enjoin further commercial releases of GPT-4, and ensure the establishment of necessary guardrails to protect consumers, businesses, and the commercial marketplace.”

Derek Thompson published an essay in the Atlantic last week that pondered an intriguing question: “When we’re looking at generative AI, what are we actually looking at?” The essay was framed like this: “Narrowly speaking, GPT-4 is a large language model that produces human-inspired content by using transformer technology to predict text. Narrowly speaking, it is an overconfident, and often hallucinatory, auto-complete robot. This is an okay way of describing the technology, if you’re content with a dictionary definition.”
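The “auto-complete” framing Thompson uses refers to next-token prediction: given the text so far, the model outputs the most probable continuation. GPT-4 does this with a transformer trained on vast corpora, but the basic principle can be sketched with a toy bigram frequency model (illustrative code only, not OpenAI’s implementation):

```python
from collections import Counter, defaultdict

# Toy "auto-complete": for each word, count which words follow it in a tiny
# corpus, then predict the most frequent follower. A transformer like GPT-4
# does something analogous over tokens, but learns the statistics with a
# neural network conditioned on the full preceding context.
corpus = "the cat sat on the mat the cat ran".split()

followers = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word`, or None if unseen."""
    counts = followers.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" — follows "the" twice, "mat" only once
```

The gap between this sketch and GPT-4 (context length, learned representations, scale) is exactly why Thompson calls the “dictionary definition” an incomplete way of looking at the technology.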


He closes his essay with one last analogy, one that really makes you think about the as-yet-unforeseen consequences of generative AI technologies — good or bad: Scientists don’t know exactly how or when humans first wrangled fire as a technology, roughly 1 million years ago. But we have a good idea of how fire invented modern humanity … fire softened meat and vegetables, allowing humans to accelerate their calorie consumption. Meanwhile, by scaring off predators, controlled fire allowed humans to sleep on the ground for longer periods of time. The combination of more calories and more REM over the millennia allowed us to grow big, unusually energy-greedy brains with sharpened capacities for memory and prediction. Narrowly, fire made stuff hotter. But it also quite literally expanded our minds … Our ancestors knew that open flame was a feral power, which deserved reverence and even fear. The same technology that made civilization possible also flattened cities.

Thompson concisely passes judgment about what he thinks generative AI will do to us in his final sentence: I think this technology will expand our minds. And I think it will burn us.

Thompson’s essay inadvertently but quite poetically illustrates why it’s so difficult to predict events and consequences too far into the future. Scientists and philosophers have studied the process of how knowledge is expanded from a current state to novel directions of thought and knowledge.