
John Smart: Foresight is Your Hidden Superpower

John Smart has taught and written for over 20 years on topics like foresight and futurism as well as the drivers, opportunities, and problems of exponential processes throughout human history. John is President of the Acceleration Studies Foundation, co-Founder of the Evo-Devo research community, and CEO of Foresight University. Most recently, Smart authored Introduction to Foresight, which in my view is a “one-of-a-kind all-in-one instruction manual, methodological encyclopedia, and daily work bible for both amateur and professional futurists or foresighters.”

During our 2-hour conversation with John Smart, we cover a variety of interesting topics such as the biggest tech changes since our 1st interview; machine vs human sentience; China’s totalitarianism and our new geostrategic global realignment; Citizen’s Diplomacy, propaganda, and the Russo-Ukrainian War; foresight, futurism and grappling with uncertainty; John’s Introduction to Foresight; Alvin Toffler’s 3P model aka the Evo-Devo Classic Foresight Pyramid; why the future is both predicted and created despite our anti-prediction and freedom bias; Moore’s Law and Accelerating Change; densification and dematerialization; definition and timeline to general AI; evolutionary vs developmental dynamics; autopoiesis and practopoiesis; existential threats and whether we live in a child-proof universe; the Transcension Hypothesis.

My favorite quote that I will take away from this interview with John Smart is:

Development of a Helmholtz free energy equation of state for fluid and solid phases via artificial neural networks

The article presents an equation of state (EoS) for fluid and solid phases using artificial neural networks. This EoS accurately models thermophysical properties and predicts phase transitions, including the critical and triple points. This approach offers a unified way to understand different states of matter.
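The core idea can be illustrated with a minimal sketch (a hypothetical, untrained toy network; the paper's actual architecture and training are not reproduced here): one small neural network maps density and temperature to the Helmholtz free energy per particle, and thermodynamic properties such as pressure then follow by differentiation rather than from separate models.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny MLP a(rho, T) -> Helmholtz free energy per particle.
# Illustrative random weights; a real EoS network would be fitted to data.
W1 = rng.normal(size=(2, 16)); b1 = np.zeros(16)
W2 = rng.normal(size=(16, 1)); b2 = np.zeros(1)

def helmholtz(rho, T):
    h = np.tanh(np.array([rho, T]) @ W1 + b1)
    return (h @ W2 + b2).item()

def pressure(rho, T, eps=1e-5):
    # Thermodynamic identity P = rho^2 * (da/drho) at fixed T,
    # approximated here with a central finite difference.
    da = (helmholtz(rho + eps, T) - helmholtz(rho - eps, T)) / (2 * eps)
    return rho**2 * da

p = pressure(0.8, 1.2)
```

The appeal of this formulation is consistency: because every property is derived from a single free-energy surface, quantities like pressure and entropy automatically satisfy the same thermodynamic relations across fluid and solid phases.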

OpenAI Employee Says They’ve “Already Achieved AGI”

Just a few days after the full release of OpenAI’s o1 model, a company staffer is now claiming that the company has achieved artificial general intelligence (AGI).

“In my opinion,” OpenAI employee Vahid Kazemi wrote in a post on X-formerly-Twitter, “we have already achieved AGI and it’s even more clear with O1.”

If you were anticipating a fairly massive caveat, though, you weren’t wrong.

Xpeng plans mass-production of flying cars, AI robots by 2026

Chinese firm Xpeng announced its plans to mass-produce flying cars and humanoid robots by next year.

He Xiaopeng, XPeng Motors’ chairman and CEO, stated that if the project remains on track, XPeng could be the first company to mass-produce flying cars globally, reports a Chinese online daily.

The company’s Iron humanoid robot is already in use at the EV maker’s Guangzhou factory, and XPeng plans to start mass production. By 2026, humanoid robots with entry-level Level 3 capabilities are expected to enter moderate-scale commercial production in China, Xiaopeng added.


Chinese EV maker XPeng aims to mass-produce flying cars and humanoid robots, with Level 3 robots set for commercial production by 2026.

Silicone that moves

Empa researchers are working on artificial muscles that can keep up with the real thing. They have now developed a method of producing the soft and elastic, yet powerful structures using 3D printing. One day, these could be used in medicine or robotics – and anywhere else where things need to move at the touch of a button.


A team of researchers from Empa’s Laboratory for Functional Polymers is working on actuators made of soft materials. Now, for the first time, they have developed a method for producing such complex components using a 3D printer. The so-called dielectric elastic actuators (DEA) consist of two different silicone-based materials: a conductive electrode material and a non-conductive dielectric. These materials interlock in layers. “It’s a bit like interlacing your fingers,” explains Empa researcher Patrick Danner. If an electrical voltage is applied to the electrodes, the actuator contracts like a muscle. When the voltage is switched off, it relaxes to its original position.

3D printing such a structure is not trivial, Danner knows. Despite their very different electrical properties, the two soft materials should behave very similarly during the printing process. They should not mix but must still hold together in the finished actuator. The printed “muscles” must be as soft as possible so that an electrical stimulus can cause the required deformation. Added to this are the requirements that all 3D printable materials must fulfill: They must liquefy under pressure so that they can be extruded out of the printer nozzle. Immediately thereafter, however, they should be viscous enough to retain the printed shape. “These properties are often in direct contradiction,” says Danner. “If you optimize one of them, three others change … usually for the worse.”
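The actuation principle behind these printed muscles can be made concrete with a back-of-the-envelope calculation (the numbers below are typical textbook values for dielectric elastomers, not figures from the Empa article): the electrostatic (Maxwell) pressure squeezing the dielectric layer is p = ε₀·εᵣ·(V/d)², and the resulting thickness strain is roughly that pressure divided by the elastomer's Young's modulus.

```python
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def maxwell_pressure(voltage, thickness, eps_r):
    """Electrostatic pressure on a dielectric elastomer layer, in Pa."""
    e_field = voltage / thickness  # electric field, V/m
    return EPS0 * eps_r * e_field ** 2

# Assumed illustrative values: 4 kV across a 50-micron silicone layer,
# relative permittivity ~3, Young's modulus ~1 MPa.
p = maxwell_pressure(4e3, 50e-6, 3.0)
strain = p / 1e6
```

This is why softness matters so much in the printed materials: for a fixed drive voltage, halving the stiffness roughly doubles the achievable strain, while a stiffer print barely moves.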

Ultra-broadband photonic chip boosts optical signals to reshape high-speed data transmission

Modern communication networks rely on optical signals to transfer vast amounts of data. But just like a weak radio signal, these optical signals need to be amplified to travel long distances without losing information.

The most common amplifiers, erbium-doped fiber amplifiers (EDFAs), have served this purpose for decades, enabling longer transmission distances without the need for frequent signal regeneration. However, they operate within a limited spectral bandwidth, restricting the expansion of optical networks.
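For a sense of scale (a rough calculation of my own, using the commonly quoted C-band limits of roughly 1530–1565 nm and c ≈ 3×10⁸ m/s), the conventional EDFA gain window spans only a few terahertz of optical bandwidth:

```python
C = 3.0e8  # speed of light, m/s

def band_thz(lambda_short_nm, lambda_long_nm):
    """Optical bandwidth in THz between two wavelengths given in nm."""
    f_high = C / (lambda_short_nm * 1e-9)
    f_low = C / (lambda_long_nm * 1e-9)
    return (f_high - f_low) / 1e12

c_band = band_thz(1530, 1565)  # roughly 4.4 THz
```

Everything an EDFA-based link carries must fit inside that window, which is why broader-band amplifiers are attractive for expanding network capacity.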

To meet the growing demand for high-speed data transmission, researchers have been seeking ways to develop more powerful, flexible, and compact amplifiers. As AI accelerators and high-performance computing systems handle ever-increasing amounts of data, the limitations of existing amplifiers are becoming more evident.

Why GPT cannot think like us

Artificial Intelligence (AI), particularly large language models like GPT-4, has shown impressive performance on reasoning tasks. But does AI truly understand abstract concepts, or is it just mimicking patterns? A new study from the University of Amsterdam and the Santa Fe Institute reveals that while GPT models perform well on some analogy tasks, they fall short when the problems are altered, highlighting key weaknesses in AI’s reasoning capabilities. The work is published in Transactions on Machine Learning Research.

Analogical reasoning is the ability to draw a comparison between two different things based on their similarities in certain respects. It is one of the most common ways in which human beings try to understand the world and make decisions. An example of analogical reasoning: coffee is to cup as soup is to ??? (the answer being: bowl).

Large language models (LLMs) like GPT-4 perform well on various tests, including those requiring analogical reasoning. But can AI models truly engage in general, robust reasoning or do they over-rely on patterns from their training data? This study by language and AI experts Martha Lewis (Institute for Logic, Language and Computation at the University of Amsterdam) and Melanie Mitchell (Santa Fe Institute) examined whether GPT models are as flexible and robust as humans in making analogies.
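The flavor of the tests used in this line of research can be sketched with a letter-string analogy, a classic probe of abstract rule induction (this toy solver is my own illustration of the task format, not the authors' code or method):

```python
def solve_letter_analogy(src, tgt, probe):
    """Infer a per-position alphabet shift from src to tgt, apply it to probe."""
    shifts = [ord(b) - ord(a) for a, b in zip(src, tgt)]
    return "".join(chr(ord(c) + s) for c, s in zip(probe, shifts))

# "abc" -> "abd" advances the last letter by one;
# applying the same rule to "ijk" yields "ijl".
answer = solve_letter_analogy("abc", "abd", "ijk")
```

The study's key move is to perturb such problems (e.g., permuted or unfamiliar alphabets): humans still apply the abstract rule, while GPT models degrade sharply, suggesting pattern matching rather than robust abstraction.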