
Recently I saw a post on Twitter claiming that AI could be powered with quantum vacuum energy. The post was accompanied by a figure from a paper published in Nature. Unfortunately for the poster, but fortunately for science, the paper had nothing to do with extracting energy from the vacuum. Rather, it described the experimental realization of a new kind of transistor that uses the Casimir effect to mediate and amplify energy transfer.

Researchers have found that specifically trained LLMs can solve certain complex problems just as well when their written-out reasoning is replaced with dots like “…” instead of full sentences. This could make it harder to monitor and control what is happening inside these models.

The researchers trained Llama language models to solve a difficult math problem called “3SUM”, in which the model has to find three numbers in a list that add up to zero.

Usually, AI models solve such tasks by explaining the steps in full sentences, an approach known as “chain of thought” prompting. The researchers instead replaced these natural-language explanations with repeated dots, called filler tokens.
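For context, 3SUM simply asks whether any three numbers in a list sum to zero. The minimal brute-force sketch below (Python, for illustration only; it is not the researchers’ method or their filler-token training setup) shows what the model is being asked to compute:

from itertools import combinations

def three_sum(nums):
    # Return the first triple of numbers that sums to zero, or None.
    # O(n^3) brute force: enough to define the task the models were trained on.
    for a, b, c in combinations(nums, 3):
        if a + b + c == 0:
            return (a, b, c)
    return None

print(three_sum([4, -1, 7, -3, 2, 1]))  # -> (-3, 2, 1)

The point of the finding is that a trained model can reach the answer while emitting only “…” tokens in place of a written-out derivation like the loop above.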

Analyzing returned samples directly offers several advantages over having robotic explorers conduct the analysis on the surface of an asteroid or planet and then beam back the data.

It provides a window into how the surface of a celestial body has changed through its constant exposure to the harsh deep-space environment.

The scientists conducted their analysis using electron holography, a technique in which electron waves penetrate a material. This method can uncover key details about the sample’s structure and its magnetic and electric properties.

TSMC introduced its System-on-Wafer (TSMC-SoW™) technology, an innovative solution that brings revolutionary performance at the wafer level to address the future AI requirements of hyperscaler datacenters.

At the TSMC 2024 North America Technology Symposium, the company debuted its TSMC A16™ technology, featuring leading nanosheet transistors with an innovative backside power rail solution, planned for production in 2026 and bringing greatly improved logic density and performance.

The latest version of CoWoS allows TSMC to build silicon interposers roughly 3.3 times the size of a photomask (or reticle, which is 858 mm²). Thus, logic, eight HBM3/HBM3E memory stacks, I/O, and other chiplets can occupy up to 2,831 mm². The maximum substrate size is 80×80 mm.
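As a quick arithmetic check of those figures (values taken from the text above, not an official TSMC specification):

reticle_area_mm2 = 858        # one photomask / reticle
scale_factor = 3.3            # latest CoWoS interposer vs. one reticle
print(round(reticle_area_mm2 * scale_factor))  # ~2831, matching the 2,831 mm^2 quoted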

In an age marked by remarkable technological advancements, the field of robotics exemplifies humanity’s boundless potential for innovation. In recent years, certain types of robots have consistently captured our attention, with humanoid robots standing out as pioneers, alongside pre-programmed robots, autonomous robots, teleoperated robots, and augmenting robots.

With continued advancements in technology, humanoid robotics is pushing what was once considered pure sci-fi into reality. Engineered to emulate the human form both physically and cognitively, these robots are equipped with a sophisticated array of cameras, sensors, and cutting-edge AI and ML technologies. This enables them not only to perceive their surroundings but also to interact with humans in increasingly nuanced ways, from recognizing objects to sensing and responding to environmental cues.

With that in mind, the sector is poised for significant growth. According to research firm MarketsandMarkets, the humanoid robot market was valued at $1.8 billion in 2023 and is anticipated to reach $13.8 billion within the next five years, growing at a CAGR of 50.2%.
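Those two figures are consistent with the quoted growth rate; a quick check (numbers from the text, with the five-year forecast window assumed to run from 2023):

start_busd = 1.8    # 2023 market size, $ billions
cagr = 0.502        # compound annual growth rate
print(round(start_busd * (1 + cagr) ** 5, 1))  # ~13.8, matching the $13.8 billion projection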