
Update: China's Moon mission has now returned samples from the #Moon to #Earth. Why this is important, especially for the origin of life:


On June 1, China’s Chang’e-6 lander touched down in the South Pole–Aitken Basin, the largest, deepest, and oldest impact crater on the Moon. The probe almost immediately set to work drilling into the ground to collect about 2 kilograms of lunar material, which is already headed back to Earth, with a landing in Mongolia planned for June 25. It isn’t just planetary geologists who are excited about what the returning rocks and soil might reveal. If we’re lucky, the first samples from the lunar farside could also include some of the oldest fossils ever found.

The SPA basin, as it’s sometimes called, is the result of a gigantic impact that occurred between 4.2 and 4.3 billion years ago, at a time when the Moon and Earth were very close neighbors. The crater is roughly 2,500 kilometers (1,600 miles) in diameter and between 6.2 km and 8.2 km (3.9 to 5.1 mi) deep, encompassing several smaller craters like the Apollo basin, where Chang’e-6 landed, and Shackleton crater, parts of which lie in perpetual shadow.

The main focus of 21st-century lunar exploration is searching for natural resources such as water ice that could be turned into rocket fuel and drinking water for astronauts, as well as helium-3 that might someday fuel nuclear fusion reactors. Another potential scientific treasure is often overlooked, however. The Moon is the only place where we might find fossilized clues to the origin of life on Earth. On our own planet’s dynamic surface, hungry microbes would have destroyed such evidence a long time ago.

A new startup emerged out of stealth mode today to power the next generation of generative AI. Etched is a company that makes an application-specific integrated circuit (ASIC) to process transformers. The transformer is an architecture for designing deep learning models developed by Google, and it is now the powerhouse behind models like OpenAI’s GPT-4o in ChatGPT, Anthropic’s Claude, Google’s Gemini, and Meta’s Llama family. Etched set out to create an ASIC dedicated solely to processing transformer models, a chip called Sohu. The claim is that Sohu outperforms NVIDIA’s latest and greatest by an entire order of magnitude. Where a server with eight NVIDIA H100 GPUs pushes Llama-3 70B at 25,000 tokens per second, and the latest eight-GPU B200 “Blackwell” cluster pushes 43,000 tokens/s, an eight-chip Sohu cluster manages to output 500,000 tokens per second.
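As a quick sanity check on those figures, the claimed speedups can be computed directly from the throughput numbers quoted above (a minimal sketch; the numbers are the article's claims, not independent benchmarks):

```python
# Throughput figures quoted above (Llama-3 70B, tokens per second)
h100_8x = 25_000   # eight NVIDIA H100 GPUs
b200_8x = 43_000   # eight NVIDIA B200 "Blackwell" GPUs
sohu_8x = 500_000  # eight Etched Sohu ASICs (claimed)

# Claimed speedup of Sohu over each NVIDIA configuration
print(f"vs H100: {sohu_8x / h100_8x:.1f}x")  # 20.0x
print(f"vs B200: {sohu_8x / b200_8x:.1f}x")  # 11.6x
```

So even against the newer B200 cluster, the claim works out to roughly 11.6x, which is consistent with the "order of magnitude" framing.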

It is this foundation that AI is now disrupting, providing the non-expert with expert-like capabilities. But this progression is a fallacy. If we let a junior in a consulting firm, for example, use tools to create presentations that are better than what she could produce on her own, are we teaching her anything? Could she repeat the results with pen and paper? How will she gain the needed knowledge, critical thinking, and expertise if AI creates or assists with the work? It’s all very well that engineers can prompt the code they need, but does this make them good engineers?

The trend of heavily relying on AI automation to complete tasks is the face of the future. It’s here to stay. But there is a challenge we must acknowledge. We need to bridge two extremes. On one extreme is the irresistible temptation to benefit as much as possible from the automation AI provides. On the other is the need to let our employees battle through their work themselves so they improve their skills and grow to become the experts their industry needs. How can we do one without losing the other?

This article is not a rant aimed at stopping the progress of technology. There is no stopping it; we can only join it. The challenge is how to build experts and expertise in an AI-generated world. How can we benefit from the optimizations AI can provide without forgetting how to build boats and aqueducts, or how to manufacture paper, if we want to learn from the experience of the Portuguese, the Romans, and the Chinese? The challenge is not this or that but this and that. We want to benefit from AI, and we need to build a generation of new experts. But how do we connect these two dots?

In an interview at the Aspen Ideas Festival on Tuesday, Mustafa Suleyman, CEO of Microsoft AI, made it very clear that he admires OpenAI CEO Sam Altman.

CNBC’s Andrew Ross Sorkin asked what the plan will be when Microsoft’s enormous AI future isn’t so closely dependent on OpenAI, using a metaphor of winning a bicycling race. But Suleyman sidestepped.

“I don’t buy the metaphor that there is a finish line. This is another false frame,” he said. “We have to stop framing everything as a ferocious race.”