New imaging technique reconstructs the shapes of hidden objects

A new imaging technique developed by MIT researchers could enable quality-control robots in a warehouse to peer through a cardboard shipping box and see that the handle of a mug buried under packing peanuts is broken.

Their approach leverages millimeter wave (mmWave) signals, the same type of signals used in Wi-Fi, to create accurate 3D reconstructions of objects that are blocked from view.

The waves can travel through common obstacles like plastic containers or interior walls, and reflect off hidden objects. The system, called mmNorm, collects those reflections and feeds them into an algorithm that estimates the shape of the object’s surface.
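How might reflections become a surface? One plausible reading of the name mmNorm is that it estimates surface normals: a mirror-like surface reflects most strongly back toward antennas that sit along its normal, so each antenna position can "vote" for a normal direction in proportion to the reflection strength it sees. The numpy sketch below illustrates that voting idea under an assumed scan geometry; it is a toy illustration, not the authors' published pipeline.

```python
import numpy as np

# Toy normal-voting estimator (illustrative only; not MIT's actual mmNorm pipeline).
# A specular surface reflects most strongly back to antennas that sit along its
# normal, so power-weighted antenna directions approximate that normal.

surface_point = np.array([0.0, 0.0, 0.0])   # hidden surface patch
true_normal = np.array([0.0, 0.0, 1.0])     # ground truth (for checking the estimate)

# Assumed antenna positions on a plane 1 m above the object (e.g. a scanning arm).
rng = np.random.default_rng(0)
antennas = rng.uniform(-0.5, 0.5, size=(200, 3))
antennas[:, 2] = 1.0

# Unit vectors from the patch toward each antenna.
dirs = antennas - surface_point
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)

# Simulated reflection power: strongest when the antenna lies along the normal
# (mirror-like behavior), plus a little measurement noise.
power = np.maximum(dirs @ true_normal, 0.0) ** 8 + 0.01 * rng.random(200)

# Each antenna votes for its direction, weighted by received power.
estimate = (power[:, None] * dirs).sum(axis=0)
estimate /= np.linalg.norm(estimate)

print("estimated normal:", np.round(estimate, 3))  # ~ [0, 0, 1]
```

In a full system, repeating an estimate like this over many candidate surface patches and integrating the normals would yield the 3D surface shape.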

Open House

Have you heard about the crazy guys who bought an entire tower to convert it into a vertical village? Yes, that’s us.

Do you want to walk the 16-floor tower and explore the space? Still on the fence about becoming a citizen? Do you have questions about how you can get involved and co-create? Wanna hear updates on what happened in the last two weeks? This event is for you! 👩‍🚀

About us: We are transforming a 16-floor tower in the heart of San Francisco into a self-governed vertical village: a hub for frontier technologies and creative arts. Eight themed floors will be dedicated to creating tier-one labs, spanning AI, Ethereum, biotech, neuroscience, longevity, robotics, human flourishing, and arts & music. These floors will house innovators and creators pushing the boundaries of human potential in a post-AI-singularity world.

AI-powered ChronoFlow uses stellar rotation rates to estimate stars’ ages

Figuring out the ages of stars is fundamental to understanding many areas of astronomy, yet it remains a challenge since stellar ages can’t be ascertained through observation alone. So, astronomers at the University of Toronto have turned to artificial intelligence for help.

Their new model, called ChronoFlow, uses a dataset of rotating stars in clusters and machine learning to determine how the speed at which a star rotates changes as it ages.

The approach, published recently in The Astrophysical Journal, predicts the ages of stars with an accuracy previously impossible to achieve with analytical models.
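The underlying idea, known as gyrochronology, is that a star's rotation slows predictably as it ages, so a model calibrated on clusters of known age can map rotation back to age. Here is a minimal sketch of that inverse mapping using synthetic data and an off-the-shelf regressor; the spin-down law and numbers are invented for illustration, and ChronoFlow itself is a purpose-built model, not this.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Toy gyrochronology: rotation period grows roughly as a power law of age
# (a Skumanich-like spin-down, assumed here), so learn the inverse mapping
# from (period, mass) back to age.
rng = np.random.default_rng(1)

age_gyr = rng.uniform(0.1, 10.0, size=5000)        # ages of cluster stars, Gyr
mass = rng.uniform(0.5, 1.2, size=5000)            # stellar mass, solar units
# Assumed spin-down law: P ~ sqrt(age), modulated by mass, with scatter.
period_days = 25 * np.sqrt(age_gyr / 4.6) * (1.3 - 0.3 * mass)
period_days *= rng.normal(1.0, 0.1, size=5000)     # observational scatter

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(np.column_stack([period_days, mass]), age_gyr)

# Predict the age of a Sun-like star rotating once every ~25 days.
print(model.predict([[25.0, 1.0]]))                # ~4.6 Gyr
```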

AI-washing: Are we fooling ourselves with artificial intelligence?

This week, my laundry machine broke. Bummer. Like any normal person, I dove into research mode, scrolling through endless product pages, feature lists, and discounts. After a while, one machine caught my attention: a Samsung model labelled “AI-enhanced”. (Not going to lie, it came with a solid discount, making it one of the cheapest among the top-rated options, but I was really excited about the AI feature.)

In full honesty (this is not a sponsored post), it works great. From what I could observe, when you throw the clothes inside, the machine weighs the load and, based on that, selects the most suitable wash settings: water level, soap, temperature, and timing. Yes, it’s clever, efficient, and genuinely helpful. But it got me thinking: is that really AI, or just well-designed automation?
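For contrast, here is what "just well-designed automation" could look like under the hood: a fixed rule table keyed on load weight, with no learning anywhere. The thresholds and settings below are made up for illustration; I have no idea what Samsung actually ships.

```python
# A plain rule table: weigh the load, look up the cycle. No model, no learning.
# All thresholds and settings are invented for illustration.

def pick_cycle(load_kg: float) -> dict:
    if load_kg < 2.0:
        return {"water_l": 30, "temp_c": 30, "minutes": 40, "detergent_ml": 40}
    elif load_kg < 5.0:
        return {"water_l": 50, "temp_c": 40, "minutes": 60, "detergent_ml": 70}
    else:
        return {"water_l": 70, "temp_c": 40, "minutes": 80, "detergent_ml": 100}

print(pick_cycle(3.4))
# {'water_l': 50, 'temp_c': 40, 'minutes': 60, 'detergent_ml': 70}
```

If the machine is doing something like this, it's automation. If it's adjusting those numbers based on patterns learned from sensor data, the "AI" label starts to earn its keep.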

In business, as in life, those who tell the most compelling story tend to succeed. We love to use fancy words, set expectations high, and hold attention long enough to turn curiosity into conversion. Labels matter. Language sells. That is where the “washing” comes in.

Robotic eyes mimic human vision for superfast response to extreme lighting

In blinding bright light or pitch-black dark, our eyes can adjust to extreme lighting conditions within a few minutes. The human vision system, including the eyes, neurons, and brain, can also learn and memorize settings to adapt faster the next time we encounter similar lighting challenges.

In an article published in Applied Physics Letters, researchers at Fuzhou University in China created a machine vision sensor that uses quantum dots to adapt to extreme changes in light far faster than the human eye can—in about 40 seconds—by mimicking eyes’ key behaviors. Their results could be a game changer for robotic vision and autonomous vehicle safety.

“Quantum dots are nano-sized semiconductors that efficiently convert light to electrical signals,” said author Yun Ye.
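As a rough mental model of "adapt, then adapt faster next time," think of a sensor gain that relaxes toward the level matching the new lighting, but starts from a remembered setting on repeat exposures. The toy simulation below illustrates this; the dynamics and time constants are assumptions, not the paper's device physics.

```python
# Toy adaptation model: sensor gain relaxes exponentially toward the gain that
# matches the current light level; a remembered setting lets it start closer
# on the next encounter. All numbers are invented for illustration.

def adapt(start_gain, target_gain, tau_s=10.0, dt=1.0, tol=0.05):
    """Return seconds until gain settles within tol of the target gain."""
    gain, t = start_gain, 0.0
    while abs(gain - target_gain) > tol * target_gain:
        gain += (target_gain - gain) * (dt / tau_s)
        t += dt
    return t

bright_gain, dark_gain = 1.0, 20.0

# First trip into the dark: start from the bright-light gain.
print(adapt(bright_gain, dark_gain))        # slow first adaptation (~28 s here)

# Next time: start from a remembered gain near the dark setting.
print(adapt(0.8 * dark_gain, dark_gain))    # much faster (~14 s here)
```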

Quantum translator on a chip: This device converts microwaves to light

Imagine if future quantum computers could talk to each other across cities, countries, even continents without losing their spooky quantum connection. A team of researchers from the University of British Columbia (UBC) has created a device that could help us realize this future.

The device, a tiny silicon chip, works like a universal translator, converting signals between two incompatible forms of energy: microwaves and light. It can convert up to 95% of a quantum signal in both directions, with almost zero noise.
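That 95% figure matters because a long-distance link needs a conversion at each end, and the efficiencies multiply. A quick back-of-the-envelope, ignoring loss and noise in the fiber itself:

```python
# Back-of-the-envelope: converting microwave -> light at the sender and
# light -> microwave at the receiver multiplies the two efficiencies.
eta_one_way = 0.95
eta_link = eta_one_way ** 2          # two conversions per end-to-end link
print(f"{eta_link:.4f}")             # 0.9025: ~90% of the signal survives
```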

Laser-wielding device is like an anti-aircraft system for mosquitoes

While we still may not have flying cars, robot butlers or food replicators, you can now order something else you may have long dreamt of. It’s called the Photon Matrix, and it uses lasers to track and kill airborne mosquitoes.

Currently the subject of an Indiegogo campaign, the Chinese-designed device is claimed to be capable of detecting a mosquito and gauging its distance, orientation and body size within just 3 milliseconds.

It does so using a LiDAR (light detection and ranging) module, which determines the locations of objects by emitting laser light pulses and measuring how long the light takes to be reflected back by whatever it hits. When a mosquito is detected in this fashion, a second, galvanometer-directed laser instantly zaps the insect.
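The time-of-flight arithmetic behind that ranging step is simple: the round trip covers twice the target distance at the speed of light. A minimal sketch (note that the 3 ms figure above is the claimed detection budget, not the pulse's travel time, which is on the order of nanoseconds):

```python
C = 299_792_458.0  # speed of light, m/s

def distance_m(round_trip_s: float) -> float:
    """Distance to a target from a LiDAR pulse's round-trip time."""
    return C * round_trip_s / 2

# A mosquito 1.5 m away returns the pulse in about 10 nanoseconds.
print(distance_m(10e-9))   # ~1.5 m
```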

Grok in Tesla’s Leaked / Tesla Expands Robotaxi Invites / Surprising EV Sales Data

Questions to inspire discussion.

🏭 Q: How much LFP cell production capacity does Tesla have in Nevada? A: Tesla’s Nevada facility has equipment for 7–8 GWh of LFP cell production across two production lines, potentially for EV and grid storage cells.

Tesla Business and Sales.

📊 Q: What are the expectations for Tesla’s Q2 P&D (production and deliveries) report? A: Troy Teslike estimates 356,000 deliveries, while analyst consensus is 385,000, but P&D reports are becoming less significant for Tesla’s business model.

💰 Q: What’s crucial for Tesla to become a multi-trillion dollar company? A: Unsupervised FSD rollout and Optimus sales at scale are key, not just increased car or megapack sales.

🇨🇳 Q: How are Tesla’s China sales performing? A: Latest week sales were 20,684 units, down 4.9% QoQ and 11% YoY, but year-to-date figures show Tesla China is closing the gap, down only 4.6% YoY.

Will AI need a body to come close to human-like intelligence?

The first robot I remember is Rosie from The Jetsons, soon followed by the urbane C-3PO and his faithful sidekick R2-D2 in The Empire Strikes Back. But my first disembodied AI was Joshua, the computer in WarGames who tried to start a nuclear war – until it learned about mutually assured destruction and chose to play chess instead.

At age seven, this changed me. Could a machine understand ethics? Emotion? Humanity? Did artificial intelligence need a body? These fascinations deepened as fictional non-human intelligences grew more complex, from the android Bishop in Aliens and Data in Star Trek: TNG to, more recently, Samantha in Her and Ava in Ex Machina.

But these aren’t just speculative questions anymore. Roboticists today are wrestling with whether artificial intelligence needs a body, and if so, what kind.