
Researchers in the emerging field of spatial computing have developed a prototype augmented reality headset that uses holographic imaging to overlay full-color, 3D moving images on the lenses of what would appear to be an ordinary pair of glasses. Unlike the bulky headsets of present-day augmented reality systems, the new approach delivers a visually satisfying 3D viewing experience in a compact, comfortable, and attractive form factor suitable for all-day wear.

“Our headset appears to the outside world just like an everyday pair of glasses, but what the wearer sees through the lenses is an enriched world overlaid with vibrant, full-color 3D computed imagery,” said Gordon Wetzstein, an associate professor of electrical engineering at Stanford University and an expert in the fast-emerging field of spatial computing.

Wetzstein and a team of engineers introduce their device in a new paper in the journal Nature (“Full-colour 3D holographic augmented-reality displays with metasurface waveguides”).

Autonomous, AI-based players are coming to a gaming experience near you, and a new startup, Altera, is joining the fray to build this new guard of AI agents.

The company announced Wednesday that it raised $9 million in an oversubscribed seed round, co-led by First Spark Ventures (Eric Schmidt’s deep-tech fund) and Patron (the seed-stage fund co-founded by Riot Games alums).

The funding follows a $2 million pre-seed round that Altera raised from a16z SPEEDRUN and others in January of this year. Altera now plans to use the new capital to hire more scientists, engineers, and other team members to support product development and growth.

Elon Musk’s unique management style at Tesla, which relies on small, highly technical teams, the removal of underperforming employees, and challenging deadlines, has been crucial to the company’s success.

Questions to inspire discussion: Who is Andrej Karpathy? Andrej Karpathy is a highly respected computer scientist who served as the Director of AI and Autopilot at Tesla and co-founded OpenAI.

Google DeepMind’s newly launched AlphaFold Server is the most accurate tool in the world for predicting how proteins interact with other molecules throughout the cell. It is a free platform that scientists around the world can use for non-commercial research. With just a few clicks, biologists can harness the power of AlphaFold 3 to model structures composed of proteins, DNA, RNA and a selection of ligands, ions and chemical modifications.
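To make the inputs concrete, the short Python sketch below assembles a hypothetical job description combining a protein chain, a DNA strand, a ligand and ions into a single prediction request. The field names (proteinChain, dnaSequence, ligand, ion, modelSeeds) and the batch JSON layout are assumptions modeled loosely on AlphaFold Server's job-upload format, not a verified schema; in practice most users simply enter these inputs through the web interface.

import json

# Hypothetical job description for a protein + DNA + ligand + ion complex.
# Field names are assumptions modeled on AlphaFold Server's batch JSON
# upload format; check the official documentation for the exact schema.
job = {
    "name": "example_protein_dna_ligand_complex",
    "modelSeeds": [],  # leave empty to let the server choose seeds
    "sequences": [
        {"proteinChain": {"sequence": "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQA", "count": 1}},
        {"dnaSequence": {"sequence": "ATGCGGTTACCA", "count": 1}},  # single DNA strand
        {"ligand": {"ligand": "CCD_ATP", "count": 1}},              # ATP by its CCD code
        {"ion": {"ion": "MG", "count": 2}},                         # two magnesium ions
    ],
}

# The batch upload takes a list of such jobs in one JSON file.
with open("alphafold_server_jobs.json", "w") as fh:
    json.dump([job], fh, indent=2)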

AlphaFold Server will help scientists make novel hypotheses to test in the lab, speeding up workflows and enabling further innovation. Our platform gives researchers an accessible way to generate predictions, regardless of their access to computational resources or their expertise in machine learning.

Experimental protein-structure prediction can take about the length of a PhD and cost hundreds of thousands of dollars. Our previous model, AlphaFold 2, has been used to predict hundreds of millions of structures, which would have taken hundreds of millions of researcher-years at the current rate of experimental structural biology.

The AlphaFold 3 model is a collaboration between Google DeepMind and Isomorphic Labs.

Proteins are the molecular machines that sustain every cell and organism, and knowing what they look like will be critical to untangling how they function normally and malfunction in disease. Now researchers have taken a huge stride toward that goal with the development of new machine learning algorithms that can predict the folded shapes of not only proteins but other biomolecules with unprecedented accuracy.

In a paper published today in Nature, Google DeepMind and its spinoff company Isomorphic Labs announced the latest iteration of their AlphaFold program, AlphaFold 3, which can predict the structures of proteins, DNA, RNA, ligands and other biomolecules, either alone or bound together in different embraces. The findings come on the heels of a similar update to another deep-learning structure-prediction algorithm, RoseTTAFold All-Atom, published in March in Science.

The Beijing Humanoid Robot Innovation Center has unveiled Tiangong, an electrically driven, general-purpose humanoid capable of stable running at 6 km/h and of tackling slopes and stairs in “blind conditions,” i.e. without relying on visual perception.

The Beijing Humanoid Robot Innovation Center was set up in November last year as “the first provincial-level humanoid robot innovation center in China.” It is part of a new technology hub that is home to more than a hundred robotics companies, which together form a complete industrial chain spanning core components, applications development and complete robot builds.

The center is a joint venture between Beijing Yizhuang Investment Holdings Limited, UBTech Robotics, Xiaomi, and Beijing Jingcheng Machinery Electric. Its aim is to “undertake five key tasks, including the development of general-purpose humanoid robot prototypes and general-purpose large-scale humanoid robot models.”