
Google is launching new updates for Maps that are part of its plan to make the navigation app more immersive and intuitive for users, the company announced today at its event in Paris.

Most notably, the company announced that Immersive View is rolling out starting today in London, Los Angeles, New York, San Francisco and Tokyo. Immersive View, which Google first announced at I/O in May 2022, is designed to help you plan ahead and get a deeper understanding of a city before you visit it. The company plans to launch Immersive View in more cities, including Amsterdam, Dublin, Florence and Venice in the coming months.

The feature fuses billions of Street View and aerial images to create a digital model of the world. It also layers information on top of the digital model, such as details about the weather, traffic and how busy a location may be. For instance, say you’re planning to visit the Rijksmuseum in Amsterdam and want to get an idea of it before you go. You can use Immersive View to virtually soar over the building to get a better idea of what it looks like and where the entrances are located. You can also see what the area looks like at different times of the day and what the weather will be like. Immersive View can also show you nearby restaurants, and lets you look inside them to see if they would be an ideal spot for you.

“To create these true-to-life scenes, we use neural radiance fields (NeRF), an advanced AI technique that transforms ordinary pictures into 3D representations,” Google explained in a blog post. “With NeRF, we can accurately recreate the full context of a place including its lighting, the texture of materials and what’s in the background. All of this allows you to see if a bar’s moody lighting is the right vibe for a date night or if the views at a cafe make it the ideal spot for lunch with friends.”
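Google hasn’t shared implementation details beyond that description, but the core NeRF idea can be sketched in a few lines: a small neural network maps a 3D point and viewing direction to a colour and density, and a pixel’s colour is formed by compositing those values along a camera ray. The NumPy sketch below uses random, untrained weights and invented layer sizes purely to illustrate the structure; a real system would fit the weights to the Street View and aerial photos of a scene.

```python
import numpy as np

# Illustrative sketch of the NeRF idea, not Google's implementation: a tiny
# network maps (3D point, view direction) to (colour, density), and a pixel's
# colour is accumulated along a camera ray by volume rendering. Weights are
# random and untrained; layer sizes are arbitrary.

rng = np.random.default_rng(0)

def positional_encoding(p, num_freqs=4):
    """Encode coordinates as sines/cosines of increasing frequency."""
    freqs = 2.0 ** np.arange(num_freqs)
    angles = np.outer(p, freqs)                       # shape (3, num_freqs)
    return np.concatenate([np.sin(angles), np.cos(angles)], axis=-1).ravel()

IN = 3 * 2 * 4 + 3                                    # encoded point + raw view direction
W1, b1 = rng.normal(size=(IN, 64)) * 0.1, np.zeros(64)
W2, b2 = rng.normal(size=(64, 4)) * 0.1, np.zeros(4)

def radiance_field(point, view_dir):
    """F(x, d) -> (rgb, sigma): colour and volume density at a 3D point."""
    h = np.maximum(np.concatenate([positional_encoding(point), view_dir]) @ W1 + b1, 0.0)
    out = h @ W2 + b2
    rgb = 1.0 / (1.0 + np.exp(-out[:3]))              # colour in [0, 1]
    sigma = np.log1p(np.exp(out[3]))                  # non-negative density
    return rgb, sigma

def render_ray(origin, direction, near=0.0, far=4.0, n_samples=32):
    """Alpha-composite the colours of points sampled along one camera ray."""
    ts = np.linspace(near, far, n_samples)
    delta = ts[1] - ts[0]
    colour, transmittance = np.zeros(3), 1.0
    for t in ts:
        rgb, sigma = radiance_field(origin + t * direction, direction)
        alpha = 1.0 - np.exp(-sigma * delta)
        colour += transmittance * alpha * rgb
        transmittance *= 1.0 - alpha
    return colour

print(render_ray(np.zeros(3), np.array([0.0, 0.0, 1.0])))
```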

The company also announced that a new feature called “glanceable directions” is rolling out globally on Android and iOS in the coming months. The feature lets you track your journey right from the route overview or your lock screen. You’ll see updated ETAs and where to make your next turn, and if you decide to take another path, the app will update your trip automatically. Google notes that previously, this information was only visible by unlocking your phone, opening the app and using comprehensive navigation mode. Glanceable directions can be used whenever you’re navigating with the app, whether you’re walking, biking or taking public transit.

Researchers have crafted an artificial intelligence (AI) system capable of deciphering fragments of ancient Babylonian texts. Dubbed the “Fragmentarium,” the algorithm holds the potential to piece together some of the oldest stories ever written by humans, including the Epic of Gilgamesh.

The work comes from a team at Ludwig Maximilian University in Germany who have been attempting to digitize every surviving Babylonian cuneiform tablet since 2018.

The problem with understanding Babylonian texts is that the narratives are written on clay tablets, which today exist only in countless fragments. The fragments are stored at facilities that are continents away from each other, such as the British Museum in London and the Iraq Museum in Baghdad.
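The article doesn’t describe how the Fragmentarium decides that two fragments belong together, but the underlying matching problem can be illustrated with a toy score over transliterated text: count how many character n-grams two pieces share. The sample fragments and the scoring rule below are simplified stand-ins, not the team’s method.

```python
# Toy illustration of fragment matching, not the Fragmentarium's algorithm:
# score candidate joins between transliterated fragments by the overlap of
# their character n-grams. The sample transliterations are simplified.

def ngrams(text, n=4):
    text = "".join(text.lower().split())              # drop spaces, normalise case
    return {text[i:i + n] for i in range(len(text) - n + 1)}

def overlap_score(a, b, n=4):
    """Jaccard similarity of the two fragments' n-gram sets."""
    ga, gb = ngrams(a, n), ngrams(b, n)
    return len(ga & gb) / max(1, len(ga | gb))

known_line = "sha naqba imuru ishdi mati"             # simplified Gilgamesh incipit
fragments = {
    "F1": "naqba imuru ishdi",                        # overlaps the known line
    "F2": "ana harrani sha alaktasha",                # unrelated fragment
}
for name, frag in fragments.items():
    print(name, round(overlap_score(known_line, frag), 3))
```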

Robotic systems have become increasingly sophisticated over the past decades, improving both in terms of precision and capabilities. This is gradually facilitating the partial automation of some surgical and medical procedures.

Researchers at Tsinghua University have recently developed a soft robotic tentacle that could potentially be used to improve the efficiency of some standard medical procedures. This tentacle, introduced in IEEE Transactions on Robotics, is controlled through the team’s novel control algorithm, together with active cooling of the material that actuates the robot.

“A neurosurgeon came to our lab one day and asked about the possibility of developing a soft, controllable catheter to assist him in his neurosurgeries,” Huichan Zhao, one of the researchers who carried out the study, told Tech Xplore. “He would like this soft catheter to be extremely safe to its surroundings and able to bend in different directions. Starting from these requirements, we developed a soft robotic tentacle.”
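The study’s actual control algorithm isn’t spelled out in the article. As a rough, hypothetical sketch of the general idea (closed-loop control of a thermally driven soft segment with heating and active cooling), the snippet below runs a proportional controller on an invented first-order thermal model; the gains, the model and the bending relation are all assumptions, not the authors’ design.

```python
# Hypothetical sketch, not the authors' algorithm: a proportional controller
# heats or actively cools a thermally driven soft segment to track a target
# bending angle. The first-order thermal model and all gains are invented.

def simulate(target_angle_deg, steps=300, dt=0.05):
    temp = 25.0           # segment temperature (C)
    ambient = 25.0        # ambient temperature (C)
    k_angle = 1.2         # invented: degrees of bend per degree C above ambient
    kp = 0.8              # proportional gain on the angle error

    angle = 0.0
    for _ in range(steps):
        angle = k_angle * (temp - ambient)
        error = target_angle_deg - angle
        heat_power = max(0.0, kp * error)     # positive error -> heat
        cool_power = max(0.0, -kp * error)    # negative error -> active cooling
        temp += dt * (2.0 * heat_power - 3.0 * cool_power - 0.1 * (temp - ambient))
    return angle

# A proportional-only loop settles a little short of the target, as expected.
print(f"settled bend: {simulate(40.0):.1f} deg (target 40.0)")
```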

This video shows the humble beginnings and the 40-year development journey of Boston Dynamics’ robot Atlas. We start with the first model developed in 1983 at the MIT Leg Lab and go all the way to the current version of Atlas, shown in 2023 on the Boston Dynamics YouTube channel.

Atlas is an incredibly advanced humanoid robot developed by the robotics company Boston Dynamics. It is a bipedal robot that stands 6 feet tall and weighs 180 pounds, and it is capable of performing a variety of tasks, including walking, running, jumping, and even performing backflips.

Atlas is equipped with a variety of sensors, including cameras, laser rangefinders, and inertial measurement units. This allows it to perceive its environment and interact with it in a variety of ways. It is also equipped with a powerful onboard computer that allows it to process data and make decisions.
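Boston Dynamics doesn’t publish Atlas’s software, but one textbook way readings like these get combined is a complementary filter, which blends a gyroscope’s smooth but drifting rate signal with an accelerometer’s noisy but drift-free tilt estimate. The sketch below runs that filter on synthetic data and is purely generic, not anything from Atlas.

```python
import random

# Generic illustration, not Boston Dynamics' code: a complementary filter
# fuses an integrated gyroscope rate (smooth, drifts) with an accelerometer
# tilt estimate (noisy, drift-free) to track pitch. The data is synthetic.

def complementary_filter(gyro_rates, accel_pitches, dt=0.01, alpha=0.98):
    pitch = accel_pitches[0]
    estimates = []
    for rate, accel_pitch in zip(gyro_rates, accel_pitches):
        pitch = alpha * (pitch + rate * dt) + (1.0 - alpha) * accel_pitch
        estimates.append(pitch)
    return estimates

# Synthetic scenario: the robot pitches forward at 0.1 rad/s for 5 seconds.
random.seed(0)
true_pitch = [0.1 * 0.01 * i for i in range(500)]
gyro = [0.1 + random.gauss(0, 0.01) for _ in range(500)]       # rad/s, noisy
accel = [p + random.gauss(0, 0.05) for p in true_pitch]        # rad, noisier

est = complementary_filter(gyro, accel)
print(f"final estimate: {est[-1]:.3f} rad, true pitch: {true_pitch[-1]:.3f} rad")
```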

⏱️ Timestamps:
0:00 1983
0:19 1985
0:51 1989–1993
1:11 1994
2:07 2009
2:15 2011
2:25 2013
3:23 2016
4:26 2017
5:37 2021
6:26 2023

Google’s wannabe rival to ChatGPT is off to a shaky start after its launch video featured a glaring error about the JWST and exoplanets. As a result of the blunder, the market value of parent company Alphabet plunged by around $100 billion on Wednesday.

In a promo video posted on Wednesday, Google’s new artificial intelligence (AI) chatbot, Bard, was asked to describe the discoveries made by JWST to a nine-year-old child. It replied that JWST was the first telescope ever to take pictures of a planet outside the Solar System.

Unfortunately, however, that is not true.

A computing device that uses tiny magnetic swirls to process data has been trained to recognize handwritten numbers. Developed by RIKEN researchers, the device shows that miniature magnetic whirlpools could be useful for realizing low-energy computing systems inspired by the brain.

Our brains contain complex networks of neurons that transmit and process information. Artificial neural networks mimic this behavior, and are particularly adept at tasks such as pattern recognition.

But such networks consume a lot of power when run on conventional silicon chips. So researchers are developing alternative platforms that are specially designed for brain-inspired computing, an approach known as neuromorphic computing.
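For a sense of the benchmark being referred to, the snippet below trains an ordinary software classifier on scikit-learn’s built-in 8x8 handwritten digits. It is a conventional-software stand-in for the task, not the skyrmion device or the RIKEN team’s training scheme.

```python
# Conventional-software stand-in for the task, not the skyrmion device: train
# a simple linear classifier on scikit-learn's built-in 8x8 handwritten digits.

from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

digits = load_digits()                                # 1,797 images of digits 0-9
X_train, X_test, y_train, y_test = train_test_split(
    digits.data / 16.0, digits.target, test_size=0.25, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"test accuracy: {clf.score(X_test, y_test):.3f}")
```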

Here’s my new article for Newsweek. Give it a read with an open mind! The day of superintelligence is coming, and we can attempt to make sure humans survive by being respectful to AI. This article explores some of my work at Oxford.


The discussion about giving rights to artificial intelligences and robots has revolved around whether they deserve or are entitled to them. Juxtapositions of this debate with women’s suffrage and racial injustices are often brought up in philosophy departments like the one at the University of Oxford, where I’m a graduate student.

A survey concluded that 90 percent of AI experts believe the singularity—a moment when AI becomes so smart that our biological brains can no longer understand it—will happen this century. Extend the trajectory of AI’s intelligence growth over the past 25 years another 50 years forward at the same rate, and it points to AI becoming exponentially smarter than humans.