
On October 10, the European Space Agency (ESA) published interim data from its nearly decade-long Gaia mission. The data includes half a million new, faint stars in a massive cluster, over 380 possible cosmic lenses, and the positions of more than 150,000 asteroids within the solar system.


Launched in December 2013, Gaia is an astronomical observatory spacecraft with a mission to generate an accurate stellar census, thus mapping our galaxy and beyond. A more detailed picture of Earth’s place in the universe could help us better understand the diverse objects that make up the known universe.

Google Maps can now calculate rooftops’ solar potential, track air quality, and forecast pollen counts.

The platform recently launched a range of services, including the Solar API, which combines weather patterns with data pulled from aerial imagery to help understand rooftops’ solar potential. The tool aims to accelerate solar panel deployment by improving accuracy and reducing the number of site visits needed.
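As a quick illustration of how a developer might tap into that rooftop data, here is a minimal Python sketch of a Solar API request. The endpoint, parameters, and response fields below follow Google’s public Maps Platform documentation as best I recall; treat them as assumptions and check the current reference before relying on them.

```python
# Minimal sketch: look up solar potential for the building nearest a point.
# Endpoint and field names are assumptions based on Google's public docs.
import requests

API_KEY = "YOUR_MAPS_PLATFORM_KEY"  # placeholder key

params = {
    "location.latitude": 37.4219999,    # example coordinates
    "location.longitude": -122.0840575,
    "key": API_KEY,
}
resp = requests.get(
    "https://solar.googleapis.com/v1/buildingInsights:findClosest",
    params=params,
    timeout=10,
)
resp.raise_for_status()

potential = resp.json().get("solarPotential", {})
print(potential.get("maxArrayPanelsCount"), "panels max,",
      potential.get("maxSunshineHoursPerYear"), "sunshine hours/year")
```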

As seasonal allergies get worse every year, the Pollen API shows updated information on the most common allergens in 65 countries, using a mix of machine learning and wind patterns. Similarly, the Air Quality API provides detailed information on local air quality by drawing on data from multiple sources, such as government monitoring stations, satellites, and live traffic, and can also show areas affected by wildfires.
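A similar sketch for the Air Quality API’s current-conditions lookup is below. The request shape follows the public documentation as best I recall, so the URL and field names should be treated as assumptions and verified against the official reference.

```python
# Minimal sketch: fetch current air-quality indexes for a location.
# URL and response fields are assumptions based on Google's public docs.
import requests

API_KEY = "YOUR_MAPS_PLATFORM_KEY"  # placeholder key
body = {"location": {"latitude": 48.2082, "longitude": 16.3738}}  # example: Vienna

resp = requests.post(
    f"https://airquality.googleapis.com/v1/currentConditions:lookup?key={API_KEY}",
    json=body,
    timeout=10,
)
resp.raise_for_status()

for index in resp.json().get("indexes", []):
    print(index.get("displayName"), index.get("aqi"), index.get("category"))
```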

Imagine you’re in an airplane with two pilots, one human and one computer. Both have their “hands” on the controllers, but they’re always looking out for different things. If they’re both paying attention to the same thing, the human gets to steer. But if the human gets distracted or misses something, the computer quickly takes over.

Meet the Air-Guardian, a system developed by researchers at the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL). As modern pilots grapple with an onslaught of information from multiple monitors, especially during critical moments, Air-Guardian acts as a proactive co-pilot: a partnership between human and machine, rooted in understanding attention.

But how does it determine attention, exactly? For humans, it uses eye-tracking, and for the machine, it relies on something called “saliency maps,” which pinpoint where attention is directed. The maps serve as visual guides highlighting key regions within an image, helping to interpret the behavior of intricate algorithms. Air-Guardian identifies early signs of potential risks through these attention markers, instead of only intervening during safety breaches like traditional autopilot systems.
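To make the idea of a saliency map concrete, here is a generic gradient-based sketch in Python (PyTorch). This is one common, simple way to compute saliency and is illustrative only; the model and input are placeholders, not the CSAIL team’s actual pipeline.

```python
# Illustrative gradient-based saliency map: how strongly each input pixel
# influences the model's decision. Not the Air-Guardian implementation.
import torch
import torchvision.models as models

model = models.resnet18(weights=None).eval()          # any differentiable vision model
image = torch.rand(1, 3, 224, 224, requires_grad=True)  # placeholder camera frame

logits = model(image)
score = logits[0, logits.argmax()]  # score of the predicted class
score.backward()                    # gradients flow back to the pixels

# Take the strongest gradient across color channels for each pixel.
saliency = image.grad.abs().max(dim=1).values  # shape: (1, 224, 224)
print(saliency.shape)
```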

Even with all the technological advancements in recent years, autonomous systems have never been able to keep up with top-level human racing drone pilots. However, it looks like that gap has been closed with Swift – an autonomous system developed by the University of Zurich’s Robotics and Perception Group.

Previous research projects have come close, but they relied on optical motion capture setups in a tightly controlled environment. In contrast, Swift is completely independent of remote inputs and uses only an onboard computer, IMU, and camera for real-time navigation and control. It does, however, require a machine learning model pretrained for the specific track, which maps the drone’s estimated position, velocity, and orientation directly to control inputs. The details of how the system works are well explained in the video after the break.
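To show what “mapping an estimated state directly to control inputs” can look like, here is a toy Python sketch of a learned racing policy. The layer sizes, state layout, and outputs are assumptions for illustration, not the actual Swift architecture from the University of Zurich paper.

```python
# Toy sketch of a state-to-command policy for a racing drone.
# State layout, network size, and outputs are illustrative assumptions.
import torch
import torch.nn as nn

class RacingPolicy(nn.Module):
    def __init__(self, state_dim: int = 13, action_dim: int = 4):
        super().__init__()
        # Assumed state: position (3) + velocity (3) + orientation quaternion (4)
        # + relative position of the next gate (3).
        self.net = nn.Sequential(
            nn.Linear(state_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, action_dim),  # e.g. collective thrust + 3 body rates
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)

policy = RacingPolicy()
state = torch.randn(1, 13)   # placeholder state estimate from onboard sensors
command = policy(state)      # would be handed to the flight controller
print(command.shape)
```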

The paper linked above contains a few more interesting details. Swift was able to win 60% of the time, and its lap times were significantly more consistent than those of the human pilots. While human pilots were often faster on certain sections of the course, Swift was faster overall. It picked more efficient trajectories over multiple gates, whereas the human pilots seemed to plan at most one gate in advance. On the other hand, human pilots could recover quickly from a minor crash, while Swift did not include any crash recovery.

Summary: Neuroscientists have achieved a groundbreaking feat by mapping the early visual system of a parasitic wasp smaller than a grain of salt.

Utilizing advanced imaging technologies, they reconstructed the entire system at the synaptic level, a first for any animal. Despite its miniature size, the wasp’s brain exhibited immense complexity, with functions and neural circuits paralleling larger brains.

This research not only deepens understanding of neural principles but also holds potential for enhancing artificial intelligence.

In a press release, Bujack, who creates scientific visualizations at Los Alamos National Laboratory, called the current mathematical models used for color perception incorrect, saying they require a “paradigm shift.”

A surprise finding

Being able to accurately model human color perception has a tremendous impact on automating image processing, computer graphics, and visualization. Bujack’s team first set out to develop algorithms that would automatically enhance color maps used in data visualization to make it easier to read them.

The result, aspern Seestadt, reclaims a brownfield area to create a development that embraces new urban ideals while retaining the classical urban structure of old Vienna.

As aspern Seestadt has evolved, it has emerged as one of Europe’s most dynamic planned communities and an incubator for smart city initiatives. Geographic information system (GIS) technology helps planners implement clean energy and low-emission strategies and supports the long-range planning and implementation needed to ensure that aspern Seestadt achieves a unique balance of sustainability and livability.

Vienna’s sustainable city within a city can be a model for developing and developed countries dealing with housing crises.


Mapping tools help the City of Vienna and its partners test and apply smart city concepts to the aspern Seestadt planned development.

The New York Police Department (NYPD) is implementing a new security measure at the Times Square subway station. It’s deploying a security robot to patrol the premises, which authorities say is meant to “keep you safe.” We’re not talking about a RoboCop-like machine or any human-like biped robot — the K5, which was made by California-based company Knightscope, looks like a massive version of R2-D2. Albert Fox Cahn, the executive director of privacy rights group Surveillance Technology Oversight Project, has a less flattering description for it, though, and told The New York Times that it’s like a “trash can on wheels.”

K5 weighs 420 pounds and is equipped with four cameras that can record video but not audio. As you can guess from the image above, the machine also doesn’t come with arms — it didn’t quite ignore Mayor Eric Adams’ attempt at making a heart. The robot will patrol the station from midnight until 6 AM during its two-month trial run. But K5 won’t be doing full patrols for a while, since it’s spending its first two weeks mapping out the station and roaming only the main areas, not the platforms.

It’s not quite clear whether NYPD’s machine will be livestreaming its camera footage, or whether law enforcement will be keeping an eye on what it captures. Adams said during the event introducing the robot that it will “record video that can be reviewed in case of an emergency or a crime.” It apparently won’t be using facial recognition, though Cahn is concerned that the technology could eventually be incorporated into the machine. Obviously, K5 doesn’t have the capability to respond to actual emergencies in the station and can’t physically or verbally apprehend suspects. The only real-time help it can provide is connecting people to a live person to report an incident or ask questions, provided they’re able to press a button on the robot.

Google’s new Bard extension will apparently summarize emails, plan your travels, and — oh, yeah — fabricate emails that you never actually sent.

Last week, Google plugged its large language model-powered chatbot called Bard into a bevy of Google products including Gmail, Google Drive, Google Docs, Google Maps, and the Google-owned YouTube, among other apps and services. While it’s understandable that Google would want to marry its newer generative AI efforts with its already-established product lineup, it seems that Google might have moved a little too fast.

According to New York Times columnist Kevin Roose, Bard isn’t the helpful inbox assistant that Google apparently wants it to be — at least not yet. In his testing, Roose says, the AI hallucinated entire email correspondences that never took place.