
Summary: Neuroscientists have achieved a groundbreaking feat by mapping the early visual system of a parasitic wasp smaller than a grain of salt.

Utilizing advanced imaging technologies, they reconstructed the entire system at the synaptic level, a first for any animal. Despite its miniature size, the wasp’s brain exhibited immense complexity, with functions and neural circuits paralleling those of larger brains.

This research not only deepens understanding of neural principles but also holds potential for enhancing artificial intelligence.

In a press release, Bujack, who creates scientific visualizations at Los Alamos National Laboratory, called the current mathematical models used for color perception incorrect and said they require a “paradigm shift.”

A surprise finding

Being able to accurately model human color perception has a tremendous impact on automating image processing, computer graphics, and visualization. Bujack’s team first set out to develop algorithms that would automatically enhance color maps used in data visualization to make them easier to read.
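The team’s actual algorithms are not described in this excerpt, but the basic idea behind assessing a color map’s perceptual behavior can be sketched: sample the map, convert the samples to the CIELAB color space, and check how evenly the perceived color difference (ΔE) accumulates from step to step. The sketch below is only an illustration of that idea, not the Los Alamos method; the colormap name and sample count are arbitrary choices.

```python
import numpy as np
import matplotlib.pyplot as plt
from skimage.color import rgb2lab

def colormap_delta_e(name="viridis", n=256):
    """Step-to-step perceptual differences along a matplotlib colormap.

    Illustrative only: uses the simple CIE76 distance (Euclidean in CIELAB)
    rather than any particular published color-difference model.
    """
    cmap = plt.get_cmap(name)
    rgb = cmap(np.linspace(0.0, 1.0, n))[:, :3]          # drop the alpha channel
    lab = rgb2lab(rgb.reshape(1, -1, 3)).reshape(-1, 3)  # sRGB -> CIELAB
    return np.linalg.norm(np.diff(lab, axis=0), axis=1)  # dE between neighbors

if __name__ == "__main__":
    d = colormap_delta_e()
    # A perceptually uniform map has a nearly constant step-to-step difference.
    print(f"mean dE: {d.mean():.3f}, spread (std/mean): {d.std() / d.mean():.2%}")
```

An automatic enhancement step could, in principle, re-space a map’s control points until these differences are roughly constant, which is the kind of readability improvement the excerpt describes the team pursuing.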

The result, aspern Seestadt, reclaims a brownfield area to create a development that embraces new urban ideals while retaining the classical urban structure of old Vienna.

As aspern Seestadt has evolved, it has emerged as one of Europe’s most dynamic planned communities and an incubator for smart city initiatives. Geographic information system (GIS) technology helps planners implement clean energy and low-emission strategies and aids the long-range planning and implementation to ensure that aspern Seestadt achieves a unique balance of sustainability and livability.

Vienna’s sustainable city within a city can serve as a model for developing and developed countries dealing with housing crises.


Mapping tools help the City of Vienna and its partners test and apply smart city concepts to the aspern Seestadt planned development.

The New York Police Department (NYPD) is implementing a new security measure at the Times Square subway station. It’s deploying a security robot to patrol the premises, which authorities say is meant to “keep you safe.” We’re not talking about a RoboCop-like machine or any human-like biped robot — the K5, which was made by California-based company Knightscope, looks like a massive version of R2-D2. Albert Fox Cahn, the executive director of privacy rights group Surveillance Technology Oversight Project, has a less flattering description for it, though, and told The New York Times that it’s like a “trash can on wheels.”

K5 weighs 420 pounds and is equipped with four cameras that can record video but not audio. As you can guess from the image above, the machine also doesn’t come with arms — it couldn’t quite reciprocate Mayor Eric Adams’ attempt at making a heart. The robot will patrol the station from midnight until 6 AM throughout its two-month trial run. But K5 won’t be doing full patrols for a while, since it will spend its first two weeks mapping out the station and roaming only the main areas, not the platforms.

It’s not quite clear whether the NYPD’s machine will livestream its camera footage, or whether law enforcement will be keeping an eye on what it captures. Adams said during the event introducing the robot that it will “record video that can be reviewed in case of an emergency or a crime.” It apparently won’t be using facial recognition, though Cahn is concerned that the technology could eventually be incorporated into the machine. Obviously, K5 doesn’t have the capability to respond to actual emergencies in the station and can’t physically or verbally apprehend suspects. The only real-time help it can offer is connecting people to a live person to report an incident or ask questions, provided they’re able to press a button on the robot.

Google’s new Bard extension will apparently summarize emails, plan your travels, and — oh, yeah — fabricate emails that you never actually sent.

Last week, Google plugged its large language model-powered chatbot called Bard into a bevy of Google products including Gmail, Google Drive, Google Docs, Google Maps, and the Google-owned YouTube, among other apps and services. While it’s understandable that Google would want to marry its newer generative AI efforts with its already-established product lineup, it seems that Google might have moved a little too fast.

According to New York Times columnist Kevin Roose, Bard isn’t the helpful inbox assistant that Google apparently wants it to be — at least not yet. In his testing, Roose says, the AI hallucinated entire email correspondences that never took place.

Google is introducing Bard, its artificially intelligent chatbot, to other members of its digital family—including Gmail, Maps and YouTube—as it seeks to ward off competitive threats posed by similar technology from OpenAI and Microsoft.

Bard’s expanded capabilities, announced Tuesday, will be provided through an English-only extension that will enable users to allow the chatbot to mine information embedded in their Gmail accounts as well as pull directions from Google Maps and find helpful videos on YouTube. The extension will also open the door for Bard to fetch travel information from Google Flights and extract information from documents stored on Google Drive.

Google is promising to protect users’ privacy by prohibiting human reviewers from seeing the potentially sensitive information that Bard gets from Gmail or Drive, while also promising that the data won’t be used to feed the main way the Mountain View, California, company makes money—selling ads tailored to people’s interests.

“Our camera uses a completely new method to achieve high-speed imaging. It has an imaging speed and spatial resolution similar to commercial high-speed cameras but uses off-the-shelf components.”

Scientists from the Institut National de la Recherche Scientifique (INRS) in Canada, in collaboration with Concordia University and Meta Platforms Inc., unveiled a camera that could revolutionize high-speed imaging.

The diffraction-gated real-time ultrahigh-speed mapping (DRUM) camera, introduced in a recent paper published in Optica, is poised to democratize ultrafast imaging, making it accessible for a wide range of applications.

Mapping molecular structure to odor perception is a key challenge in olfaction. Here, we use graph neural networks (GNN) to generate a Principal Odor Map (POM) that preserves perceptual relationships and enables odor quality prediction for novel odorants. The model is as reliable as a human in describing odor quality: on a prospective validation set of 400 novel odorants, the model-generated odor profile more closely matched the trained panel mean (n=15) than did the median panelist. Applying simple, interpretable, theoretically-rooted transformations, the POM outperformed chemoinformatic models on several other odor prediction tasks, indicating that the POM successfully encoded a generalized map of structure-odor relationships. This approach broadly enables odor prediction and paves the way toward digitizing odors.

One-Sentence Summary An odor map achieves human-level odor description performance and generalizes to diverse odor-prediction tasks.

The authors have declared no competing interest.
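
The abstract above describes building the Principal Odor Map with a graph neural network over molecular graphs. The exact architecture and feature set are not reproduced here, but the general pattern — message passing over atoms, pooling to a molecule-level embedding that serves as the map coordinate, and a multi-label head for odor descriptors — might look like the following sketch. The layer sizes, descriptor count, and class name are placeholders, not the authors’ published values.

```python
import torch
from torch import nn
from torch_geometric.nn import GCNConv, global_mean_pool

class OdorMapGNN(nn.Module):
    """Sketch of a structure-to-odor model: atoms as nodes, bonds as edges.

    The pooled embedding plays the role of a point in an "odor map";
    the head predicts a multi-label odor-descriptor profile.
    Dimensions below are illustrative, not the published architecture.
    """

    def __init__(self, atom_features=16, hidden=64, map_dim=32, n_descriptors=138):
        super().__init__()
        self.conv1 = GCNConv(atom_features, hidden)
        self.conv2 = GCNConv(hidden, hidden)
        self.to_map = nn.Linear(hidden, map_dim)       # molecule's coordinates in the map
        self.head = nn.Linear(map_dim, n_descriptors)  # e.g. "fruity", "musky", ...

    def forward(self, x, edge_index, batch):
        h = self.conv1(x, edge_index).relu()           # message passing over atoms
        h = self.conv2(h, edge_index).relu()
        pooled = global_mean_pool(h, batch)            # one vector per molecule
        coords = self.to_map(pooled)
        return coords, torch.sigmoid(self.head(coords))
```

Training such a model against panel-rated descriptor profiles, and then checking whether its predictions land closer to the panel mean than the median panelist does, mirrors the prospective validation the abstract reports.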

Astronomers have been observing and studying Mars for centuries, but the systematic mapping of Mars began in the 19th century.

Maps have played an essential role in helping us better comprehend our home planet. These tools visually represent the Earth’s surface features, allowing us to navigate, study geography, monitor changes, and conduct scientific studies.

As space organizations prepare to make humanity an interplanetary species, it is critical to sketch and construct a Mars map for better exploration and possible habitation.