
Google is introducing Bard, its artificially intelligent chatbot, to other members of its digital family—including Gmail, Maps and YouTube—as it seeks to ward off competitive threats posed by similar technology from OpenAI and Microsoft.

Bard’s expanded capabilities, announced Tuesday, will be provided through an English-only extension that will enable users to allow the chatbot to mine information embedded in their Gmail accounts, as well as pull directions from Google Maps and find helpful videos on YouTube. The extension will also open a door for Bard to fetch travel information from Google Flights and extract information from documents stored on Google Drive.

Google is promising to protect users’ privacy by prohibiting human reviewers from seeing the potentially sensitive information that Bard gets from Gmail or Drive, while also promising that the data won’t be used as part of the main way the Mountain View, California, company makes money—selling ads tailored to people’s interests.

“Our camera uses a completely new method to achieve high-speed imaging. It has an imaging speed and spatial resolution similar to commercial high-speed cameras but uses off-the-shelf components.”

Scientists from the Institut National De La Recherche Scientifique (INRS) in Canada, in collaboration with Concordia University and Meta Platforms Inc., unveiled a game-changing camera that could revolutionize high-speed imaging.

The diffraction-gated real-time ultrahigh-speed mapping (DRUM) camera, introduced in a recent paper published in Optica, is poised to democratize ultrafast imaging, making it accessible for a wide range of applications.

Mapping molecular structure to odor perception is a key challenge in olfaction. Here, we use graph neural networks (GNN) to generate a Principal Odor Map (POM) that preserves perceptual relationships and enables odor quality prediction for novel odorants. The model is as reliable as a human in describing odor quality: on a prospective validation set of 400 novel odorants, the model-generated odor profile more closely matched the trained panel mean (n=15) than did the median panelist. Applying simple, interpretable, theoretically-rooted transformations, the POM outperformed chemoinformatic models on several other odor prediction tasks, indicating that the POM successfully encoded a generalized map of structure-odor relationships. This approach broadly enables odor prediction and paves the way toward digitizing odors.

One-Sentence Summary An odor map achieves human-level odor description performance and generalizes to diverse odor-prediction tasks.
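As a rough illustration of the kind of model the abstract describes, the sketch below builds a tiny message-passing graph network in plain NumPy: atoms are nodes, bonds are edges, node features are mixed with their neighbors' for a few rounds, pooled into a single molecule embedding (its position in an "odor map"), and read out as scores for a handful of odor descriptors. The layer sizes, descriptor labels, and random untrained weights are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

# Minimal message-passing GNN sketch (illustrative only, not the paper's model).
# A molecule is a graph: node features (atom types) and an adjacency matrix.
# Each layer mixes a node's features with its neighbors', then the graph is
# pooled into one embedding and read out as scores for a few odor descriptors.

ODOR_DESCRIPTORS = ["fruity", "floral", "musky", "sulfurous"]  # illustrative labels

rng = np.random.default_rng(0)

def gnn_odor_profile(node_feats, adj, num_layers=3, hidden=16):
    """node_feats: (n_atoms, f) array; adj: (n_atoms, n_atoms) 0/1 adjacency."""
    h = node_feats
    # Add self-loops and row-normalize so each node averages itself + neighbors.
    a = adj + np.eye(adj.shape[0])
    a = a / a.sum(axis=1, keepdims=True)
    for _ in range(num_layers):
        w = rng.normal(scale=0.5, size=(h.shape[1], hidden))  # random (untrained) weights
        h = np.tanh(a @ h @ w)                                # message passing + nonlinearity
    embedding = h.mean(axis=0)                                # pool atoms -> molecule embedding
    w_out = rng.normal(scale=0.5, size=(hidden, len(ODOR_DESCRIPTORS)))
    logits = embedding @ w_out
    return 1 / (1 + np.exp(-logits))                          # per-descriptor ratings in [0, 1]

# Toy "molecule": 4 atoms in a chain, one-hot atom-type features.
feats = np.eye(4)
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
print(dict(zip(ODOR_DESCRIPTORS, gnn_odor_profile(feats, adj).round(2))))
```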

The authors have declared no competing interest.

Astronomers have been observing and studying Mars for centuries, but the systematic mapping of Mars began in the 19th century.

Maps have played an essential role in helping us better comprehend our home planet. These tools visually represent the Earth’s surface features, allowing us to navigate, study geography, monitor changes, and conduct scientific studies.

As space organizations prepare to make humanity an interplanetary species, it is critical to sketch and construct a Mars map for better exploration and possible habitation.

Not everyone uses their bicycle at night, but for those who do, safety is key! You probably already have a bicycle helmet, a safety light, and reflectors, but what about seeing the road or path in front of you? Well, this ingenious invention helps map the terrain changes in front of you while you’re riding. It’s called Lumigrids, and it’s essentially a mini projector that you mount on the front of your bicycle handlebars. It projects a grid of laser light onto the ground ahead, mapping terrain changes such as bumps, curbs, potholes, and more, making it easy for you to see and maneuver around them.

The creators of the Lumigrids bicycle grid projection light claim that it’s an improvement over regular bicycle lights, which cast shadows over ridges, bumps, and depressions and make it harder for the rider to react properly to the terrain ahead. Because Lumigrids projects a grid, it is much easier to spot problems with the terrain in front of you, whether a spot is concave, convex, or otherwise uneven: if the lines of the grid don’t line up properly, you’ll know there’s something in front of you.
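To make the geometry concrete, here is a hypothetical sketch of the idea: on flat ground the projected grid lines land at known positions, so an observed line that shifts beyond a threshold indicates a raised or depressed surface. The `flag_terrain` helper and its threshold are invented for illustration; the real product simply projects the grid optically and lets the rider judge the deformation by eye, while this code mimics that judgment numerically.

```python
import numpy as np

# Illustrative sketch (not Lumigrids' actual processing): on flat ground the
# projected grid lines are evenly spaced, so a vertical deviation of an observed
# grid line from its expected image position signals a bump (line shifted up)
# or a dip (line shifted down).

def flag_terrain(expected_rows, observed_rows, threshold=0.05):
    """expected_rows / observed_rows: grid-line positions (normalized image y).
    Returns a label per grid line: 'flat', 'bump', or 'dip'."""
    deviation = np.asarray(observed_rows) - np.asarray(expected_rows)
    labels = []
    for d in deviation:
        if abs(d) < threshold:
            labels.append("flat")
        elif d < 0:            # line shifted up in the image -> surface is raised
            labels.append("bump")
        else:                  # line shifted down -> surface is depressed
            labels.append("dip")
    return labels

expected = np.linspace(0.2, 0.9, 8)                              # flat-ground positions
observed = expected + np.array([0, 0, -0.08, 0, 0, 0.1, 0, 0])   # one bump, one pothole
print(flag_terrain(expected, observed))
```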

You can change the settings of the grid projector to emit a larger or smaller grid depending on your needs: a small grid for a single bicycle at lower speeds, a higher-speed setting for a single bicycle that emits a larger grid, and an extra-large grid for use with multiple riders.

Quantum computers, systems that perform computations by exploiting quantum mechanical phenomena, could help to efficiently tackle several complex tasks, including so-called combinatorial optimization problems. These are problems that entail identifying the optimal combination of variables among several options and under a series of constraints.

Quantum computers that can tackle these problems should be based on reliable hardware systems with intricate all-to-all node connectivity. This connectivity ultimately allows problems of arbitrary dimension to be mapped directly onto the hardware.

Researchers at the University of Minnesota recently developed a new electronic device based on standard complementary metal-oxide-semiconductor (CMOS) technology that could support this crucial mapping process. The device, introduced in a paper in Nature Electronics, is a physics-based Ising solver composed of coupled ring oscillators in an all-to-all node-connected architecture.
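For context on what "mapping a problem onto the hardware" means, the sketch below shows the textbook reduction of a small max-cut instance to an Ising model and finds the ground state by brute force. This is a generic illustration of the kind of mapping such solvers target, not the Minnesota group's oscillator-based hardware; the toy graph and the exhaustive search are assumptions made for the example.

```python
import itertools
import numpy as np

# Sketch of the standard mapping from a combinatorial problem (max-cut) to an
# Ising model. Each graph node becomes a spin s_i in {-1, +1}; each edge (i, j)
# contributes a coupling J_ij = +1. Minimizing the Ising energy
# E = sum_{i<j} J_ij * s_i * s_j pushes connected spins to opposite signs,
# i.e. toward a maximum cut. Here we brute-force the ground state; real
# Ising hardware instead relaxes/anneals toward it.

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]   # toy 4-node graph
n = 4
J = np.zeros((n, n))
for i, j in edges:
    J[i, j] = J[j, i] = 1.0                         # antiferromagnetic coupling per edge

def ising_energy(spins):
    return sum(J[i, j] * spins[i] * spins[j] for i in range(n) for j in range(i + 1, n))

best = min(itertools.product([-1, 1], repeat=n), key=ising_energy)
cut = [(i, j) for i, j in edges if best[i] != best[j]]
print("ground-state spins:", best, "-> edges cut:", len(cut))
```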

Another excellent paper from Johann G. Danzl’s research group. They develop methods that combine novel negative staining techniques, deep learning, and super-resolution STED microscopy or expansion microscopy to facilitate nanoscale-resolution imaging of brain tissue volumes. They also show semi-automated (and some fully automated) segmentation of neuron morphology and identification of synapses. Very cool work and I’m excited to see how it influences connectomics in the future! #brain #neuroscience #imaging #microscopy #neurotech


Mapping fixed brain samples with extracellular labeling and optical microscopy reveals synaptic connections.

“Operating and navigating in home environments is very challenging for robots. Every home is unique, with a different combination of objects in distinct configurations that change over time. To address the diversity a robot faces in a home environment, we teach the robot to perform arbitrary tasks with a variety of objects, rather than program the robot to perform specific predefined tasks with specific objects. In this way, the robot learns to link what it sees with the actions it is taught. When the robot sees a specific object or scenario again, even if the scene has changed slightly, it knows what actions it can take with respect to what it sees.

We teach the robot using an immersive telepresence system, in which there is a model of the robot, mirroring what the robot is doing. The teacher sees what the robot is seeing live, in 3D, from the robot’s sensors. The teacher can select different behaviors to instruct and then annotate the 3D scene, such as associating parts of the scene to a behavior, specifying how to grasp a handle, or drawing the line that defines the axis of rotation of a cabinet door. When teaching a task, a person can try different approaches, making use of their creativity to use the robot’s hands and tools to perform the task. This makes leveraging and using different tools easy, allowing humans to quickly transfer their knowledge to the robot for specific situations.

Historically, robots, like most automated cars, continuously perceive their surroundings, predict a safe path, then compute a plan of motions based on this understanding. At the other end of the spectrum, new deep learning methods compute low-level motor actions directly from visual inputs, which requires a significant amount of data from the robot performing the task. We take a middle ground. Our teaching system only needs to understand things around it that are relevant to the behavior being performed. Instead of linking low-level motor actions to what it sees, it uses higher-level behaviors. As a result, our system does not need prior object models or maps. It can be taught to associate a given set of behaviors to arbitrary scenes, objects, and voice commands from a single demonstration of the behavior. This also makes the system easy to understand and makes failure conditions easy to diagnose and reproduce.”
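A highly simplified way to picture "linking what it sees with the actions it is taught" is a lookup from a scene descriptor to a taught behavior. The sketch below is an illustrative assumption rather than the described system: it stores one (scene embedding, behavior) pair per demonstration and, at run time, recalls the behavior taught in the most similar scene, returning nothing when no taught scene is close enough.

```python
import numpy as np

# Toy sketch (not the actual system): teaching-by-demonstration modeled as a
# nearest-neighbor lookup from a scene descriptor to a high-level behavior.

class BehaviorMemory:
    def __init__(self):
        self.scenes = []      # scene embeddings (e.g. from a vision encoder)
        self.behaviors = []   # behavior names taught by the human

    def teach(self, scene_embedding, behavior_name):
        """Store a single demonstration: one scene paired with one behavior."""
        self.scenes.append(np.asarray(scene_embedding, dtype=float))
        self.behaviors.append(behavior_name)

    def recall(self, scene_embedding, min_similarity=0.8):
        """Return the behavior taught in the most similar scene, or None."""
        query = np.asarray(scene_embedding, dtype=float)
        best_name, best_sim = None, -1.0
        for scene, name in zip(self.scenes, self.behaviors):
            sim = scene @ query / (np.linalg.norm(scene) * np.linalg.norm(query))
            if sim > best_sim:
                best_name, best_sim = name, sim
        return best_name if best_sim >= min_similarity else None

memory = BehaviorMemory()
memory.teach([0.9, 0.1, 0.0], "open_cabinet_door")   # taught once
memory.teach([0.0, 0.8, 0.6], "wipe_counter")
print(memory.recall([0.85, 0.15, 0.05]))             # slightly changed scene -> open_cabinet_door
```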

Deep Learning (DL) performs classification tasks using a series of layers. To effectively execute these tasks, local decisions are made progressively along the layers. But can we instead make an all-encompassing decision by choosing the most influential path to the output, rather than deciding locally at each layer?

In an article published today in Scientific Reports, researchers from Bar-Ilan University in Israel answer this question with a resounding “yes.” Pre-existing deep architectures have been improved by updating the most influential paths to the output.

“One can think of it as two children who wish to climb a mountain with many twists and turns. One of them chooses the fastest local route at every intersection while the other uses binoculars to see the entire path ahead and picks the shortest and most significant route, just like Google Maps or Waze. The first child might get a head start, but the second will end up winning,” said Prof. Ido Kanter, of Bar-Ilan’s Department of Physics and Gonda (Goldschmied) Multidisciplinary Brain Research Center, who led the research.
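To make the local-versus-global contrast concrete, the toy sketch below compares a greedy hop through a small stack of weight matrices with an exhaustive search for the single most influential path, scoring a path by the product of absolute weights along it. The scoring rule, layer sizes, and brute-force search are illustrative assumptions, not the method used in the Scientific Reports paper.

```python
import itertools
import numpy as np

# Toy contrast between local (greedy) and global path selection through a layered
# network. "Influence" of a path is taken here as the product of absolute weights
# along it -- an illustrative stand-in, not the paper's actual criterion.

rng = np.random.default_rng(1)
layers = [rng.normal(size=(4, 4)) for _ in range(3)]   # 3 weight matrices, 4 units per layer

def greedy_path(start_unit):
    """At each layer, hop to the unit with the largest single-step weight."""
    path, score, unit = [start_unit], 1.0, start_unit
    for w in layers:
        nxt = int(np.argmax(np.abs(w[unit])))
        score *= abs(w[unit, nxt])
        unit = nxt
        path.append(unit)
    return path, score

def best_global_path(start_unit):
    """Enumerate all unit sequences and keep the one with the largest weight product."""
    best_path, best_score = None, -1.0
    for combo in itertools.product(range(4), repeat=len(layers)):
        score, unit = 1.0, start_unit
        for w, nxt in zip(layers, combo):
            score *= abs(w[unit, nxt])
            unit = nxt
        if score > best_score:
            best_path, best_score = [start_unit, *combo], score
    return best_path, best_score

print("greedy:", greedy_path(0))    # locally best hops
print("global:", best_global_path(0))  # the truly most influential path
```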