
A new bio-inspired sensor can recognize moving objects in a single frame from a video and successfully predict where they will move to. This smart sensor, described in a Nature Communications paper, will be a valuable tool in a range of fields, including dynamic vision sensing, automatic inspection, industrial process control, robotic guidance, and autonomous driving technology.

Current motion detection systems need many components and complex algorithms performing frame-by-frame analyses, which makes them inefficient and energy-intensive. Inspired by the human visual system, researchers at Aalto University have developed a new neuromorphic vision technology that integrates sensing, memory, and processing in a single device that can detect motion and predict trajectories.

At the core of their technology is an array of photomemristors, light-sensitive elements that produce an electrical current in response to light. The current doesn’t immediately stop when the light is switched off. Instead, it decays gradually, which means that photomemristors can effectively “remember” whether they’ve been exposed to light recently. As a result, a sensor made from an array of photomemristors doesn’t just record instantaneous information about a scene, like a camera does, but also includes a dynamic memory of the preceding instants.
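To make the idea concrete, here is a minimal numerical sketch, not the Aalto group’s actual device model: each pixel in a simulated photomemristor array retains a decaying fraction of its past photocurrent (the DECAY constant and grid size are invented for illustration), so a single readout of a moving spot contains a fading trail that reveals both its position and its direction of travel.

```python
import numpy as np

DECAY = 0.6          # fraction of photocurrent retained per step (assumed value)
GRID = 32            # the sensor is a GRID x GRID photomemristor array

def expose(state: np.ndarray, light: np.ndarray) -> np.ndarray:
    """One time step: stored currents decay, illuminated pixels recharge."""
    return DECAY * state + light

# Simulate a bright spot moving left-to-right along one row of the array.
state = np.zeros((GRID, GRID))
row = GRID // 2
for col in range(10, 16):                 # six successive positions
    light = np.zeros((GRID, GRID))
    light[row, col] = 1.0                 # the spot illuminates one pixel
    state = expose(state, light)

# A single readout now contains the spot's recent history: the newest
# position is brightest and earlier positions are progressively dimmer.
trail = state[row, 10:16]
print(np.round(trail, 3))                 # [0.078 0.13 0.216 0.36 0.6 1.]

# The intensity gradient along the trail encodes the motion direction, so
# heading can be read out without comparing consecutive frames.
print("moving right" if trail[-1] > trail[0] else "moving left")
```

In the real device this memory arises from the physics of the photomemristor rather than from stored digital state, which is what lets a single element stand in for the separate sensing, memory, and processing hardware of conventional motion-detection systems.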

There are loopholes.


Try out my quantum mechanics course (and many others on math and science) on Brilliant using the link https://brilliant.org/sabine. You can get started for free, and the first 200 will get 20% off the annual premium subscription.

If you’ve been following my channel for a really long time, you might remember that some years ago I made a video about whether faster-than-light travel is possible. I was trying to explain why the arguments saying it’s impossible are inconclusive and we shouldn’t throw out the possibility too quickly, but I’m afraid I didn’t make my case very well. This video is a second attempt. Hopefully this time it’ll come across more clearly!

Year 2022


Technological advances are producing AI models that interact more effectively, precisely, and safely. Large language models (LLMs) have excelled at a variety of tasks in recent years, including question answering, summarization, and conversation. Dialogue particularly interests researchers because it allows for flexible and dynamic communication.

However, LLM-powered chat agents frequently present incorrect or fabricated information, use discriminatory language, or encourage unsafe conduct. Researchers may be able to create safer dialogue agents by learning from human feedback: using reinforcement learning driven by input from study participants, they can explore new training strategies that promise a safer system.

In its most recent paper, DeepMind unveiled Sparrow, a dialogue agent that reduces the risk of harmful and incorrect replies. Sparrow’s mission is to show how conversation agents can be trained to be more helpful, accurate, and safe.
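As a rough illustration of the underlying recipe, reinforcement learning from human feedback, here is a toy sketch: a reward model is fitted to pairwise human preferences with a Bradley-Terry (logistic) loss and then used to score candidate replies. The feature vectors, the hidden preference direction, and the linear reward model are all invented for illustration; DeepMind’s actual pipeline trains neural reward models (including rule-violation classifiers) and optimizes the full dialogue policy.

```python
import numpy as np

rng = np.random.default_rng(0)

# Each candidate reply is summarized by a feature vector; here we invent
# 4 features (e.g. helpfulness cues, evidence cited, rule violations).
DIM = 4
w = np.zeros(DIM)                          # reward-model weights to learn

def reward(x: np.ndarray) -> float:
    return float(w @ x)

# Synthetic "human feedback": pairs of (preferred, rejected) feature vectors
# generated from a hidden preference direction (an assumption of this toy).
true_w = np.array([1.0, 0.5, 2.0, -1.5])
pairs = []
for _ in range(500):
    a, b = rng.normal(size=DIM), rng.normal(size=DIM)
    pairs.append((a, b) if true_w @ a > true_w @ b else (b, a))

# Bradley-Terry / logistic loss: maximize P(preferred beats rejected).
LR = 0.05
for _ in range(200):
    grad = np.zeros(DIM)
    for good, bad in pairs:
        p = 1.0 / (1.0 + np.exp(-(w @ good - w @ bad)))
        grad += (1.0 - p) * (good - bad)   # gradient of the log-likelihood
    w += LR * grad / len(pairs)

# At inference time, one simple use of the reward model is re-ranking:
# sample several candidate replies and keep the highest-scoring one.
candidates = rng.normal(size=(8, DIM))
best = max(candidates, key=reward)
print("learned weights ~ preference direction:", np.round(w, 2))
print("best candidate reward:", round(reward(best), 2))
```

Re-ranking is only the simplest use of such a model; in full RLHF the learned reward instead drives policy optimization of the dialogue agent itself.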

Accelerating Leadership In Quantum Information Sciences — Dr. Charles Tahan, Assistant Director for Quantum Information Science (QIS); Director, National Quantum Coordination Office, Office of Science and Technology Policy, The White House.


Dr. Charles Tahan is the Assistant Director for Quantum Information Science (QIS) and the Director of the National Quantum Coordination Office (NQCO) within the White House Office of Science and Technology Policy (https://www.quantum.gov/nqco/). The NQCO ensures coordination of the National Quantum Initiative (NQI) and QIS activities across the federal government, industry, and academia.

Dr. Tahan is on detail from the Laboratory for Physical Sciences (https://www.lps.umd.edu/), where he drove technical progress in the future of information technology as Technical Director. Research at LPS spans computing, communications, and sensing, from novel device physics to high-performance computer architectures. As a technical lead, Dr. Tahan stood up new research initiatives in silicon and superconducting quantum computing; quantum characterization, verification, and validation; and new and emerging qubit science and technology. As a practicing physicist, he is Chief of the intramural QIS research programs at LPS and works with students and postdocs from the University of Maryland-College Park to conduct original research in quantum information and device theory. His contributions have been recognized by the Researcher of the Year Award, the Presidential Early Career Award for Scientists and Engineers, election as a Fellow of the American Physical Society, and selection as an ODNI Science and Technology Fellow. He continues to serve as Chief Scientist of LPS.

Scientists said automation allowed them to evaluate a greater number of hypotheses, along with a greater number of the subtle variations that can be made to an experimental set-up. This had the effect of boosting the volume of data that needed checking, standardizing, and sharing.

Robots also needed to be “trained” to perform experiments previously carried out manually, and humans needed to develop new skills for preparing, repairing, and supervising the robots, all to ensure there were no errors in the scientific process.

Scientific work is often judged on output such as peer-reviewed publications and grants. However, the time taken to clean, troubleshoot, and supervise automated systems competes with the tasks traditionally rewarded in science. These less valued tasks may also be largely invisible, particularly to managers, who are unaware of the mundane work because they spend less time in the lab.

This article is an installment of Future Explored, a weekly guide to world-changing technology. You can get stories like this one straight to your inbox every Thursday morning by subscribing here.

In January, Microsoft unveiled an AI that can clone a speaker’s voice after hearing them talk for just three seconds. While this system, VALL-E, was far from the first voice cloning AI, its accuracy and need for such a small audio sample set a new bar for the tech.

Microsoft has now raised that bar again with an update called “VALL-E X,” which can clone a voice from a short sample (4 to 10 seconds) and then use it to synthesize speech in a different language, all while preserving the original speaker’s voice, emotion, and tone.
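For a sense of how such a system fits together, here is a deliberately simplified, runnable sketch of the token pipeline the article describes: a codec turns speech into discrete tokens, and a generator conditioned on a short speaker prompt produces speech tokens for text in another language. Every component below (the uniform quantizer, the stand-in acoustic_lm, the fake phonemes) is a naive placeholder invented for illustration; Microsoft has not released VALL-E X’s code or a public API.

```python
import numpy as np

def codec_encode(wav, levels=256):
    """Stand-in 'neural codec': uniformly quantize [-1, 1] audio to tokens."""
    tokens = ((wav + 1.0) / 2.0 * (levels - 1)).astype(int)
    return np.clip(tokens, 0, levels - 1)

def codec_decode(tokens, levels=256):
    """Inverse of the stand-in codec: tokens back to a waveform."""
    return tokens / (levels - 1) * 2.0 - 1.0

def acoustic_lm(phonemes, prompt_tokens):
    """Placeholder for the conditional language model. The real model would
    generate acoustic tokens that match the prompt speaker's voice; this toy
    merely reuses the prompt's token statistics, one 'frame' per phoneme."""
    mean, std = prompt_tokens.mean(), prompt_tokens.std() + 1e-6
    rng = np.random.default_rng(0)
    draw = rng.normal(mean, std, size=len(phonemes)).astype(int)
    return np.clip(draw, 0, 255)

# 1. Encode a short enrollment sample (5 s of synthetic 16 kHz audio,
#    standing in for the 4-10 s voice sample) into discrete tokens.
sample_wav = np.sin(np.linspace(0.0, 2000.0, 5 * 16000))
prompt = codec_encode(sample_wav)

# 2. Reduce the target-language text to (fake) phoneme symbols.
phonemes = list("bonjour le monde")

# 3. Generate speech tokens conditioned on text + speaker prompt, then decode.
out_wav = codec_decode(acoustic_lm(phonemes, prompt))
print(out_wav.shape)   # one toy audio sample per phoneme
```

The key design idea this shape illustrates is that cloning becomes a conditioning problem: because the speaker prompt and the output share one discrete token vocabulary, the same generative model can carry voice, emotion, and tone across languages.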

By Cheryl Gallagher, Cultural and Creative Content Specialist

In the news recently, the US Copyright Office partially rescinded copyright protection for a work containing exclusively AI-generated art. It was a landmark decision that is likely just the beginning of a long legal and ethical debate around the role, ethics, and rights of artificial intelligence in today’s global society, and tomorrow’s interplanetary one.

AI artworks are currently being denied copyright protection because copyright only protects human-generated work, and in the Copyright Office’s current opinion, the “artist” does not exert enough creative control over the program’s output: simply using a written prompt to generate an image does not constitute a copyrightable work, because the program, not the human, generated it. At least some AI-generated images are considered to involve enough human input to be copyrightable, but that requires working more directly with the imagery.