
Using sound to model the world

Imagine the booming chords from a pipe organ echoing through the cavernous sanctuary of a massive, stone cathedral.

The sound a cathedral-goer will hear is affected by many factors, including the location of the organ, where the listener is standing, whether any columns, pews, or other obstacles stand between them, what the walls are made of, the locations of windows or doorways, etc. Hearing a sound can help someone envision their environment.

Researchers at MIT and the MIT-IBM Watson AI Lab are exploring the use of spatial acoustic information to help machines better envision their environments, too. They developed a machine-learning model that can capture how any sound in a room will propagate through the space, enabling the model to simulate what a listener would hear at different locations.
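The idea can be sketched in a few lines. The sketch below is purely illustrative, not the MIT model: it stands in a toy `predict_rir` function (a hypothetical name) for the learned mapping from an emitter/listener position pair to a room impulse response, then convolves a dry source signal with that response to simulate what the listener would hear at that spot.

```python
import numpy as np

def predict_rir(emitter_xy, listener_xy, sr=16000, n_taps=512):
    """Toy stand-in for a learned spatial acoustic model: returns a room
    impulse response (RIR) with a single direct-path arrival whose delay
    grows with emitter-listener distance (speed of sound ~343 m/s)."""
    dist = np.linalg.norm(np.asarray(listener_xy) - np.asarray(emitter_xy))
    delay = int(sr * dist / 343.0)          # direct-path delay in samples
    rir = np.zeros(n_taps)
    if delay < n_taps:
        rir[delay] = 1.0 / max(dist, 1e-3)  # inverse-distance attenuation
    return rir

def simulate_listener(dry_signal, emitter_xy, listener_xy):
    """Render a dry source signal as it would be heard at listener_xy:
    convolve the signal with the predicted impulse response."""
    return np.convolve(dry_signal, predict_rir(emitter_xy, listener_xy))

# Example: a unit impulse played at the organ, heard 10 m away.
organ, seat = (0.0, 0.0), (10.0, 0.0)
heard = simulate_listener(np.array([1.0]), organ, seat)
```

In a real system the hand-written `predict_rir` would be replaced by the trained network; the convolution step that turns an impulse response into audible output is standard.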

US Air Force and MIT commission a lead AI pilot for their innovative project

The project, known as DAF-MIT AI Accelerator, selected a pilot out of over 1,400 applicants.

The United States Department of the Air Force (DAF) and the Massachusetts Institute of Technology (MIT) commissioned their lead AI pilot — a training program that uses artificial intelligence — in October 2022. The project draws on the expertise of MIT and the Department of the Air Force to research how AI algorithms can advance the DAF and national security.

The military department and the university created an artificial intelligence project called the Department of the Air Force-Massachusetts Institute of Technology Artificial Intelligence Accelerator (DAF-MIT AI Accelerator).


The project and the pilot

A prototype of the project was established by executive order in 2019, and various strategies were put in place in 2020. The collective team, known as the DAF-MIT AI Accelerator, commissioned its lead AI pilot last month. “In this pilot, [the cohort] will gain a practical grounding in AI and its business applications helping you transform your organizations into the workforce of the future,” said Major John Radovan, deputy director of the AIA.

Researchers use lasers to trick autonomous cars and remove pedestrians from view

Software upgrades could help resolve the issue.

A collaboration of researchers from the U.S. and Japan has demonstrated that a laser attack can be used to blind autonomous cars and delete pedestrians from their view, endangering anyone in the vehicle’s path, according to a press release.

Autonomous, or self-driving, cars rely on a spinning laser-based sensing system called LIDAR to perceive their surroundings. Short for Light Detection and Ranging, the system emits laser pulses and captures their reflections to determine the distances between the vehicle and the obstacles in its path.
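The ranging step described above is a time-of-flight calculation: distance is inferred from how long a pulse takes to travel out and back, so the round trip is halved. A minimal sketch of that arithmetic (illustrative only; real LIDAR units also handle scan angles, noise, and multiple returns):

```python
SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def tof_distance(round_trip_seconds):
    """Distance to the obstacle that reflected the pulse, in meters.
    Divide by 2 because the pulse travels to the obstacle and back."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# A reflection arriving 200 nanoseconds after emission puts the
# obstacle roughly 30 meters away.
d = tof_distance(200e-9)
```

Because the sensor trusts any light arriving at the right wavelength and timing, an attacker who injects carefully timed laser pulses can create phantom returns or mask real ones, which is the weakness the researchers exploited.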

Most advanced autonomous cars today rely on this system to steer around obstacles in their path. However, the collaboration of researchers from the University of Florida, the University of Michigan, and the University of Electro-Communications in Japan showed the system can be tricked with a fairly basic laser setup.

Tracking trust in human-robot work interactions

The future of work is here.

As industries begin to see humans working closely with robots, there’s a need to ensure that the relationship is effective, smooth and beneficial to humans. Robot trustworthiness and humans’ willingness to trust robots are vital to this working relationship. However, capturing human trust levels can be difficult due to subjectivity, a challenge that researchers in the Wm Michael Barnes ‘64 Department of Industrial and Systems Engineering at Texas A&M University aim to solve.

Dr. Ranjana Mehta, associate professor and director of the NeuroErgonomics Lab, said her lab’s human-autonomy trust research stemmed from a series of projects on human-robot interactions in safety-critical work domains.

Artificial intelligence discovers life-changing drug and human trials have begun

ARTIFICIAL intelligence has discovered a new life-changing drug and human trials are already underway.

The biotech company behind the breakthrough has dosed its first patient with an AI-developed treatment for ALS.

Alice Zhang, 33, is the founder of Verge Genomics and a former neuroscience doctoral student at University of California.

Meta’s newest AI determines proper protein folds 60 times faster

Life on Earth would not exist as we know it, if not for the protein molecules that enable critical processes from photosynthesis and enzymatic degradation to sight and our immune system. And like most facets of the natural world, humanity has only just begun to discover the multitudes of protein types that actually exist. But rather than scour the most inhospitable parts of the planet in search of novel microorganisms that might harbor a new flavor of organic molecule, Meta researchers have developed a first-of-its-kind metagenomic database, the ESM Metagenomic Atlas, that could accelerate existing protein-folding AI performance by 60x.

Metagenomics shares a prefix with Meta only by coincidence. It is a relatively new, but very real, scientific discipline that studies “the structure and function of entire nucleotide sequences isolated and analyzed from all the organisms (typically microbes) in a bulk sample.” Often used to identify the bacterial communities living on our skin or in the soil, these techniques are similar in function to gas chromatography, wherein you’re trying to identify what’s present in a given sample system.

Similar databases have been launched by the NCBI, the European Bioinformatics Institute, and the Joint Genome Institute, and have already cataloged billions of newly uncovered protein sequences. What Meta is bringing to the table is “a new protein-folding approach that harnesses large language models to create the first comprehensive view of the structures of proteins in a metagenomics database at the scale of hundreds of millions of proteins,” according to a Tuesday release from the company. The problem is that, while advances in genomics have revealed the sequences of slews of novel proteins, knowing those sequences doesn’t actually tell us how they fit together into a functioning molecule, and figuring that out experimentally takes anywhere from a few months to a few years. Per molecule. Ain’t nobody got time for that.

A system that allows users to communicate with others remotely while embodying a humanoid robot

Recent technological advancements are opening new and exciting opportunities for communicating with others and visiting places remotely. These advancements include telepresence robots, moving robotic systems that allow users to virtually navigate remote environments and interact with people in these environments.

Researchers at Hanyang University and Duksung Women’s University in South Korea have recently developed a promising telepresence system based on a humanoid robot, a head-mounted display, a motion transporter, a voice transporter, and a vision transporter system.

This system, introduced in a paper published in the International Journal of Social Robotics, allows users to take full-body ownership of a humanoid robot’s body, thus accessing remote environments and interacting with both humans and objects in these environments as if they were physically there.