
Borrowing from methods used to produce optical fibers, researchers from EPFL and Imperial College have created fiber-based soft robots with advanced motion control that integrate other functionalities, such as electric and optical sensing and targeted delivery of fluids.

In recent decades, catheter-based surgery has transformed medicine, giving doctors a minimally invasive way to do anything from placing stents and targeting tumors to extracting tissue samples and delivering contrast agents for medical imaging. While today’s catheters are highly engineered robotic devices, in most cases, the task of pushing them through the body to the site of intervention continues to be a manual and time-consuming procedure.

Combining advances in the development of functional fibers with developments in smart robotics, researchers from the Laboratory of Photonic Materials and Fiber Devices in EPFL’s School of Engineering have created multifunctional catheter-shaped soft robots that could be remotely guided to their destination or possibly even find their own way under semi-autonomous control. “This is the first time that we can generate soft catheter-like structures at such scalability that can integrate complex functionalities and be steered, potentially, inside the body,” says Fabien Sorin, the study’s principal investigator. Their work was published in the journal Advanced Science.

The West Japan Rail Company, also known as JR West, has unveiled its giant worker robot that can be tasked to carry out jobs that are considered risky for humans, New Atlas reported.

【News Release】 To improve productivity and safety, we are jointly developing, with Jinki Ittai Co., Ltd. (人機一体) and Nippon Signal Co., Ltd. (日本信号), a multifunctional heavy railway machine that fuses a humanoid heavy-machinery robot with a railway construction vehicle.

See the link for details: https://www.westjr.co.jp/press/article/items/220415_01_robot.pdf pic.twitter.com/FBVjIe1xCC — JR West News (official) (@news_jrwest) April 15, 2022

With the help of AI, researchers at Chalmers University of Technology, Sweden, have succeeded in designing synthetic DNA that controls the cells’ protein production. The technology can contribute to the development and production of vaccines, drugs for severe diseases, as well as alternative food proteins much faster and at significantly lower costs than today.

How our genes are expressed is a process that is fundamental to the functionality of cells in all living organisms. Simply put, the genetic code in DNA is transcribed to the molecule messenger RNA (mRNA), which tells the cell’s factory which protein to produce and in which quantities.

Researchers have put a lot of effort into trying to control gene expression because it can, among other things, contribute to the development of protein-based drugs. A recent example is the mRNA vaccine against Covid-19, which instructed the body’s cells to produce the same protein found on the surface of the coronavirus. The body’s immune system could then learn to form antibodies against the virus. Likewise, it is possible to teach the body’s immune system to defeat cancer cells or other complex diseases if one understands the genetic code behind the production of specific proteins.

Most of today’s new drugs are protein-based, but the techniques for producing them are both expensive and slow, because it is difficult to control how the DNA is expressed. Last year, a research group at Chalmers, led by Aleksej Zelezniak, Associate Professor of Systems Biology, took an important step in understanding and controlling how much of a protein is made from a certain DNA sequence.

“First it was about being able to fully ‘read’ the DNA molecule’s instructions. Now we have succeeded in designing our own DNA that contains the exact instructions to control the quantity of a specific protein,” says Aleksej Zelezniak about the research group’s latest important breakthrough.

The principle behind the new method is similar to an AI generating faces that look like real people. By learning what a large selection of faces looks like, the AI can create completely new but natural-looking faces, and it is then easy to modify a face by, for example, saying that it should look older or have a different hairstyle. Programming a believable face from scratch, without AI, would be far more difficult and time-consuming. Similarly, the researchers’ AI has been taught the structure and regulatory code of DNA; it then designs synthetic DNA whose regulatory information can easily be modified in the desired direction of gene expression.
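To make the design idea concrete, here is a minimal, purely illustrative sketch, not the Chalmers model: a toy GC-content score stands in for the learned expression predictor, and a simple greedy search edits a random sequence toward a target expression level. All names and the scoring rule are invented for illustration.

```python
import random

NUCLEOTIDES = "ACGT"

def toy_expression_score(seq):
    """Toy surrogate for predicted expression: the GC fraction of the sequence.
    (Purely illustrative; the real predictor is a learned neural network.)"""
    return sum(1 for base in seq if base in "GC") / len(seq)

def design_sequence(target, length=50, steps=2000, seed=0):
    """Greedy search: mutate one base at a time and keep any change that does
    not move the toy score further from the target expression level."""
    rng = random.Random(seed)
    seq = [rng.choice(NUCLEOTIDES) for _ in range(length)]
    best_err = abs(toy_expression_score(seq) - target)
    for _ in range(steps):
        i = rng.randrange(length)
        old = seq[i]
        seq[i] = rng.choice(NUCLEOTIDES)
        err = abs(toy_expression_score(seq) - target)
        if err <= best_err:
            best_err = err      # keep the (non-worsening) mutation
        else:
            seq[i] = old        # revert
    return "".join(seq)
```

The actual system replaces both pieces with learned models: a generative network proposes sequences directly, rather than searching base by base.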

Pests destroy up to 40% of the world’s crops each year, causing $220 billion in economic losses, according to the UN Food and Agriculture Organization (FAO). Trapview is harnessing the power of AI to help tackle the problem.

The Slovenian company has developed a device that traps and identifies pests, and acts as an advance warning system by predicting how they will spread.

“We’ve built the biggest database of pictures of insects in the world, which allows us to really use modern AI-based computing vision in the most optimal way,” says Matej Štefančič, CEO of Trapview and parent company EFOS.
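As a toy illustration of the advance-warning idea only — Trapview’s real system uses AI-based computer vision and spread prediction — an alert could be triggered when trap catches stay high over a rolling window. The function and threshold below are hypothetical.

```python
def advance_warning(daily_counts, threshold=20, window=3):
    """Hypothetical alert rule: warn when the rolling mean of daily trap
    catches meets or exceeds a threshold over a window of days."""
    for i in range(len(daily_counts) - window + 1):
        if sum(daily_counts[i:i + window]) / window >= threshold:
            return True
    return False
```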

DeepMind introduces a new research framework for AI agents in simulated environments such as video games that can interact more flexibly and naturally with humans.

AI systems have achieved great success in video games such as Dota or StarCraft, defeating professional human players. This is made possible by precise reward functions tuned to optimize game outcomes: agents were trained on unambiguous wins and losses computed by code. Where such reward functions exist, AI agents can sometimes achieve superhuman performance.

But often – especially for everyday human behaviors with open-ended outcomes – there is no such precise reward function.
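The contrast can be sketched in a few lines: a game’s outcome reward is trivially computable from the final state, while an everyday instruction like “stack the blocks the way I meant” has no such function. The state format below is hypothetical.

```python
def game_reward(final_state):
    """Precise, code-computable reward of the kind used to train game agents:
    +1 for a win, -1 for a loss, 0 otherwise. (Hypothetical state format.)"""
    if final_state["winner"] == "agent":
        return 1.0
    if final_state["winner"] == "opponent":
        return -1.0
    return 0.0

# For open-ended human instructions there is no analogous function to write:
# success depends on human judgment, which is why the framework turns to
# interaction with humans rather than a hand-coded reward.
```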

Originally published on Towards AI, the World’s Leading AI and Technology News and Media Company.

The model is able to transfer knowledge between a simulated environment and real-world settings.

Researchers at MIT’s Center for Bits and Atoms are working on an ambitious project, designing robots that effectively self-assemble. The team admits that the goal of an autonomous self-building robot is still “years away,” but the work has thus far demonstrated positive results.

At the system’s center are voxels (a term borrowed from computer graphics), which carry power and data that can be shared between pieces. The pieces form the foundation of the robot, grabbing and attaching additional voxels before moving across the grid for further assembly.
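A rough sketch of the lattice idea, under the simplifying assumption that voxels attach to a cubic grid and inherit power from an already-powered neighbor — the class names and rules here are invented for illustration, not taken from the MIT system.

```python
from dataclasses import dataclass

@dataclass
class Voxel:
    pos: tuple             # (x, y, z) lattice coordinate
    powered: bool = False  # whether power/data has propagated to this voxel

class Assembly:
    """Toy voxel lattice: new voxels may only attach at positions adjacent
    to the existing structure, sharing power from a powered neighbor."""

    def __init__(self, seed_pos=(0, 0, 0)):
        self.grid = {seed_pos: Voxel(seed_pos, powered=True)}

    def neighbors(self, pos):
        x, y, z = pos
        return [(x + 1, y, z), (x - 1, y, z), (x, y + 1, z),
                (x, y - 1, z), (x, y, z + 1), (x, y, z - 1)]

    def can_attach(self, pos):
        return pos not in self.grid and any(n in self.grid
                                            for n in self.neighbors(pos))

    def attach(self, pos):
        if not self.can_attach(pos):
            raise ValueError("voxel must attach adjacent to the structure")
        # power and data propagate from an adjacent, already-powered voxel
        powered = any(self.grid[n].powered
                      for n in self.neighbors(pos) if n in self.grid)
        self.grid[pos] = Voxel(pos, powered=powered)
```

A builder robot in this sketch would repeatedly call `attach` at frontier positions, extending the very structure it stands on.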

The researchers note in an associated paper published in Nature, “Our approach challenges the convention that larger constructions need larger machines to build them, and could be applied in areas that today either require substantial capital investments for fixed infrastructure or are altogether unfeasible.”