
Materials scientists aim to develop biomimetic soft robotic crawlers, including earthworm-like and inchworm-like crawlers, to realize locomotion via in-plane and out-of-plane contractions for a variety of engineering applications. While such devices can show effective motion in confined spaces, the concept is challenging to miniaturize due to complex and limited actuation. In a new report now published in Science Advances, Qiji Ze and a team of scientists in mechanical engineering and aerospace engineering at Stanford University and the Ohio State University, U.S., described a magnetically actuated, small-scale origami crawler exhibiting in-plane contraction. The team achieved the contraction mechanism via a four-unit Kresling origami assembly to facilitate the motion of an untethered robot with crawling and steering capabilities. The crawler overcame large resistances in severely confined spaces due to its magnetically tunable structural stiffness and anisotropy. The setup also provided a mechanism for drug storage and release, with potential to serve as a minimally invasive device in biomedicine.

Navigating complicated terrains

Bioinspired crawling motion adapts well to complicated terrains due to its soft, deformable nature. Researchers aim to engineer crawling for a variety of applications in limited or confined environments, including extraterrestrial exploration, tube inspection, and gastrointestinal endoscopy. Origami provides an appropriate method to generate contraction through structural folding, which can be adapted to engineer robotic crawlers. The team described Kresling patterns, a specific type of bioinspired origami pattern used to generate axial contraction under torque or compressive force, coupled with a twist from the relative rotation of the device units. Ze et al. demonstrated a magnetically actuated, small-scale origami crawler that produces effective in-plane crawling motions. The scientists developed a four-unit Kresling assembly and used finite element analysis to verify the torque distribution on the crawler that induces the motion.

When artificial intelligence systems encounter scenes where objects are not fully visible, they have to make estimations based only on the visible parts of the objects. This partial information leads to detection errors, and large amounts of training data are required to correctly recognize such scenes. Now, researchers at the Gwangju Institute of Science and Technology have developed a framework that allows robot vision to detect such objects successfully, in much the same way that we perceive them.

Robotic vision has come a long way, reaching a level of sophistication with applications in complex and demanding tasks, such as autonomous driving and object manipulation. However, it still struggles to identify individual objects in cluttered scenes where some objects are partially or completely hidden behind others. Typically, when dealing with such scenes, systems are trained to identify the occluded object based only on its visible parts. But such training requires large datasets of objects and can be pretty tedious.

Associate Professor Kyoobin Lee and Ph.D. student Seunghyeok Back from the Gwangju Institute of Science and Technology (GIST) in Korea found themselves facing this problem when they were developing an artificial intelligence system to identify and sort objects in cluttered scenes. “We expect a robot to recognize and manipulate objects they have not encountered before or been trained to recognize. In reality, however, we need to manually collect and label data one by one as the generalizability of deep neural networks depends highly on the quality and quantity of the training dataset,” says Mr. Back.
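The GIST framework itself is not given in code in the article, but the underlying amodal idea can be sketched in a few lines: given an image and a mask of an object's visible region, a network is trained to predict the amodal mask covering the full object, including its hidden parts. Everything below (the toy model, dummy data, and names) is a hypothetical placeholder to make the concept concrete, not the researchers' implementation.

```python
# Toy sketch of amodal mask prediction (hypothetical, not the GIST framework):
# the network sees an RGB crop plus a mask of the *visible* region of an
# occluded object, and learns to predict the *amodal* mask of the full object.
import torch
import torch.nn as nn

class AmodalMaskHead(nn.Module):
    """Tiny encoder-decoder mapping (RGB image + visible mask) -> amodal mask logits."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 16, 3, padding=1), nn.ReLU(),   # 3 RGB channels + 1 visible-mask channel
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 1),                          # per-pixel amodal-mask logit
        )

    def forward(self, image, visible_mask):
        x = torch.cat([image, visible_mask], dim=1)
        return self.net(x)

# One training step on dummy tensors; real amodal ground truth comes from data
# where the full object extent is known before occlusion.
model = AmodalMaskHead()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

image = torch.rand(2, 3, 64, 64)                         # dummy RGB crops
visible = (torch.rand(2, 1, 64, 64) > 0.7).float()       # dummy visible-region masks
amodal_gt = (torch.rand(2, 1, 64, 64) > 0.5).float()     # dummy full-object masks

optimizer.zero_grad()
loss = loss_fn(model(image, visible), amodal_gt)
loss.backward()
optimizer.step()
print(f"toy training loss: {loss.item():.3f}")
```

Collecting such amodal labels by hand is exactly the kind of manual annotation burden the researchers describe wanting to avoid.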

Phishing attacks are cyber-attacks in which criminals trick users into sending them money or sensitive information, or into installing malware on their computers, by sending them deceptive emails or messages. As these attacks have become increasingly widespread, developers have been trying to build more advanced tools to detect them and protect potential victims.

Researchers at Monash University and CSIRO’s Data61 in Australia have recently developed a machine learning-based approach that could help users identify phishing emails, so that they don’t inadvertently install malware or send sensitive data to cyber-criminals. This model was introduced in a paper pre-published on arXiv and set to be presented at AsiaCCS 2022, a cyber-security conference.

“We have identified a gap in current phishing research, namely realizing that existing literature focuses on rigorous ‘black and white’ methods to classify whether something is a phishing email or not,” Tingmin (Tina) Wu, one of the researchers who carried out the study, told TechXplore.
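To make that gap concrete, the sketch below shows what a conventional “black and white” phishing classifier looks like: a model that maps each email to a hard phishing/legitimate label. It is a generic illustrative baseline, assuming scikit-learn with TF-IDF features and logistic regression, and is not the method proposed in the Monash/Data61 paper.

```python
# Minimal sketch of a conventional binary ("black and white") phishing
# classifier, for illustration only; data and model choices are toy examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

emails = [
    "Your account has been suspended, verify your password here immediately",
    "Reminder: team meeting moved to 3pm tomorrow in room 204",
    "You have won a prize, click this link to claim your reward now",
    "Attached is the quarterly report you asked for last week",
]
labels = [1, 0, 1, 0]  # 1 = phishing, 0 = legitimate (toy labels)

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(emails, labels)

# A hard 0/1 decision hides how uncertain the model is about borderline emails.
test = ["Please confirm your banking details via this link"]
print(clf.predict(test))         # hard label
print(clf.predict_proba(test))   # underlying probabilities
```

The quoted gap refers to exactly this kind of hard binary decision, which gives users little sense of how confident the classifier actually is.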

Safety avionics specialist Iris Automation has added a meteorological enhancement to its Casia G ground-based surveillance system by integrating TruWeather Solutions sensors and services – a move aimed at adding weather awareness to the company’s aerial detect-and-avoid protection.

The addition of a precision weather utility was a natural step in Iris Automation’s wider objective of ensuring flight safety of, and between, crewed aircraft and drones. The company says local micro-weather and low-altitude atmospheric conditions often differ considerably from those at higher levels. That differential creates a larger degree of weather uncertainty for aerial service providers, who weigh safety factors heavily when deciding whether or not to fly as planned.

Remote work is expanding into many other areas besides office work. Robots and remote-control technology make a greater range of tasks possible, from stocking convenience stores, to operating heavy machinery and even serving as a labor force in space. A key advantage of remote-controlled robots is that they do not require the kind of complex programming found in automated robots, such as industrial robots that work in factories. This means that remote-controlled robots are more flexible, easily adapting to work that cannot be programmed. Greater use of this technology can allow robots to take over dangerous and exhausting work, subsequently helping to deal with labor shortages and improve work environments. In this episode, we’ll look at the forefront of remote robotics, and see examples of how this technology could transform work.

[J-Innovators]

A muscle suit for back protection.

Elon Musk talks to Chris Anderson, head and curator of the TED media organisation, about the challenges facing humanity in the coming decades – and why we should be more optimistic.

They discuss climate change, clean energy, electric vehicles, the rise of AI and robotics, brain-computer interfaces, self-driving cars, the revolutionary potential of reusable rockets and the forthcoming missions to Mars, as well as the other projects he is working on.

Musk, who has an estimated net worth of $273 billion, provides insight into his work ethos and his status as the world’s richest man. He also discusses the accuracy of, and the thinking behind, his predictions about the future.

Imagine a future in which you could 3D-print an entire robot or stretchy, electronic medical device with the press of a button—no tedious hours spent assembling parts by hand.

That possibility may be closer than ever thanks to a recent advancement in 3D-printing technology led by engineers at CU Boulder. In a new study, the team lays out a strategy for using currently available printers to create materials that meld solid and liquid components—a tricky feat if you don’t want your robot to collapse.

“I think there’s a future where we could, for example, fabricate a complete system like a robot using this process,” said Robert MacCurdy, senior author of the study and assistant professor in the Paul M. Rady Department of Mechanical Engineering.

Remote robotic-assisted surgery is far from new; over the years, various educational and research institutions have developed machines that doctors can control from other locations. There hasn’t been much movement on that front when it comes to endovascular treatments for stroke patients, however, which is why a team of MIT engineers has spent the past few years developing a telerobotic system for surgeons. The team, which has published its paper in Science Robotics, has now presented a robotic arm that doctors can control remotely using a modified joystick to treat stroke patients.

That arm has a magnet attached to its wrist, and surgeons can adjust its orientation to guide a magnetic wire through the patient’s arteries and vessels in order to remove blood clots in the brain. As with in-person procedures, surgeons have to rely on live imaging to reach the blood clot, except the machine allows them to treat patients who are not physically in the room with them.
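The article does not detail the control loop, but the basic idea can be sketched as a mapping from joystick deflection to the orientation of the wrist-mounted magnet, whose field steers the tip of the magnetic guidewire. The function names, angle limits, and axis conventions below are hypothetical, offered as a minimal illustration rather than the MIT team’s actual implementation.

```python
# Hypothetical sketch: map two joystick axes to the commanded yaw/pitch of the
# wrist-mounted magnet, then compute the unit vector along its dipole axis.
import math

def joystick_to_magnet_orientation(jx, jy, max_angle_deg=30.0):
    """Map joystick deflections in [-1, 1] to magnet yaw/pitch angles (degrees)."""
    jx = max(-1.0, min(1.0, jx))  # clamp deflections to the valid range
    jy = max(-1.0, min(1.0, jy))
    return jx * max_angle_deg, jy * max_angle_deg

def magnet_axis(yaw_deg, pitch_deg):
    """Unit vector along the magnet's dipole axis for the commanded yaw/pitch."""
    yaw, pitch = math.radians(yaw_deg), math.radians(pitch_deg)
    return (
        math.cos(pitch) * math.cos(yaw),
        math.cos(pitch) * math.sin(yaw),
        math.sin(pitch),
    )

yaw, pitch = joystick_to_magnet_orientation(0.4, -0.2)
print(f"commanded yaw={yaw:.1f} deg, pitch={pitch:.1f} deg, axis={magnet_axis(yaw, pitch)}")
```

In practice such a mapping would presumably run closed-loop, with the surgeon correcting the commands against the live imaging described above.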

There’s a critical window of time after a stroke’s onset during which endovascular treatment should be administered to save a patient’s life or preserve their brain function. The problem is that the procedure is quite complex and takes years to master: it involves guiding a thin wire through vessels and arteries without damaging any of them. Neurosurgeons trained in the procedure are usually found in major hospitals, and patients in remote locations who have to be transported to these larger centers might miss that critical time window. With this machine, surgeons can be anywhere and still perform the procedure. Another upside? It minimizes the doctors’ exposure to radiation from X-ray imaging.