
To navigate real-world environments effectively, legged robots should be able to move swiftly and freely while maintaining their balance. This is particularly true for humanoid robots: robots with two legs and a human-like body structure.

Building robots that are stable on their legs while walking can be challenging. In fact, legged robots typically have unstable dynamics due to their pendulum-like structure.
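As a quick illustration (mine, not the researchers'), the canonical model behind that instability is the linearized inverted pendulum: its state matrix has a positive eigenvalue, so the upright pose diverges without active control. The gravity and length values below are illustrative assumptions.

```python
# A minimal sketch: the linearized inverted pendulum, a standard
# stand-in for legged-robot balance. A positive eigenvalue of the
# state matrix confirms the upright equilibrium is unstable unless
# a controller actively intervenes.
import numpy as np

g, l = 9.81, 1.0  # gravity (m/s^2) and pendulum length (m), assumed values

# State x = [angle, angular velocity]; near upright, x' = A @ x.
A = np.array([[0.0, 1.0],
              [g / l, 0.0]])

eigenvalues = np.linalg.eigvals(A)
print(eigenvalues)  # one eigenvalue is +sqrt(g/l) > 0 -> unstable
```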

Researchers at Hong Kong University of Science and Technology recently developed a computer vision-based robotic foot with tactile sensing capabilities. When integrated at the end of a robot's legs, the artificial foot can improve the robot's balance and stability during locomotion.

Yes, but they won't be trusted until 2035.


Current trends in AI use in healthcare lead me to posit that this market will grow significantly in the coming years. So, should leaders in healthcare expect the emergence of a fully automated electronic physician, sonographer or surgeon as a replacement for the human healthcare professional? Can the development of AI in healthcare help overcome the difficulties the industry faces today? To answer these questions, I would like to analyze the current challenges of using AI in healthcare.

Let’s discuss two promising examples: the application of AI in diagnosis and reading images, and the use of robotic systems in surgery.

Diagnostic Robots: Accuracy And Use For Treatment Recommendations

Protocol to reverse engineer Hamiltonian models advances automation of quantum devices.

Scientists from the University of Bristol's Quantum Engineering Technology Labs (QETLabs) have developed an algorithm that provides valuable insights into the physics underlying quantum systems — paving the way for significant advances in quantum computation and sensing, and potentially turning a new page in scientific investigation.

In physics, systems of particles and their evolution are described by mathematical models, requiring the successful interplay of theoretical arguments and experimental verification. Even more complex is the description of systems of particles interacting with each other at the quantum mechanical level, which is often done using a Hamiltonian model. The process of formulating Hamiltonian models from observations is made even harder by the nature of quantum states, which collapse when attempts are made to inspect them.
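To make the term concrete, here is a minimal sketch (my illustration, not QETLabs' algorithm) of what a Hamiltonian model looks like in code: a single-qubit Hamiltonian and the unitary time evolution it generates, taking ħ = 1 and an assumed transition frequency.

```python
# A toy Hamiltonian model: H = (omega/2) * sigma_z for one qubit,
# with unitary time evolution U(t) = exp(-i H t), hbar = 1.
import numpy as np
from scipy.linalg import expm

omega = 1.0                                    # transition frequency (assumed)
sigma_z = np.array([[1.0, 0.0], [0.0, -1.0]])  # Pauli-Z matrix
H = 0.5 * omega * sigma_z                      # the Hamiltonian model

t = np.pi                                      # evolve for time t
U = expm(-1j * H * t)                          # unitary evolution operator

state = np.array([1.0, 1.0]) / np.sqrt(2)      # |+> superposition state
print(U @ state)                               # the evolved quantum state
```

Reverse engineering a Hamiltonian, as in the Bristol work, means inferring a matrix like H above from measurement outcomes rather than writing it down by hand.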

SAN FRANCISCO — Kleos Space is conducting a six-month test of technology for in-space manufacturing of large 3D carbon fiber structures that could be used to construct solar arrays, star shades and interferometry antennas.

The company, which has operations in Luxembourg, the United States and the United Kingdom, is best known for radio-frequency reconnaissance satellites. In the background, however, Kleos has been designing and developing an in-space manufacturing technology called Futrism to robotically produce a carbon-fiber I-beam, more than 100 meters long, with embedded fiber-optic cables.

“It’s something that we have linked to our roadmap for RF, because it’s something that could deploy very large antennas for RF reconnaissance,” Kleos CEO Andy Bowyer told SpaceNews. “However, it’s useful for a whole range of other applications as well that we are very keen to work with partners on. We firmly believe that manufacturing in space is the future.”

Machine learning is capable of doing all sorts of things as long as you have the data to teach it how. That's not always easy, and researchers are constantly looking for ways to add a bit of "common sense" to AI so you don't have to show it 500 pictures of a cat before it gets it. Facebook's newest research takes a big step toward reducing the data bottleneck.

The company’s formidable AI research division has been working for years now on how to advance and scale things like advanced computer vision algorithms, and has made steady progress, generally shared with the rest of the research community. One interesting development Facebook has pursued in particular is what’s called “semi-supervised learning.”

Generally, when you think of training an AI, you think of something like the aforementioned 500 pictures of cats — images that have been selected and labeled (which can mean outlining the cat, putting a box around the cat or just saying there's a cat in there somewhere) so that the machine learning system can put together an algorithm to automate the process of cat recognition. Naturally, if you want to do dogs or horses, you need 500 dog pictures, 500 horse pictures, etc. — it scales linearly, which is a word you never want to see in tech.
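For contrast, here is a minimal sketch of the semi-supervised idea using scikit-learn's self-training classifier. The synthetic dataset, the 5% label fraction and the confidence threshold are illustrative assumptions; this is not Facebook's actual method.

```python
# Semi-supervised learning via self-training: fit on the few labeled
# points, then iteratively pseudo-label confident unlabeled points
# and refit, so most examples never need a human label.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.semi_supervised import SelfTrainingClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# Pretend only ~5% of the labels are known; mark the rest as -1 (unlabeled).
rng = np.random.default_rng(0)
y_partial = y.copy()
y_partial[rng.random(len(y)) > 0.05] = -1

clf = SelfTrainingClassifier(LogisticRegression(), threshold=0.9)
clf.fit(X, y_partial)

print(f"Accuracy on all points: {accuracy_score(y, clf.predict(X)):.3f}")
```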

Still calling 2025 for the debut of a robotic set of human-level hands.


Although robotic devices are used in everything from assembly lines to medicine, engineers have a hard time accounting for the friction that occurs when those robots grip objects – particularly in wet environments. Researchers have now discovered a new law of physics that accounts for this type of friction, which should advance a wide range of robotic technologies.

“Our work here opens the door to creating more reliable and functional haptic and robotic devices in applications such as telesurgery and manufacturing,” says Lilian Hsiao, an assistant professor of chemical and biomolecular engineering at North Carolina State University and corresponding author of a paper on the work.

At issue is something called elastohydrodynamic lubrication (EHL) friction, which is the friction that occurs when two solid surfaces come into contact with a thin layer of fluid between them. This would include the friction that occurs when you rub your fingertips together, with the fluid being the thin layer of naturally occurring oil on your skin. But it could also apply to a robotic claw lifting an object that has been coated with oil, or to a surgical device that is being used inside the human body.
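The excerpt does not state the new law itself, but as textbook background (not the researchers' result), lubrication regimes are commonly characterized by the dimensionless Hersey number,

\[
\mathrm{He} = \frac{\eta \, v}{P},
\]

where \(\eta\) is the fluid viscosity, \(v\) the sliding speed, and \(P\) the normal load per unit length. EHL friction corresponds to the regime in which this number is large enough that a thin fluid film fully separates the two surfaces.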


A seabed mining robot being tested on the Pacific Ocean floor at a depth of more than 4 km (13,000 ft) has become detached, the Belgian company running the experimental trial said on Wednesday.

Global Sea Mineral Resources (GSR), the deep-sea exploratory division of dredging company DEME Group, has been testing Patania II, a 25-tonne mining robot prototype, in its concession in the Clarion Clipperton Zone since April 20.

The machine is meant to collect the potato-sized nodules rich in cobalt and other battery metals that pepper the seabed in this area, and was connected to GSR's ship with a 5 km cable.

The project is a part of a much wider effort to bring artificial intelligence into the operating room. Using many of the same technologies that underpin self-driving cars, autonomous drones and warehouse robots, researchers are working to automate surgical robots too. These methods are still a long way from everyday use, but progress is accelerating.


Real scalpels, artificial intelligence — what could go wrong?

Leading industrial companies are using artificial intelligence to analyze data from their manufacturing tracking systems to spot the causes of potential defects in real time.

Robert Bosch GmbH is one of the latest to deploy AI to analyze data from its manufacturing execution systems, as the monitoring and tracking systems are called. General Electric Co. and Siemens AG have already deployed such systems.
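As a rough sketch of what such defect-spotting can look like (my assumption of one common approach, not any of these companies' actual systems), anomaly detection over process readings from an MES is a typical starting point. The feature choices and values below are invented for illustration.

```python
# Flag potentially defective parts by training an anomaly detector
# on sensor readings from known-good production runs, then scoring
# new parts; -1 marks process data that looks abnormal.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Simulated MES features per part: temperature, pressure, cycle time.
normal = rng.normal([200.0, 5.0, 30.0], [2.0, 0.1, 0.5], size=(500, 3))
defect = rng.normal([215.0, 5.8, 34.0], [2.0, 0.1, 0.5], size=(5, 3))

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal)

print(model.predict(defect))  # -1 entries are flagged for inspection
```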