
AI image generators, which create fantastical sights at the intersection of dreams and reality, are bubbling up in every corner of the web. Their entertainment value is demonstrated by an ever-expanding treasure trove of whimsical and random images serving as indirect portals to the brains of human designers. A simple text prompt yields a nearly instantaneous image, satisfying our primitive brains, which are hardwired for instant gratification.

Although seemingly nascent, the field of AI-generated art can be traced back as far as the 1960s, when early systems used symbolic rule-based approaches to generate technical images. While the models that untangle and parse words have grown increasingly sophisticated, the explosion of generative art has sparked debate around copyright, disinformation, and bias, all mired in hype and controversy.

Yilun Du, a Ph.D. student in the Department of Electrical Engineering and Computer Science and affiliate of MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), recently developed a new method that makes models like DALL-E 2 more creative and gives them better scene understanding. Here, Du describes how these models work, whether this technical infrastructure can be applied to other domains, and how we draw the line between AI and human creativity.

Over the last three decades, the digital world that we access through smartphones and computers has grown so rich and detailed that much of our physical world has a corresponding life in this digital reality. Today, the physical and digital realities are on a steady course to merging, as robots, Augmented Reality (AR) and wearable digital devices enter our physical world, and physical items get their digital twin computer representations in the digital world.

These digital twins can be uniquely identified and protected from manipulation thanks to crypto technologies like blockchains. The trust that these technologies provide is extremely powerful, helping to fight counterfeiting, increase supply chain transparency, and enable the circular economy. However, a weak point remains: there is no versatile, generally applicable identifier of physical items that is as trustworthy as a blockchain. This breaks the connection between a physical item and its digital twin and therefore limits the potential of these technical solutions.

In a new paper published in Light: Science & Applications, an interdisciplinary team of scientists proposes an innovative solution to this problem: giving physical items unique and unclonable fingerprints realized using cholesteric spherical reflectors, or CSRs for short. The team is led by Professors Jan Lagerwall (physics) and Holger Voos (robotics) from the University of Luxembourg, Luxembourg, and Prof. Mathew Schwartz (architecture, construction of the built environment) from the New Jersey Institute of Technology, U.S.

For decades, researchers have worked to design robotic hands that mimic the dexterity of human hands in the ways they grasp and manipulate objects. However, these earlier robotic hands have not been able to withstand the physical impacts that can occur in unstructured environments. A research team has now developed a compact robotic finger for dexterous hands that can also withstand physical impacts in its working environment.

The team of researchers from Harbin University of Technology (China) published their work in the journal Frontiers of Mechanical Engineering on October 14, 2022.

Robots often work in environments that are unpredictable and sometimes unsafe. Physical collisions cannot be avoided when multi-fingered robotic hands are required to work in unstructured environments, such as settings where obstacles move quickly or the robot is required to interact with humans or other robots.

Researchers have developed a hackable, multi-functional 3D printer for soft materials that is both affordable and open in design. The technology has the potential to unlock further innovation in diverse fields, such as tissue engineering, soft robotics, food, and eco-friendly material processing—aiding the creation of unprecedented designs.

The findings could help pave the way for greater use of machine learning in materials science, a field that still relies heavily on laboratory experimentation. Also, the technique of using machine learning to make predictions that are then checked in the lab could be adapted for discovery in other fields, such as chemistry and physics, say experts in materials science.

To understand why it’s a significant development, it’s worth looking at the traditional way new compounds are usually created, says Michael Titus, an assistant professor of materials engineering at Purdue University, who was not involved in the research. The process of tinkering in the lab is painstaking and inefficient.

New research from Carnegie Mellon University’s Robotics Institute can help robots feel layers of cloth rather than relying on computer vision alone to see them. The work could allow robots to assist people with household tasks like folding laundry.

Humans use their senses of sight and touch to grab a glass or pick up a piece of cloth. It is so routine that little thought goes into it. For robots, however, these tasks are extremely difficult. The amount of data gathered through touch is hard to quantify and the sense has been hard to simulate in robotics—until recently.

“Humans look at something, we reach for it, then we use touch to make sure that we’re in the right position to grab it,” said David Held, an assistant professor in the School of Computer Science and head of the Robots Perceiving and Doing (R-Pad) Lab. “A lot of the tactile sensing humans do is natural to us. We don’t think that much about it, so we don’t realize how valuable it is.”

The company, which will work with the US Space Force, has also won some NASA contracts.

It’s official: robots are here to stay in space. Robotics software and engineering company PickNik Robotics announced on Tuesday that it has won a SpaceWERX contract to work on robotics for the US Space Force, according to a press release acquired by IE.

In addition, the company recently won a NASA Small Business Innovation Research (SBIR) Phase I contract for continued work on supervised autonomy for space robotics, as well as a Colorado Advanced Industries Accelerator (AIA) grant for space robotics.
