In a major breakthrough, scientists have built a tool to predict the odour profile of a molecule, just based on its structure. It can identify molecules that look different but smell the same, as well as molecules that look very similar but smell totally different.
Professor Jane Parker, University of Reading, said: “Vision research has wavelength, hearing research has frequency – both can be measured and assessed by instruments. But what about smell? We don’t currently have a way to measure or accurately predict the odour of a molecule based on its molecular structure.

“You can get so far with current knowledge of the molecular structure, but eventually you are faced with numerous exceptions where the odour and structure don’t match. This is what has stumped previous models of olfaction. The fantastic thing about this new ML-generated model is that it correctly predicts the odour of those exceptions.”
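To make the idea concrete, the sketch below shows the general shape of a structure-to-odour predictor: a fingerprint computed from the molecular structure feeds a multi-label classifier over odour descriptors. This is an illustrative toy, not the model described above; the descriptor labels, example molecules, and the choice of RDKit Morgan fingerprints with a scikit-learn classifier are all assumptions made for the example.

```python
# Minimal sketch (not the published model): predict odour descriptors from
# molecular structure via fingerprints and a multi-label classifier.
import numpy as np
from rdkit import Chem
from rdkit.Chem import AllChem
from sklearn.ensemble import RandomForestClassifier
from sklearn.multioutput import MultiOutputClassifier

ODOUR_LABELS = ["fruity", "floral", "sulfurous"]  # hypothetical descriptor set

def featurize(smiles: str) -> np.ndarray:
    """Convert a SMILES string into a fixed-length structural fingerprint."""
    mol = Chem.MolFromSmiles(smiles)
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=2048)
    return np.array(fp)

# Toy training data: (SMILES, multi-hot odour descriptor labels)
train_smiles = ["CCOC(=O)C", "CC(=O)OCC", "CCS"]   # placeholder molecules
train_labels = np.array([[1, 0, 0], [1, 0, 0], [0, 0, 1]])

X = np.stack([featurize(s) for s in train_smiles])
model = MultiOutputClassifier(RandomForestClassifier(n_estimators=200))
model.fit(X, train_labels)

# Predict the odour profile (presence/absence of each descriptor) of an unseen structure
pred = model.predict(featurize("CCCS").reshape(1, -1))
print(dict(zip(ODOUR_LABELS, pred[0])))
```

A fixed fingerprint is the simplest possible choice here; a learned molecular representation could be substituted without changing the overall setup, and the source does not specify the exact architecture of the reported model.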
The current crop of AI robots has made giant leaps when it comes to tiny activities.
There are robots performing colonoscopies, conducting microsurgeries on blood vessels and nerve cells, designing circuit boards, constructing delicate timepieces and carrying out fine touch-up work on fading, aging classical paintings by the masters.
Robots are able to handle delicate objects thanks to what researchers call passive compliance: the ability of a robot’s joints or gripper to yield mechanically in response to contact forces, adapting to the object rather than forcing a rigid, preprogrammed motion.
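As a rough illustration of the idea (not any specific robot’s implementation), a passively compliant finger can be modelled as a spring-damper element: contact forces deflect the finger instead of being transmitted rigidly to the object. The parameter values below are invented for the example.

```python
# Illustrative sketch: a passively compliant gripper finger modelled as a
# spring-damper, so a contact force produces a deflection rather than a crush.
def compliant_deflection_step(x, v, contact_force, dt=0.001,
                              stiffness=50.0, damping=2.0, mass=0.05):
    """Advance the finger deflection x (m) and velocity v (m/s) by one time step."""
    # Net force: external contact force minus spring and damper reactions
    accel = (contact_force - stiffness * x - damping * v) / mass
    v += accel * dt
    x += v * dt
    return x, v

x, v = 0.0, 0.0
for _ in range(1000):                  # 1 s of simulated contact at 0.5 N
    x, v = compliant_deflection_step(x, v, contact_force=0.5)
print(f"steady-state deflection ~ {x:.4f} m")   # approaches force / stiffness = 0.01 m
```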
Other questions to the experts in this canvassing invited their views on the hopeful things that will occur in the next decade and asked for examples of specific applications that might emerge. What will human-technology co-evolution look like by 2030? Participants in this canvassing expect the rate of change to fall anywhere in a range from incremental to extremely impactful. Generally, they expect AI to continue to be targeted toward efficiencies in workplaces and other activities, and they say it is likely to be embedded in most human endeavors.
The greatest share of participants in this canvassing said automated systems driven by artificial intelligence are already improving many dimensions of their work, play and home lives, and they expect this to continue over the next decade. While they worry over the accompanying negatives of human-AI advances, they hope for broad changes for the better as networked, intelligent systems revolutionize everything, from the most pressing professional work to hundreds of the little “everyday” aspects of existence.
One respondent’s answer covered many of the improvements experts expect as machines sit alongside humans as their assistants and enhancers. An associate professor at a major university in Israel wrote, “In the coming 12 years AI will enable all sorts of professions to do their work more efficiently, especially those involving ‘saving life’: individualized medicine, policing, even warfare (where attacks will focus on disabling infrastructure and less in killing enemy combatants and civilians). In other professions, AI will enable greater individualization, e.g., education based on the needs and intellectual abilities of each pupil/student. Of course, there will be some downsides: greater unemployment in certain ‘rote’ jobs (e.g., transportation drivers, food service, robots and automation, etc.).”
CAMBRIDGE, MA — The Kempner Institute for the Study of Natural and Artificial Intelligence at Harvard University announces the appointment of Dr. Kanaka Rajan as the first faculty member hired within the recently launched institute. As a founding faculty member at the Kempner, Dr. Rajan will serve as an institute investigator. She will also hold a dual appointment, serving as a member of the faculty in the Department of Neurobiology at Harvard Medical School.
Working jointly with the HMS Department of Neurobiology and the Kempner Institute, Dr. Rajan will support the intersecting research, scientific, and educational missions of both communities. Dr. Rajan starts in September 2023.
“We are thrilled to have Dr. Rajan join the Kempner, where she will play a key role in helping to shape and advance the institute’s research program,” said Kempner Co-Director Bernardo Sabatini. “She is a true leader in the field, using innovative techniques to tackle big, difficult questions, and expanding the possibilities for how we use artificial intelligence and machine learning to understand the enduring mysteries of the brain.”
Evolutionary robotics is a sub-field of robotics aimed at developing artificial “organisms” that can improve their capabilities and body configuration in response to their surroundings, just as humans and animals evolve, adapting their skills and appearance over time. A growing number of roboticists have been trying to develop these evolvable robotic systems, leveraging recent artificial intelligence (AI) advances.
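The core loop of such a system can be sketched in a few lines: a population of candidate robot “genomes” is scored, and the fittest are mutated to form the next generation. The genome encoding and fitness function below are toy placeholders standing in for a real body/controller encoding and a physics simulation; this is a sketch of the general approach, not any group’s actual system.

```python
# Minimal sketch of an evolutionary-robotics loop: evaluate, select, mutate.
import random

GENOME_LENGTH = 8          # hypothetical: e.g. limb lengths plus controller gains

def evaluate(genome):
    """Placeholder fitness: in practice this would run a physics simulation
    (or a real robot trial) and return, e.g., distance travelled."""
    return -sum((g - 0.5) ** 2 for g in genome)

def mutate(genome, sigma=0.1):
    """Apply small Gaussian perturbations to every gene."""
    return [g + random.gauss(0, sigma) for g in genome]

population = [[random.random() for _ in range(GENOME_LENGTH)] for _ in range(20)]
for generation in range(50):
    scored = sorted(population, key=evaluate, reverse=True)
    parents = scored[:5]                        # keep the fittest designs
    population = parents + [mutate(random.choice(parents)) for _ in range(15)]

best = max(population, key=evaluate)
print("best fitness:", round(evaluate(best), 4))
```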
A key challenge in this field is to effectively transfer robots from simulations to real-world environments without compromising their performance and abilities. A paper by researchers at the University of York, Edinburgh Napier University, Vrije Universiteit Amsterdam, the University of the West of England and the University of Sunderland, published in Frontiers in Robotics and AI, investigated the impact that hardware can have on the development space of evolvable robots.
“One of the greatest challenges for evolutionary robotics is bringing it into the hardware space and creating real, useful robots,” Mike Angus, a research engineer who designed hardware for the study, told Tech Xplore.
“Operating and navigating in home environments is very challenging for robots. Every home is unique, with a different combination of objects in distinct configurations that change over time. To address the diversity a robot faces in a home environment, we teach the robot to perform arbitrary tasks with a variety of objects, rather than program the robot to perform specific predefined tasks with specific objects. In this way, the robot learns to link what it sees with the actions it is taught. When the robot sees a specific object or scenario again, even if the scene has changed slightly, it knows what actions it can take with respect to what it sees.
“We teach the robot using an immersive telepresence system, in which there is a model of the robot, mirroring what the robot is doing. The teacher sees what the robot is seeing live, in 3D, from the robot’s sensors. The teacher can select different behaviors to instruct and then annotate the 3D scene, such as associating parts of the scene with a behavior, specifying how to grasp a handle, or drawing the line that defines the axis of rotation of a cabinet door. When teaching a task, a person can try different approaches, making use of their creativity to use the robot’s hands and tools to perform the task. This makes using different tools easy, allowing humans to quickly transfer their knowledge to the robot for specific situations.
“Historically, robots, like most automated cars, continuously perceive their surroundings, predict a safe path, then compute a plan of motions based on this understanding. At the other end of the spectrum, new deep learning methods compute low-level motor actions directly from visual inputs, which requires a significant amount of data from the robot performing the task. We take a middle ground. Our teaching system only needs to understand things around it that are relevant to the behavior being performed. Instead of linking low-level motor actions to what it sees, it uses higher-level behaviors. As a result, our system does not need prior object models or maps. It can be taught to associate a given set of behaviors to arbitrary scenes, objects, and voice commands from a single demonstration of the behavior. This also makes the system easy to understand and makes failure conditions easy to diagnose and reproduce.”
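The scene-to-behavior association described in the quote can be illustrated, very loosely, as nearest-neighbour retrieval over scene embeddings: each demonstration stores an embedding paired with a taught behavior, and a new scene triggers the behavior whose demonstration it most resembles. The embed() function, class names and toy feature vectors below are hypothetical stand-ins, not the actual system described above.

```python
# Illustrative sketch: linking taught high-level behaviors to scenes via
# nearest-neighbour matching of scene embeddings.
import numpy as np

def embed(scene) -> np.ndarray:
    """Hypothetical perception front end: map raw sensor data to a feature vector."""
    return np.asarray(scene, dtype=float)

class BehaviorMemory:
    def __init__(self):
        self.examples = []              # list of (scene embedding, behavior name)

    def teach(self, scene, behavior: str):
        """Store a single demonstration: in this scene, this behavior applies."""
        self.examples.append((embed(scene), behavior))

    def recall(self, scene) -> str:
        """Return the taught behavior whose demonstration scene is most similar."""
        query = embed(scene)
        dists = [np.linalg.norm(query - e) for e, _ in self.examples]
        return self.examples[int(np.argmin(dists))][1]

memory = BehaviorMemory()
memory.teach([0.9, 0.1, 0.0], "open_cabinet_door")    # toy scene features
memory.teach([0.1, 0.8, 0.2], "grasp_handle")

# A slightly changed scene still maps to the closest taught behavior
print(memory.recall([0.85, 0.15, 0.05]))              # -> open_cabinet_door
```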
Everyone’s experienced the regret of telling a secret they should’ve kept. Once that information is shared, it can’t be taken back. It’s just part of the human experience.
Now it’s part of the AI experience, too. Whenever someone shares something with a generative AI tool — whether it’s a transcript they’re trying to turn into a paper or financial data they’re attempting to analyze — it cannot be taken back.
Generative AI solutions such as ChatGPT and Google’s Bard have been dominating headlines. The technologies show massive promise for a myriad of use cases and have already begun to change the way we work. But along with these big new opportunities come big risks.
Generative AI is dominating the conversation in 2023, and the design community is no exception to its transformative influence. Product innovations fueled by emerging AI capabilities have the potential to unlock new opportunities and put the power of real-time intelligence in customers’ hands like never before.
As a design leader focused on creating innovative products and solutions for millions of our consumers and for thousands of our employees, I find AI’s potential particularly exciting for the design discipline. New technological advances like generative AI, computer vision, natural language processing and large language models can augment, complement and elevate the capabilities of designers, enabling them to focus on work that delivers maximum value to their users. At the same time, there are ongoing and important conversations about designing and implementing new safeguards and frameworks to mitigate risk and ensure the responsible application of AI.
Let’s take a closer look at the dynamic intersection of AI and design, focusing on how AI-enhanced design tools can streamline designer workflows, improve outputs and fuel product innovation.