
“It’s a time of huge uncertainty,” says Geoffrey Hinton from the living room of his home in London. “Nobody really knows what’s going to happen … I’m just sounding the alarm.”

In The Godfather in Conversation, the cognitive psychologist and computer scientist known as the "Godfather of AI" explains why, after a lifetime spent developing a type of artificial intelligence known as deep learning, he is suddenly warning about existential threats to humanity.

Hinton, a University Professor Emeritus at the University of Toronto, explains how neural nets work, the role he and others played in developing them, and why the kind of digital intelligence that powers ChatGPT and Google's PaLM may hold an unexpected advantage over our own. And he lays out his concerns about how the world could lose control of a technology that, paradoxically, also promises to unleash huge benefits – from treating diseases to combatting climate change.

Deep learning (DL) performs classification tasks using a series of layers, making local decisions progressively along the way. But can a network instead make an all-encompassing decision, by choosing the most influential path to the output rather than deciding locally at each layer?

In an article published today in Scientific Reports, researchers from Bar-Ilan University in Israel answer this question with a resounding “yes.” Pre-existing deep architectures have been improved by updating the most influential paths to the output.

“One can think of it as two children who wish to climb a mountain with many twists and turns. One of them chooses the fastest local route at every intersection while the other uses binoculars to see the entire path ahead and picks the shortest and most significant route, just like Google Maps or Waze. The first child might get a head start, but the second will end up winning,” said Prof. Ido Kanter, of Bar-Ilan’s Department of Physics and Gonda (Goldschmied) Multidisciplinary Brain Research Center, who led the research.
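The analogy maps onto the classic contrast between greedy local choices and a global shortest-path search. A minimal sketch on a toy weighted graph (the graph and function names are hypothetical illustrations, not the paper's method):

```python
import heapq

# Hypothetical toy graph (acyclic): edge weights are "route costs".
graph = {
    "start": {"a": 1, "b": 4},
    "a": {"end": 10},
    "b": {"end": 2},
    "end": {},
}

def greedy_path(graph, src, dst):
    """The first child: take the locally cheapest edge at every intersection."""
    path, cost, node = [src], 0, src
    while node != dst:
        nxt = min(graph[node], key=graph[node].get)  # local decision only
        cost += graph[node][nxt]
        path.append(nxt)
        node = nxt
    return path, cost

def shortest_path(graph, src, dst):
    """The second child: Dijkstra's algorithm considers the whole route ahead."""
    queue = [(0, src, [src])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dst:
            return path, cost
        if node in seen:
            continue
        seen.add(node)
        for nxt, w in graph[node].items():
            heapq.heappush(queue, (cost + w, nxt, path + [nxt]))

print(greedy_path(graph, "start", "end"))    # (['start', 'a', 'end'], 11)
print(shortest_path(graph, "start", "end"))  # (['start', 'b', 'end'], 6)
```

The greedy walker grabs the cheap first edge and pays for it later; the global search accepts a more expensive first step and wins overall, which is the intuition behind updating the most influential paths rather than making purely local decisions.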

Join Dr. Ben Goertzel, the visionary CEO and Founder of SingularityNET, as he delves into the compelling realm of large language models. In this Dublin Tech Summit keynote presentation, Dr. Goertzel will navigate the uncharted territories of AI, discussing the imminent impact of large language models on innovation across industries. Discover the intricacies, challenges, and prospects of developing and deploying these transformative tools. Gain insights into the future of AI, as Dr. Goertzel unveils his visionary perspective on the role of large language models in shaping the AI landscape. Tune in to explore the boundless potentials of AI and machine learning in this thought-provoking session.

Themes: AI & Machine Learning | Innovation | Future of Technology | Language Models | Industry Transformation.
Keynote: Dr. Ben Goertzel, CEO and Founder, SingularityNET
#dubtechsummit

Robots based on soft materials are often better at replicating the appearance, movements and abilities of both humans and animals. While there are now countless soft robots, many of these are difficult to produce on a large-scale, due to the high cost of their components or their complex fabrication process.

Researchers at the University of Coimbra in Portugal recently developed a new soft robotic hand that could be more affordable and easier to fabricate. Their design, introduced in Cyborg and Bionic Systems, integrates soft actuators with an exoskeleton, both of which can be produced using scalable techniques.

“Most robots are made of rigid materials,” Pedro Neto, one of the researchers who carried out the study, told Tech Xplore. “However, when we observe animals, we notice that their bodies can be composed of hard parts (skeletons) and soft parts (such as muscles). Some animals, like earthworms, are entirely soft-bodied. Taking inspiration from nature, we anticipate that the next generation of robots will incorporate components made of soft materials or, in some cases, be entirely soft-bodied.”

Large language models (LLMs) have become a general-purpose approach to embodied artificial intelligence problem-solving. When agents need to understand the semantic nuances of their environment for efficient control, LLMs’ reasoning skills are crucial. Recent methods, referred to as “program of thought,” use programming languages as an improved prompting system for challenging reasoning tasks. Unlike chain-of-thought prompting, program-of-thought prompting separates a problem into executable code segments and deals with them one at a time. However, the relationship between the use of programming languages and the development of LLMs’ reasoning skills has yet to receive enough research. When does program-of-thought prompting work for reasoning? That remains the crucial question.

This paper proposes the complexity-impacted reasoning score (CIRS), a comprehensive metric for the link between code reasoning steps and their effect on LLMs’ reasoning abilities. The authors contend that programming languages are inherently better suited than serialized natural language because they model complex structures explicitly, and their procedure-oriented logic aids multi-step reasoning. Their metric therefore assesses code complexity from both a structural and a logical standpoint. In particular, the structural complexity of code reasoning steps (rationales) is computed over an abstract syntax tree (AST). The method uses three AST indicators (node count, node type, and depth), retaining all the structural information represented in the tree for a thorough picture of code structure.
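The three structural indicators can be illustrated with Python's standard `ast` module. This is a minimal sketch; the function name and exact counting rules are assumptions, not the paper's implementation:

```python
import ast

def structural_complexity(code: str) -> dict:
    """Compute the three AST indicators mentioned above:
    total node count, number of distinct node types, and tree depth.
    (Illustrative sketch only, not the CIRS authors' code.)"""
    tree = ast.parse(code)
    nodes = list(ast.walk(tree))

    def depth(node) -> int:
        children = list(ast.iter_child_nodes(node))
        return 1 + max((depth(c) for c in children), default=0)

    return {
        "nodes": len(nodes),
        "types": len({type(n).__name__ for n in nodes}),
        "depth": depth(tree),
    }

simple = "x = 1 + 2"
nested = "def f(n):\n    return 1 if n <= 1 else n * f(n - 1)"
print(structural_complexity(simple))
print(structural_complexity(nested))
```

As expected, the nested recursive function scores higher on all three indicators than the flat assignment, which is the sense in which richer code structure signals a more complex reasoning step.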

Researchers from Zhejiang University, Donghai Laboratory and the National University of Singapore determine logical complexity by combining coding difficulty with cyclomatic complexity, drawing inspiration from Halstead’s and McCabe’s metrics. This takes into account the code’s operators, operands, and control flow, so the logical complexity within the code can be calculated explicitly. Through an empirical investigation using CIRS, they discover that present LLMs have a restricted comprehension of symbolic information like code, and that not all sophisticated code data can be learned and understood by LLMs. Low-complexity code blocks lack the necessary information, while high-complexity code blocks may be too challenging for LLMs to understand. To effectively improve the reasoning abilities of LLMs, only code data with an appropriate amount of complexity (structure and logic), both basic and detailed, is needed.
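The logical side can likewise be sketched with standard-library tools: a rough McCabe-style branch count plus a Halstead-style operator/operand tally. These are coarse approximations for illustration, not the authors' exact formulas:

```python
import ast

def cyclomatic_complexity(code: str) -> int:
    """McCabe's metric, approximated as 1 + the number of branching nodes."""
    branches = (ast.If, ast.For, ast.While, ast.IfExp,
                ast.BoolOp, ast.ExceptHandler)
    return 1 + sum(isinstance(n, branches) for n in ast.walk(ast.parse(code)))

def halstead_counts(code: str) -> tuple:
    """Rough Halstead-style tally: (operators, operands).
    Operators = arithmetic/comparison/boolean ops; operands = names/constants."""
    ops, operands = 0, 0
    for n in ast.walk(ast.parse(code)):
        if isinstance(n, (ast.operator, ast.cmpop, ast.boolop, ast.unaryop)):
            ops += 1
        elif isinstance(n, (ast.Name, ast.Constant)):
            operands += 1
    return ops, operands

code = (
    "def f(xs):\n"
    "    total = 0\n"
    "    for x in xs:\n"
    "        if x > 0:\n"
    "            total = total + x\n"
    "    return total"
)
print(cyclomatic_complexity(code))  # 3: one loop + one branch, plus the base 1
print(halstead_counts(code))
```

Combining counts like these with the structural indicators gives a single score for how "hard" a code rationale is, which is the spirit of filtering training data to an appropriate complexity band.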

In a major breakthrough, scientists have built a tool to predict the odor profile of a molecule, just based on its structure. It can identify molecules that look different but smell the same, as well as molecules that look very similar but smell totally different. The research was published in Science.

Professor Jane Parker, University of Reading, said, “Vision research has wavelength, hearing research has frequency—both can be measured and assessed by instruments. But what about smell? We don’t currently have a way to measure or accurately predict the odor of a molecule, based on its structure.”

“You can get so far with current knowledge of the molecular structure, but eventually you are faced with numerous exceptions where the odor and structure don’t match. This is what has stumped previous models of olfaction. The fantastic thing about this new ML-generated model is that it correctly predicts the odor of those exceptions.”

Most deep learning models are loosely based on the brain’s inner workings, and AI agents are increasingly endowed with human-like decision-making algorithms. The idea that machine intelligence could one day become sentient no longer seems like science fiction.

How could we tell if machine brains one day gained sentience? The answer may be based on our own brains.

A preprint paper authored by 19 neuroscientists, philosophers, and computer scientists, including Dr. Robert Long from the Center for AI Safety and Dr. Yoshua Bengio from the University of Montreal, argues that the neurobiology of consciousness may be our best bet. Rather than simply studying an AI agent’s behavior or responses—for example, during a chat—matching its responses to theories of human consciousness could provide a more objective ruler.

Apple may not be as flashy as other companies in adopting artificial intelligence features. Still, the iPhone already has a lot of smarts scattered throughout iOS.

Apple does not go out of its way to name-drop “artificial intelligence” or AI, but the company isn’t avoiding the technology. “Machine learning” has become Apple’s catch-all term for its AI initiatives.

Apple uses artificial intelligence and machine learning in iOS in several noticeable ways. Here is a quick breakdown of where you’ll find it.

Rice University scientists are starting small as they begin to figure out how to build an artificial brain from the bottom up.

Electrical and computer engineer Jacob Robinson of Rice’s Brown School of Engineering and Celina Juliano, an assistant professor of molecular and cellular biology at the University of California, Davis, have won a $1 million Keck Foundation grant to advance the team’s synthetic neurobiology effort to define the connections between neurons and muscles that drive programmed behaviors in living animals.

To begin with, Robinson and his colleagues are putting their faith in a very small animal, the freshwater cnidarian Hydra vulgaris, a tiny tentacled creature that has long been a focus of study in the Robinson and Juliano labs. Because they are small, squishy and transparent, they’re easy to manipulate and measure through Robinson’s custom microfluidic platforms.