
This AI Paper Introduces the Complexity-Impacted Reasoning Score (CIRS): Evaluating the Role of Code Complexity in Enhancing the Reasoning Abilities of Large Language Models

Large language models (LLMs) have become a general-purpose approach to problem-solving in embodied artificial intelligence. When agents need to understand the semantic nuances of their environment for efficient control, LLMs’ reasoning skills are crucial. Recent methods, referred to as “programs of thought,” use programming languages as an improved prompting mechanism for challenging reasoning tasks. Unlike chain-of-thought prompting, program-of-thought prompting decomposes a problem into executable code segments and handles them one at a time. However, the relationship between the use of programming languages and the improvement of LLMs’ reasoning abilities has received little systematic study. The crucial question remains: when does program-of-thought prompting work for reasoning?
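To make the setup concrete, here is a minimal sketch of program-of-thought prompting, assuming a hypothetical `complete(prompt) -> str` helper that stands in for any LLM call (not an API from the paper): the model is asked to answer with runnable code, and the final answer comes from executing that code rather than from generated text.

```python
# Minimal program-of-thought (PoT) sketch. `complete(prompt) -> str` is a
# hypothetical stand-in for an LLM call; PoT asks the model for executable
# code instead of a free-form chain of thought, then runs that code.

POT_PROMPT = """Question: A store sells pens at $3 each. Ben buys 4 pens
and pays with a $20 bill. How much change does he get?

Write Python that computes the result and stores it in a variable `answer`.
"""

def solve_with_pot(complete):
    code = complete(POT_PROMPT)   # the model returns a code rationale
    namespace = {}
    exec(code, namespace)         # execute the generated program (trusted input assumed)
    return namespace["answer"]

# The model might return, for example:
#   price, n, paid = 3, 4, 20
#   answer = paid - price * n    # -> 8
```

Because the rationale is executable, the arithmetic is delegated to the interpreter instead of the model, which is exactly the step where free-form chain-of-thought answers tend to slip.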

This paper proposes the complexity-impacted reasoning score (CIRS), a comprehensive metric for the relationship between code reasoning steps and their effect on LLMs’ reasoning abilities. The authors contend that programming languages are inherently superior to serialized natural language because they model complex structures better, and their innately procedure-oriented logic helps with problems that require multi-step reasoning. Their metric therefore assesses code complexity from both a structural and a logical standpoint. Specifically, they compute the structural complexity of code reasoning steps (rationales) using the abstract syntax tree (AST). The method relies on three AST indicators (node count, node type, and depth) to retain the structural information encoded in the tree, yielding a thorough characterization of code structure.
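As a rough illustration of the structural side, the three AST indicators are straightforward to read off with Python’s built-in `ast` module; this is a sketch of the idea, not the paper’s exact normalization:

```python
import ast

def structural_indicators(code: str) -> dict:
    """Compute the three AST indicators named in the paper:
    node count, number of distinct node types, and tree depth."""
    tree = ast.parse(code)
    nodes = list(ast.walk(tree))

    def depth(node: ast.AST) -> int:
        children = list(ast.iter_child_nodes(node))
        return 1 + max((depth(c) for c in children), default=0)

    return {
        "node_count": len(nodes),
        "node_types": len({type(n).__name__ for n in nodes}),
        "depth": depth(tree),
    }

print(structural_indicators("for i in range(3):\n    print(i * i)"))
# e.g. {'node_count': 18, 'node_types': 10, 'depth': 7} on CPython 3.11
```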

Researchers from Zhejiang University, Donghai Laboratory, and the National University of Singapore then measure logical complexity by combining the Halstead difficulty measure with McCabe’s cyclomatic complexity. This takes the code’s operators, operands, and control flow into account, so the complexity of the logic within the code can be calculated explicitly. Through an empirical investigation using CIRS, they find that current LLMs have a limited grasp of symbolic information such as code, and that not all complex code data can be learned and understood by LLMs. Low-complexity code blocks lack the necessary information, while high-complexity code blocks may be too difficult for LLMs to absorb. Only code data of appropriate structural and logical complexity, simple enough to follow yet detailed enough to be informative, effectively improves LLMs’ reasoning abilities.
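A back-of-the-envelope version of the logical side could look like the following, assuming the classic Halstead difficulty D = (n1 / 2) * (N2 / n2), where n1 is the number of distinct operators, n2 the number of distinct operands, and N2 the total operand occurrences, with McCabe’s cyclomatic complexity counted as decision points plus one; the paper’s exact operator and operand inventory may differ:

```python
import ast

# AST node classes treated as decision points and as operators; Python's
# abstract base classes (ast.operator, ast.boolop, ...) cover Add, And, Eq, etc.
DECISIONS = (ast.If, ast.IfExp, ast.For, ast.While, ast.ExceptHandler,
             ast.And, ast.Or)
OPERATORS = (ast.operator, ast.boolop, ast.cmpop, ast.unaryop)

def logical_indicators(code: str) -> dict:
    """Halstead difficulty D = (n1 / 2) * (N2 / n2) plus McCabe's
    cyclomatic complexity (decision points + 1), both read off the AST."""
    operators, operands, decisions = [], [], 0
    for node in ast.walk(ast.parse(code)):
        if isinstance(node, OPERATORS):
            operators.append(type(node).__name__)
        elif isinstance(node, ast.Name):
            operands.append(node.id)
        elif isinstance(node, ast.Constant):
            operands.append(repr(node.value))
        if isinstance(node, DECISIONS):
            decisions += 1
    n1 = len(set(operators))
    n2 = max(len(set(operands)), 1)   # guard against division by zero
    difficulty = (n1 / 2) * (len(operands) / n2)
    return {"halstead_difficulty": difficulty, "cyclomatic": decisions + 1}
```

CIRS itself fuses the structural and logical perspectives into a single score; the sketch above only reproduces the two ingredients.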

AI ‘nose’ predicts smells from molecular structures

In a major breakthrough, scientists have built a tool to predict the odor profile of a molecule, just based on its structure. It can identify molecules that look different but smell the same, as well as molecules that look very similar but smell totally different. The research was published in Science.

Professor Jane Parker, University of Reading, said, “Vision research has wavelength, hearing research has frequency—both can be measured and assessed by instruments. But what about smell? We don’t currently have a way to measure or accurately predict the odor of a molecule based on its structure.”

“You can get so far with current knowledge of the molecular structure, but eventually you are faced with numerous exceptions where the odor and structure don’t match. This is what has stumped previous models of olfaction. The fantastic thing about this new ML-generated model is that it correctly predicts the odor of those exceptions.”

How Will We Know If AI Is Conscious? Neuroscientists Now Have a Checklist

Most deep learning models are loosely based on the brain’s inner workings, and AI agents are increasingly endowed with human-like decision-making algorithms. The idea that machine intelligence could one day become sentient no longer seems like science fiction.

How could we tell if machine brains one day gained sentience? The answer may be based on our own brains.

A preprint paper authored by 19 neuroscientists, philosophers, and computer scientists, including Dr. Robert Long from the Center for AI Safety and Dr. Yoshua Bengio from the University of Montreal, argues that the neurobiology of consciousness may be our best bet. Rather than simply studying an AI agent’s behavior or responses—for example, during a chat—matching its responses to theories of human consciousness could provide a more objective ruler.

Apple is using machine learning everywhere in iOS

Apple may not be as flashy as other companies in adopting artificial intelligence features. Still, the iPhone already has a lot of smarts scattered throughout iOS.

Apple does not go out of its way to name-drop “artificial intelligence” or AI in any meaningful way, but the company isn’t avoiding the technology. Machine learning has become Apple’s catch-all term for its AI initiatives.

Apple uses artificial intelligence and machine learning in iOS in several noticeable ways. Here is a quick breakdown of where you’ll find it.

Researchers figure out how to build an artificial brain from the bottom up

Rice University scientists are starting small as they begin to figure out how to build an artificial brain from the bottom up.

Electrical and computer engineer Jacob Robinson of Rice’s Brown School of Engineering and Celina Juliano, an assistant professor of molecular and cellular biology at the University of California, Davis, have won a $1 million Keck Foundation grant to advance the team’s synthetic neurobiology effort to define the connections between neurons and muscles that drive programmed behaviors in living animals.

To begin with, Robinson and his colleagues are putting their faith in a very small animal: the freshwater cnidarian Hydra vulgaris, a tiny tentacled creature that has long been a focus of study in the Robinson and Juliano labs. Because hydras are small, squishy, and transparent, they are easy to manipulate and measure with Robinson’s custom microfluidic platforms.

Large language models aren’t people. Let’s stop testing them as if they were

When Taylor Webb played around with GPT-3 in early 2022, he was blown away by what OpenAI’s large language model appeared to be able to do. Here was a neural network trained only to predict the next word in a block of text, a jumped-up autocomplete. And yet it gave correct answers to many of the abstract problems that Webb set for it, the kind of thing you’d find in an IQ test. “I was really shocked by its ability to solve these problems,” he says. “It completely upended everything I would have predicted.”

Webb is a psychologist at the University of California, Los Angeles, who studies the different ways people and computers solve abstract problems. He was used to building neural networks that had specific reasoning capabilities bolted on. But GPT-3 seemed to have learned them for free.

Google DeepMind has launched a watermarking tool for AI-generated images

Google DeepMind is the first Big Tech firm to publicly launch such a tool, after a group of companies pledged at the White House in July to develop watermarking for AI-generated content.

The tool, called SynthID, will initially be available only to users of Google’s AI image generator Imagen, which is hosted on Google Cloud’s machine learning platform Vertex. Users will be able to generate images using Imagen and then choose whether to add a watermark or not. The hope is that it could help people tell when AI-generated content is being passed off as real, or help protect copyright.
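Google has not published how SynthID works, and the snippet below is not its algorithm; it is a toy least-significant-bit scheme meant only to illustrate the general idea of a mark that is invisible to viewers but recoverable by a detector holding the embedding key. A production system such as SynthID must also survive cropping, resizing, and recompression, which this toy scheme would not:

```python
import numpy as np

def embed(image: np.ndarray, bits: np.ndarray, key: int = 42) -> np.ndarray:
    """Hide watermark bits in the LSBs of key-selected pixels (toy scheme)."""
    rng = np.random.default_rng(key)
    flat = image.flatten()                      # flatten() copies the array
    idx = rng.choice(flat.size, size=bits.size, replace=False)
    flat[idx] = (flat[idx] & 0xFE) | bits       # overwrite least-significant bits
    return flat.reshape(image.shape)

def detect(image: np.ndarray, n_bits: int, key: int = 42) -> np.ndarray:
    """Recover the bits using the same key the embedder used."""
    rng = np.random.default_rng(key)
    idx = rng.choice(image.size, size=n_bits, replace=False)
    return image.flatten()[idx] & 1

img = np.random.default_rng(0).integers(0, 256, (64, 64), dtype=np.uint8)
mark = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)
assert np.array_equal(detect(embed(img, mark), mark.size), mark)
```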

Scientists engineer affordable safe soft robotic hand

The new robotic device is designed to be mass-produced.

Soft robotics is all the rage, with researchers coming up with new and improved designs all the time. There are soft robots that mimic muscles, soft robots that squeeze into tiny places, soft robots designed to function like seals, and even soft robots that split into smaller units.

There is a good reason why scientists are determined to keep producing these devices. The gentle machines hold better promise of adapting well to human populations, but so far they have been notoriously expensive to engineer, which has made them difficult to mass-produce.

Robots for household chores less than 10 years away: expert

The humanoid machine can undertake all kinds of general-purpose tasks.

A new report by the BBC quotes Geordie Rose, chief executive of Sanctuary AI, a firm engineering a humanoid robot for household chores and general-purpose tasks, who says the technology is less than 10 years away.

Ten years: an eternity

“Ten years at the pace the technology is moving now is an eternity. You know, every month, there’s new developments in the AI world that are like fundamental change,” Rose told the news outlet.
