
Join Dr. Ben Goertzel, the visionary CEO and Founder of SingularityNET, as he delves into the compelling realm of large language models. In this Dublin Tech Summit keynote presentation, Dr. Goertzel will navigate the uncharted territories of AI, discussing the imminent impact of large language models on innovation across industries. Discover the intricacies, challenges, and prospects of developing and deploying these transformative tools. Gain insights into the future of AI, as Dr. Goertzel unveils his visionary perspective on the role of large language models in shaping the AI landscape. Tune in to explore the boundless potentials of AI and machine learning in this thought-provoking session.

Themes: AI & Machine Learning | Innovation | Future of Technology | Language Models | Industry Transformation.
Keynote: Dr. Ben Goertzel, CEO and Founder, SingularityNET
#dubtechsummit

Robots based on soft materials are often better at replicating the appearance, movements and abilities of both humans and animals. While there are now countless soft robots, many of these are difficult to produce on a large-scale, due to the high cost of their components or their complex fabrication process.

Researchers at University of Coimbra in Portugal recently developed a new soft robotic hand that could be more affordable and easier to fabricate. Their design, introduced in Cyborg and Bionic Systems, integrates soft actuators with an exoskeleton, both of which can be produced using scalable techniques.

“Most robots are made of rigid materials,” Pedro Neto, one of the researchers who carried out the study, told Tech Xplore. “However, when we observe animals, we notice that their bodies can be composed of hard parts (skeletons) and soft parts (such as muscles). Some animals, like earthworms, are entirely soft-bodied. Taking inspiration from nature, we anticipate that the next generation of robots will incorporate components made of soft materials or, in some cases, be entirely soft-bodied.”

Large language models (LLMs) have become a general-purpose approach to embodied artificial intelligence problem-solving. When agents need to understand the semantic nuances of their environment for efficient control, LLMs’ reasoning skills are crucial in embodied AI. Recent methods, referred to as “program of thought,” use programming languages as an improved prompting mechanism for challenging reasoning tasks. Unlike chain-of-thought prompting, program-of-thought prompting separates a problem into executable code segments and deals with them one at a time. However, the relationship between the use of programming languages and the development of LLMs’ thinking skills has yet to receive enough research. The crucial question remains: when does program-of-thought prompting work for reasoning?
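As a rough illustration (not drawn from the paper), a program-of-thought rationale replaces prose reasoning with short runnable code, so the final answer comes from executing the steps rather than stating them in text. The word problem and variable names below are made up for the example.

# Hypothetical program-of-thought style rationale for a simple word problem:
# "A library had 120 books. It bought 3 boxes of 24 books each and then
# lent out 40 books. How many books does it have now?"
initial_books = 120
boxes_bought = 3
books_per_box = 24
books_lent_out = 40

# Each reasoning step is an executable statement instead of a sentence.
books_after_purchase = initial_books + boxes_bought * books_per_box
answer = books_after_purchase - books_lent_out
print(answer)  # 152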

This paper proposes the complexity-impacted reasoning score (CIRS), a comprehensive metric for the link between code reasoning steps and their effect on LLMs’ reasoning abilities. The authors contend that programming languages are inherently better suited than serialized natural language to modeling complex structures, and that their procedure-oriented logic helps in solving problems that require multiple reasoning steps. Their proposed metric therefore assesses code complexity from both a structural and a logical standpoint. In particular, they compute the structural complexity of code reasoning steps (rationales) using an abstract syntax tree (AST), relying on three AST indicators (node count, node type, and depth) to retain the structural information encoded in the tree and capture the code’s structure thoroughly.
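The three indicators can be read directly off Python’s standard ast module. The sketch below only illustrates what “node count, node type, and depth” mean for a code rationale; how CIRS actually weights and combines them is defined in the paper.

import ast

def ast_structural_indicators(code: str) -> dict:
    # Parse the code rationale and walk its abstract syntax tree.
    tree = ast.parse(code)
    nodes = list(ast.walk(tree))

    def depth(node: ast.AST) -> int:
        children = list(ast.iter_child_nodes(node))
        return 1 + max((depth(child) for child in children), default=0)

    return {
        "node_count": len(nodes),                              # total AST nodes
        "node_types": len({type(n).__name__ for n in nodes}),  # distinct node kinds
        "depth": depth(tree),                                  # depth of the tree
    }

print(ast_structural_indicators("def add(a, b):\n    return a + b"))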

Researchers from Zhejiang University, Donghai Laboratory and National University of Singapore developed a way to determine logical complexity by combining coding difficulty with cyclomatic complexity, drawing inspiration from the ideas of Halstead and McCabe. This makes it possible to account for the code’s operators, operands, and control flow, and to explicitly calculate the complexity of the logic within the code. Through an empirical investigation using the proposed CIRS, they find that current LLMs have a limited comprehension of symbolic information such as code, and that not all complex code data can be learned and understood by LLMs. Low-complexity code blocks lack the necessary information, while high-complexity code blocks can be too challenging for LLMs to understand. To effectively improve the reasoning abilities of LLMs, only code data with an appropriate level of complexity (in both structure and logic), neither too simple nor too intricate, is needed.
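The two ingredients of the logical score can be approximated in a few lines, again as a sketch rather than the paper’s implementation: Halstead difficulty is estimated from operator and operand counts as D = (n1 / 2) * (N2 / n2), and cyclomatic complexity as the number of branch points plus one. Which AST nodes count as operators, operands, and branches here is an assumption made for this illustration.

import ast

# Rough choice of "branch" node types for the cyclomatic estimate (assumption).
BRANCH_NODES = (ast.If, ast.For, ast.While, ast.Try, ast.BoolOp, ast.IfExp)

def logical_complexity(code: str) -> dict:
    tree = ast.parse(code)

    operators, operands = [], []
    for node in ast.walk(tree):
        if isinstance(node, (ast.operator, ast.unaryop, ast.boolop, ast.cmpop)):
            operators.append(type(node).__name__)   # e.g. Add, Mult, Eq
        elif isinstance(node, ast.Name):
            operands.append(node.id)                # variable references
        elif isinstance(node, ast.Constant):
            operands.append(repr(node.value))       # literals

    n1 = len(set(operators)) or 1   # distinct operators
    n2 = len(set(operands)) or 1    # distinct operands
    N2 = len(operands)              # total operand occurrences
    halstead_difficulty = (n1 / 2) * (N2 / n2)

    # Cyclomatic complexity ~ number of decision points + 1.
    cyclomatic = 1 + sum(isinstance(n, BRANCH_NODES) for n in ast.walk(tree))

    return {"halstead_difficulty": halstead_difficulty,
            "cyclomatic_complexity": cyclomatic}

print(logical_complexity("total = 0\nfor x in range(5):\n    if x % 2 == 0:\n        total += x"))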

In a major breakthrough, scientists have built a tool to predict the odor profile of a molecule, just based on its structure. It can identify molecules that look different but smell the same, as well as molecules that look very similar but smell totally different. The research was published in Science.

Professor Jane Parker, University of Reading, said, “Vision research has wavelength, hearing research has frequency—both can be measured and assessed by instruments. But what about smell? We don’t currently have a way to measure or accurately predict the odor of a molecule, based on its molecular structure.”

“You can get so far with current knowledge of the molecular structure, but eventually you are faced with numerous exceptions where the odor and structure don’t match. This is what has stumped previous models of olfaction. The fantastic thing about this new ML-generated model is that it correctly predicts the odor of those exceptions.”

Most deep learning models are loosely based on the brain’s inner workings. AI agents are increasingly endowed with human-like decision-making algorithms. The idea that machine intelligence could become sentient one day no longer seems like science fiction.

How could we tell if machine brains one day gained sentience? The answer may be based on our own brains.

A preprint paper authored by 19 neuroscientists, philosophers, and computer scientists, including Dr. Robert Long from the Center for AI Safety and Dr. Yoshua Bengio from the University of Montreal, argues that the neurobiology of consciousness may be our best bet. Rather than simply studying an AI agent’s behavior or responses—for example, during a chat—matching its responses to theories of human consciousness could provide a more objective ruler.

Apple may not be as flashy as other companies in adopting artificial intelligence features. Still, the iPhone already has a lot of smarts scattered throughout iOS.

Apple does not go out of its way to name-drop “artificial intelligence” or AI in a meaningful way, but the company isn’t avoiding the technology. Machine learning has become Apple’s catch-all term for its AI initiatives.

Apple uses artificial intelligence and machine learning in iOS in several noticeable ways. Here is a quick breakdown of where you’ll find it.

Rice University scientists are starting small as they begin to figure out how to build an artificial brain from the bottom up.

Electrical and computer engineer Jacob Robinson of Rice’s Brown School of Engineering and Celina Juliano, an assistant professor of molecular and cellular biology at the University of California, Davis, have won a $1 million Keck Foundation grant to advance the team’s synthetic neurobiology effort to define the connections between neurons and muscles that drive programmed behaviors in living animals.

To begin with, Robinson and his colleagues are putting their faith in a very small animal, the freshwater cnidarian Hydra vulgaris, a tiny tentacled creature that has long been a focus of study in the Robinson and Juliano labs. Because they are small, squishy and transparent, they’re easy to manipulate and measure through Robinson’s custom microfluidic platforms.

When Taylor Webb played around with GPT-3 in early 2022, he was blown away by what OpenAI’s large language model appeared to be able to do. Here was a neural network trained only to predict the next word in a block of text—a jumped-up autocomplete. And yet it gave correct answers to many of the abstract problems that Webb set for it—the kind of thing you’d find in an IQ test. “I was really shocked by its ability to solve these problems,” he says. “It completely upended everything I would have predicted.”

Webb is a psychologist at the University of California, Los Angeles, who studies the different ways people and computers solve abstract problems. He was used to building neural networks that had specific reasoning capabilities bolted on. But GPT-3 seemed to have learned them for free.

Google is the first Big Tech firm to publicly launch such a watermarking tool, after a group of companies pledged at the White House in July to develop them.

The tool, called SynthID, will initially be available only to users of Google’s AI image generator Imagen, which is hosted on Google Cloud’s machine learning platform Vertex. Users will be able to generate images using Imagen and then choose whether to add a watermark or not. The hope is that it could help people tell when AI-generated content is being passed off as real, or help protect copyright.