
“For AI to be motivated towards a goal, it must know what it wants.”

The number of possible board configurations in a game of Go exceeds the number of atoms in the known universe, but it is still a finite number. In the real world, there are infinitely many possibilities for what might happen next, and uncertainty is rampant. How realistic, then, is AGI?
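
As a rough back-of-the-envelope check of that comparison (the 3^361 upper bound on 19×19 board configurations and the ~10^80 atom estimate are standard ballpark figures, not numbers taken from the article):

```python
import math

# Crude upper bound on 19x19 Go board configurations:
# each of the 361 intersections is empty, black, or white.
go_log10 = 361 * math.log10(3)   # ~172, i.e. roughly 10^172 configurations

# Commonly cited estimate for atoms in the observable universe: ~10^80.
atoms_log10 = 80

print(f"Go configurations: ~10^{go_log10:.0f}")
print(f"Atoms in the observable universe: ~10^{atoms_log10}")
print(f"Go exceeds the atom count by a factor of ~10^{go_log10 - atoms_log10:.0f}")
```

However vast, the game tree is finite and enumerable in principle; the open-ended real world is not, which is the contrast the argument rests on.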

A recent research paper published in Frontiers in Ecology and Evolution explores the obstacles to AGI. Biological systems with degrees of general intelligence, from humble microbes to the humans reading this, are capable of improvising to meet their goals. What prevents AI from improvising?

Data serves as the foundation of today’s biotechnology and pharmaceutical industries, and that foundation keeps expanding. “The appreciation of the value of data and need for quality data has grown in recent years,” says Anastasia Christianson, PhD, vice president and global head of AI, machine learning and data at Pfizer. She notes that the concept of FAIR data (Findable, Accessible, Interoperable, and Reusable) is becoming more widely accepted and more closely achieved.

Part of the transition in data use arises almost philosophically. “There has been a cultural shift or mindset change from data management for the purpose of storage and archiving to data management for the purpose of data analysis and reuse,” Christianson explains. “This is probably the most significant advance. The exponential growth of analytics capabilities and artificial intelligence have probably raised both the expectations for and appreciation of the value of data and the need for good data management and data quality.”

In a paper published in Nature Photonics, researchers from the University of Oxford, along with collaborators from the Universities of Muenster, Heidelberg, and Exeter, report on their development of integrated photonic-electronic hardware capable of processing three-dimensional (3D) data, substantially boosting data processing parallelism for AI tasks.

Conventional computer chip processing efficiency doubles every 18 months, but the computing power required by modern AI tasks is currently doubling around every 3.5 months. This means that new computing paradigms are urgently needed to cope with the rising demand.
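
To make that mismatch concrete, here is a minimal Python sketch that compounds the two doubling times quoted above (the 24-month horizon is an arbitrary choice for illustration):

```python
# Growth factor after t months with doubling time d: 2 ** (t / d).
def growth(months: float, doubling_months: float) -> float:
    return 2 ** (months / doubling_months)

horizon = 24                       # months, chosen only for illustration
chip = growth(horizon, 18)         # conventional chip efficiency: ~2.5x
demand = growth(horizon, 3.5)      # AI compute demand: ~116x

print(f"Chip efficiency after {horizon} months: ~{chip:.1f}x")
print(f"AI compute demand after {horizon} months: ~{demand:.0f}x")
print(f"Demand outgrows chip efficiency by ~{demand / chip:.0f}x over this period")
```

Over just two years the demand curve pulls ahead by well over an order of magnitude, which is the gap new computing paradigms would need to close.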

One approach is to use light instead of electronics: this allows multiple calculations to be carried out in parallel, using different wavelengths to represent different sets of data. Indeed, in groundbreaking work published in the journal Nature in 2021, many of the same authors demonstrated an integrated photonic processing chip that could carry out matrix-vector multiplication (a crucial task for AI and machine learning applications) at speeds far outpacing the fastest electronic approaches. This work led to the founding of the photonic AI company Salience Labs, a spin-out from the University of Oxford.
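
Conceptually, the wavelength trick means several matrix-vector products can propagate through the same weight bank at once. A minimal NumPy sketch of the arithmetic (the channel count and dimensions are arbitrary; this mimics only the math, not the optics):

```python
import numpy as np

rng = np.random.default_rng(0)

n_wavelengths = 4          # parallel optical channels (illustrative count)
in_dim, out_dim = 8, 8

weights = rng.normal(size=(out_dim, in_dim))        # shared weight bank
inputs = rng.normal(size=(n_wavelengths, in_dim))   # one input vector per wavelength

# On an electronic chip this loop runs sequentially (or in batches);
# in the photonic scheme each wavelength channel propagates simultaneously.
outputs = np.stack([weights @ x for x in inputs])

# Equivalent batched form: all channels expressed as a single matrix product.
assert np.allclose(outputs, inputs @ weights.T)
print(outputs.shape)  # (4, 8): one output vector per wavelength
```

Because the per-wavelength products are independent, adding channels multiplies throughput without changing the underlying multiply-accumulate operation.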

Summary: A new study delves into the enigmatic realm of deep neural networks, finding that while these models can identify objects much as human sensory systems do, their recognition strategies diverge from human perception. When prompted to generate stimuli similar to a given input, the networks often produced unrecognizable or distorted images and sounds.

This indicates that neural networks develop their own distinct “invariances” that differ starkly from human perceptual patterns. The research offers insights into how to evaluate models intended to mimic human sensory perception.
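
The “generate stimuli similar to a given input” step is, in essence, input optimization against the model’s own internal representation. A minimal PyTorch sketch of that idea, using a throwaway toy network rather than the models or stimuli from the study:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Placeholder network standing in for a trained recognition model.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(4), nn.Flatten(),
    nn.Linear(32 * 16, 10),
)
model.eval()
for p in model.parameters():
    p.requires_grad_(False)   # only the synthesized input is optimized

def activations(x: torch.Tensor) -> torch.Tensor:
    # Representation just before the final classifier layer.
    return model[:-1](x)

reference = torch.rand(1, 3, 32, 32)           # stands in for a natural image
target = activations(reference).detach()

synthetic = torch.rand(1, 3, 32, 32, requires_grad=True)  # start from noise
optimizer = torch.optim.Adam([synthetic], lr=0.05)

for step in range(200):
    optimizer.zero_grad()
    loss = torch.nn.functional.mse_loss(activations(synthetic), target)
    loss.backward()
    optimizer.step()

print(f"final activation mismatch: {loss.item():.4f}")
```

Because the optimization constrains only the network’s activations, nothing forces the synthesized input to look natural to a human observer, which is exactly the mismatch the study reports.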

Rumors of AGI in 2025 are running wild.

Screen Cap It!


Late last year, around the time ChatGPT became a global sensation, the engineers at OpenAI began working on a new artificial intelligence model, codenamed Arrakis.

Although OpenAI was preparing to boost ChatGPT with a different model, now known as GPT-4, which it had completed earlier in the year, the upcoming Arrakis model would allow the company to run the chatbot less expensively. Success with Arrakis would also help OpenAI show Microsoft how fast it could create successive large language models, which would be valuable as the two firms finished negotiating a $10 billion investment and product deal.

But by the middle of 2023, OpenAI had scrapped the Arrakis launch after the model didn’t run as efficiently as the company expected, according to people with knowledge of the situation.