
Since ChatGPT burst onto the scene in late 2022, generative artificial intelligence (GenAI) models have been vying for the lead, with the U.S. and China as hotbeds for the technology.

GenAI tools can create images, videos, or written works, as well as answer questions or handle online tasks, from simple prompts.

These AI assistants stand out for their popularity and sophistication.

Japanese tech startup Sakana AI has written the first AI-generated peer-reviewed scientific paper.

Its AI program, AI Scientist-v2, successfully passed a peer review process, and the paper was accepted at a workshop during the International Conference on Learning Representations (ICLR), a major event in AI.

The AI-generated paper underwent the same review process as human submissions, with Sakana AI collaborating with the University of British Columbia and the University of Oxford.

The agricultural sector in South Africa is undergoing a transformation with the introduction of AI-powered harvesting robots. These advanced machines are set to revolutionize farming by increasing efficiency, reducing labor costs, and ensuring better crop yields. With the growing challenges of climate change, labor shortages, and the need for sustainable farming, AI-driven technology is emerging as a critical solution for modern agriculture.

Artificial intelligence has become a vital tool in various industries, and agriculture is no exception. AI-powered robots are designed to perform labor-intensive tasks such as planting, watering, monitoring crop health, and harvesting. These machines utilize machine learning, computer vision, and sensor technology to identify ripe crops, pick them with precision, and minimize waste.

In South Africa, where agricultural labor shortages and rising costs have posed challenges to farmers, AI-driven automation is proving to be a game-changer. With an estimated 8.5% of the country’s workforce employed in agriculture, technological advancements can significantly improve productivity while alleviating labor constraints.

In this insightful conversation with OpenAI’s CPO Kevin Weil, we discuss the rapid acceleration of AI and its implications. Kevin makes the shocking prediction that coding will be fully automated THIS YEAR, not by 2027, as others suggest. He explains how OpenAI’s models are already ranking among the world’s top programmers and shares his thoughts on Deep Research, GPT-4.5’s human-like qualities, the future of jobs, and the timeline for GPT-5. Don’t miss Kevin’s billion-dollar startup idea and his vision for how AI will transform education and democratize software creation.

00:00 — Summary
01:21 — Introduction
03:20 — Discussion on OpenAI being both a research and product company
11:05 — Timeline for GPT-5
11:38 — AI model commoditization and maintaining competitive advantage
15:09 — Deep Research capabilities
24:22 — Coding automation prediction: THIS YEAR
30:05 — AI in creative work and design
36:43 — Future of programming and engineers
38:32 — Will AI create new job categories?
40:58 — Billion-dollar AI startup ideas
46:27 — Voice interfaces and robotics
49:28 — Closing thoughts

For decades, the Turing Test was considered the ultimate benchmark to determine whether computers could match human intelligence. Created in 1950, the “imitation game”, as Alan Turing called it, required a machine to carry out a text-based chat in a way that was indistinguishable from a human. It was thought that any machine able to pass the Turing Test would be capable of demonstrating reasoning, autonomy, and maybe even consciousness – meaning it could be considered human-level artificial intelligence, also known as artificial general intelligence (AGI).

The arrival of ChatGPT ruined this notion, as it was able to convincingly pass the test through what was essentially an advanced form of pattern recognition. It could imitate, but not replicate.

Last week, a new AI agent called Manus once again tested our understanding of AGI. The Chinese researchers behind it describe it as the “world’s first fully autonomous AI”, able to perform complex tasks like booking holidays, buying property or creating podcasts – without any human guidance. Yichao Ji, who led its development at the Wuhan-based startup Butterfly Effect, says it “bridges the gap between conception and execution” and is the “next paradigm” for AI.

The best answer will be unsettling to both the hard skeptics of AI and the true believers.

Let’s take a step back. What exactly is reasoning, anyway?

AI companies like OpenAI are using the term reasoning to mean that their models break down a problem into smaller problems, which they tackle step by step, ultimately arriving at a better solution as a result.
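That decomposition idea can be shown with a toy sketch. This is purely illustrative of "break the problem into sub-steps and record each one"; it is not how any production model actually implements reasoning.

```python
# Toy illustration of "reasoning" as step-by-step decomposition.
# Purely illustrative -- no vendor's model works this way internally.

def solve_directly(a: int, b: int, c: int) -> int:
    # One-shot answer: no visible intermediate work.
    return (a + b) * c

def solve_step_by_step(a: int, b: int, c: int):
    # Break the problem into sub-problems and record each step,
    # analogous to a model emitting intermediate "thoughts".
    steps = []
    subtotal = a + b
    steps.append(f"Step 1: add {a} and {b} -> {subtotal}")
    total = subtotal * c
    steps.append(f"Step 2: multiply {subtotal} by {c} -> {total}")
    return total, steps

answer, trace = solve_step_by_step(2, 3, 4)  # answer is 20, with a two-step trace
```

The claim made by these companies is essentially that the second style, with explicit intermediate steps, reaches better final answers on hard problems than the one-shot style.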

The mergers of massive black hole binaries could generate rich electromagnetic emissions, which allow us to probe the environments surrounding these massive black holes and gain deeper insights into high-energy astrophysics. However, due to the short timescale of binary mergers, it is crucial to predict the time of the merger in advance to devise detailed observational plans. The overwhelming noise and slow accumulation of the signal-to-noise ratio in the inspiral phase make this task particularly challenging. To address this issue, we propose a novel deep neural denoising network, capable of denoising a 30-day inspiral-phase signal. Following the denoising process, we perform detection and merger-time prediction based on the denoised signals.
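The pipeline described in the abstract (denoise the inspiral signal, then work from the denoised version) can be mimicked in miniature. In this sketch a simple moving-average filter stands in for the deep denoising network, and a synthetic chirp stands in for the inspiral waveform; neither is the authors' actual model or data.

```python
import numpy as np

# Miniature stand-in for the paper's pipeline: a noisy inspiral-like
# "chirp" is denoised, and the error before/after is compared.
# A moving-average filter substitutes for the deep denoising network.
rng = np.random.default_rng(0)
fs, dur = 2048, 1.0                            # sample rate (Hz), duration (s)
t = np.arange(int(fs * dur)) / fs
clean = np.sin(2 * np.pi * (5.0 + 15.0 * t) * t)   # toy chirp, rising frequency
noisy = clean + rng.normal(0.0, 1.0, t.size)       # signal buried in noise

window = 15                                    # short smoothing window
kernel = np.ones(window) / window
denoised = np.convolve(noisy, kernel, mode="same")

mse_noisy = np.mean((noisy - clean) ** 2)      # error of the raw signal
mse_denoised = np.mean((denoised - clean) ** 2)  # error after denoising
```

Even this crude filter cuts the mean-squared error substantially; the paper's point is that a learned denoiser does far better, well enough to detect the signal and predict the merger time weeks in advance.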

Contemplate a future where tiny, energy-efficient brain-like networks guide autonomous machines—like drones or robots—through complex environments. To make this a reality, scientists are developing ultra-compact communication systems where light, rather than electricity, carries information between nanoscale devices.

In this study, researchers achieved a breakthrough by enabling direct on-chip communication between tiny light-sensing devices called InP nanowire photodiodes on a silicon chip. This means that light can now travel efficiently from one nanoscale component to another, creating a faster and more energy-efficient network. The system proved robust, handling signals with up to 5-bit resolution, which is similar to the information-processing levels in biological neural networks. Remarkably, it operates with minimal energy—just 0.5 microwatts, which is lower than what conventional hardware needs.

The optical signals require only femtojoules of energy (a quadrillionth of a joule!) and allow one emitter to communicate with hundreds of other nodes simultaneously. This efficient, scalable design meets the requirements for mimicking biological neural activity, especially in tasks like autonomous navigation. In essence, this research moves us closer to creating compact, light-powered neural networks that could one day drive intelligent machines, all while saving space and energy.
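As a back-of-envelope check on the reported figures (illustrative arithmetic only; the paper's exact per-pulse energy budget is not given here):

```python
# Back-of-envelope arithmetic from the reported specs.
# Illustrative only -- not taken from the paper itself.

def distinguishable_levels(bits: int) -> int:
    # n-bit resolution means 2**n distinguishable signal levels.
    return 2 ** bits

def energy_joules(power_watts: float, seconds: float) -> float:
    # energy = power x time
    return power_watts * seconds

levels = distinguishable_levels(5)       # 5-bit resolution -> 32 levels
budget = energy_joules(0.5e-6, 1.0)      # 0.5 uW for one second -> 0.5 uJ
```

So a 5-bit signal distinguishes 32 levels, and a full second of operation at 0.5 microwatts consumes only half a microjoule, which is why the comparison with conventional hardware favors the optical approach.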

By examining how an AI model transcribes audio into text, the researchers behind the study could map brain activity during conversation more accurately than with traditional models that encode specific features of language structure, such as phonemes (the simple sounds that make up words) and parts of speech (nouns, verbs and adjectives).

The model used in the study, called Whisper, instead takes audio files and their text transcripts, which are used as training data to map the audio to the text. It then uses the statistics of that mapping to “learn” to predict text from new audio files that it hasn’t previously heard.
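The idea of "learning the statistics of a mapping" from paired examples can be shown with a drastically simplified analogue. Here a nearest-centroid rule over made-up "audio feature" vectors stands in for Whisper; the real model is a large transformer, and these feature vectors and tokens are invented for the sketch.

```python
import numpy as np

# Toy analogue of learning a statistical audio -> text mapping.
# Whisper itself is a large transformer; here a nearest-centroid rule
# over invented "audio feature" vectors stands in for it.
rng = np.random.default_rng(1)

# Training data: noisy feature vectors paired with text tokens.
true_centroids = {"hello": np.array([1.0, 0.0]), "world": np.array([0.0, 1.0])}
train = [(c + rng.normal(0.0, 0.1, 2), word)
         for word, c in true_centroids.items() for _ in range(20)]

# "Learn" the mapping: average the features observed for each token.
learned = {word: np.mean([x for x, w in train if w == word], axis=0)
           for word in true_centroids}

def predict(features: np.ndarray) -> str:
    # Map new, previously unheard features to the nearest learned token.
    return min(learned, key=lambda w: np.linalg.norm(features - learned[w]))

print(predict(np.array([0.9, 0.1])))  # prints: hello
```

The parallel is loose but real: like Whisper, the toy model never encodes phonemes or parts of speech explicitly; it only exploits statistical regularities in the paired training data to generalize to new inputs.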