
Now it's building one that's even bigger and even more sophisticated.

Nearly five years ago, a little-known company approached Microsoft with a special request: to assemble computing horsepower at a scale Microsoft had never attempted before. Microsoft then spent millions of dollars putting together tens of thousands of powerful chips to build a supercomputer. OpenAI used it to train its large language model, GPT, and the rest, as they say, is history.

Microsoft is no stranger to building artificial intelligence (AI) models that help users work more efficiently. The automatic spell checker that has helped millions of users is an example of an AI model trained on language.



How Microsoft put together a supercomputer for OpenAI.

How will superhuman artificial intelligence (AI) affect human decision-making? And what will be the mechanisms behind this effect? We address these questions in a domain where AI already exceeds human performance, analyzing more than 5.8 million move decisions made by professional Go players over the past 71 years (1950 to 2021). To address the first question, we use a superhuman AI program to estimate the quality of human decisions across time, generating 58 billion counterfactual game patterns and comparing the win rates of actual human decisions with those of counterfactual AI decisions. We find that humans began to make significantly better decisions following the advent of superhuman AI. We then examine human players’ strategies across time and find that novel decisions (i.e., previously unobserved moves) occurred more frequently and became associated with higher decision quality after the advent of superhuman AI. Our findings suggest that the development of superhuman AI programs may have prompted human players to break away from traditional strategies and induced them to explore novel moves, which in turn may have improved their decision-making.
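The decision-quality metric described above boils down to a win-rate gap: for each human move, a superhuman engine evaluates both the move actually played and the move the engine itself prefers, and the difference measures how much win probability the human gave up. A minimal sketch of that comparison, where `engine_win_rate` is a hypothetical stand-in for an engine's evaluation (not the authors' actual code):

```python
# Sketch of the win-rate comparison described in the abstract.
# `engine_win_rate(position, move)` is a hypothetical interface to a
# superhuman engine's estimated win probability after playing `move`.

def decision_quality(engine_win_rate, position, human_move, ai_move):
    """Win-rate gap between the human's actual move and the AI's preferred move.

    0.0 means the human matched the engine's choice; negative values
    quantify how much win probability the human move sacrificed.
    """
    return engine_win_rate(position, human_move) - engine_win_rate(position, ai_move)

# Toy usage with a made-up evaluation table:
table = {("p1", "D4"): 0.52, ("p1", "Q16"): 0.55}
wr = lambda pos, move: table[(pos, move)]
print(round(decision_quality(wr, "p1", "D4", "Q16"), 3))  # -0.03
```

Aggregating this gap over millions of recorded moves per year is what lets the study track decision quality across decades.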

The fruit fly larva connectome showed circuit features that were strikingly reminiscent of prominent and powerful machine learning architectures. “Some of the architectural features observed in the Drosophila larval brain, including multilayer shortcuts and prominent nested recurrent loops, are found in state-of-the-art artificial neural networks, where they can compensate for a lack of network depth and support arbitrary, task-dependent computations,” they wrote. The team expects continued study will reveal even more computational principles and potentially inspire new artificial intelligence systems. “What we learned about code for fruit flies will have implications for the code for humans,” Vogelstein said. “That’s what we want to understand—how to write a program that leads to a human brain network.”
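The "multilayer shortcuts" the researchers highlight correspond to what deep-learning practitioners call skip (residual) connections: a layer's input is added to its output, so signals can bypass intermediate processing and shallow networks can compute things that would otherwise require extra depth. A minimal NumPy sketch of the idea (an illustration of the architectural motif, not a model of the connectome):

```python
import numpy as np

def layer(x, w):
    """One dense layer with a ReLU nonlinearity."""
    return np.maximum(0.0, w @ x)

def forward_with_shortcut(x, w1, w2):
    """Two stacked layers plus a shortcut that lets the input skip
    directly to the output, as in residual networks."""
    h = layer(x, w1)
    return layer(h, w2) + x  # the "multilayer shortcut"

rng = np.random.default_rng(0)
x = rng.normal(size=3)
w1, w2 = rng.normal(size=(3, 3)), rng.normal(size=(3, 3))
y = forward_with_shortcut(x, w1, w2)
print(y.shape)  # (3,)
```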

While attending an event called AI in Focus — Digital Kickoff, Chief Technology Officer at Microsoft Germany, Andreas Braun, spoke about GPT-4 and its upcoming unveiling (via Heise). According to Braun, the next iteration of GPT will be shown off next week and it will allow users to create new types of AI-generated content.

We will introduce GPT-4 next week, where we have multimodal models that will offer completely different possibilities – for example, videos.

A group of researchers recently detailed a new concept called organoid intelligence, aimed at developing a new generation of biocomputers. They want to harness advances in growing human brain cells in vitro to give the computers and smart devices of the future a superior form of intelligence. The technology promises to be far more powerful and efficient than any form of artificial intelligence as we know it.

This notion of organoid intelligence is described in a paper outlining a roadmap for developing the technology, published in the journal Frontiers in Science by numerous scientists, mainly from Johns Hopkins University in Baltimore. According to them, work on cerebral organoids, derived from human stem cells, should make it possible in the relatively near future to produce entities endowed with memory and a genuine capacity for learning. Organoids are miniature organs grown in vitro. The term organoid intelligence (OI) encompasses all these developments, leading to a form of biological computing — or biocomputing — that leverages neurons bred in a lab. All of which is enough to make the likes of ChatGPT seem outdated already.

Complex interfaces could eventually be networked, with brain organoids connected to sensory organoids such as retinal organoids. This could, for example, lead to new therapeutic applications.

We don’t learn by brute force repetition. AI shouldn’t either.


Despite impressive progress, today’s AI models are very inefficient learners, taking huge amounts of time and data to solve problems humans pick up almost instantaneously. A new approach could drastically speed things up by getting AI to read instruction manuals before attempting a challenge.

One of the most promising approaches to creating AI that can solve a diverse range of problems is reinforcement learning, which involves setting a goal and rewarding the AI for taking actions that work towards that goal. This is the approach behind most of the major breakthroughs in game-playing AI, such as DeepMind’s AlphaGo.

As powerful as the technique is, it essentially relies on trial and error to find an effective strategy. This means these algorithms can spend the equivalent of several years blundering through video and board games until they hit on a winning formula.
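The trial-and-error loop at the heart of reinforcement learning can be sketched in a few lines: the agent tries actions, observes rewards, and nudges its value estimates toward what actually happened. A minimal tabular Q-learning example on a toy one-dimensional walk (an illustration of the general technique, not AlphaGo's implementation):

```python
import random

# Toy environment: states 0..4 on a line; reaching state 4 pays reward 1.
# Actions: 0 = step left, 1 = step right.
N_STATES, GOAL = 5, 4

def step(state, action):
    nxt = max(0, min(GOAL, state + (1 if action == 1 else -1)))
    reward = 1.0 if nxt == GOAL else 0.0
    return nxt, reward, nxt == GOAL

# Tabular Q-learning: pure trial and error, no instruction manual.
q = [[0.0, 0.0] for _ in range(N_STATES)]
alpha, gamma, epsilon = 0.5, 0.9, 0.1  # learning rate, discount, exploration
random.seed(0)

for _ in range(500):                   # episodes of "blundering through"
    s, done = 0, False
    while not done:
        if random.random() < epsilon:  # occasionally explore at random
            a = random.randrange(2)
        else:                          # otherwise act greedily
            a = max((0, 1), key=lambda i: q[s][i])
        s2, r, done = step(s, a)
        # Nudge the estimate toward reward plus discounted future value.
        q[s][a] += alpha * (r + gamma * max(q[s2]) * (not done) - q[s][a])
        s = s2

# After training, the greedy policy should be "go right" in every state.
policy = [max((0, 1), key=lambda i: q[s][i]) for s in range(N_STATES - 1)]
print(policy)  # [1, 1, 1, 1]
```

Even on this trivial task the agent needs hundreds of episodes to settle on the obvious strategy, which is exactly the inefficiency the manual-reading approach described above aims to reduce.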