
There’s a movement afoot to counter the dystopian and apocalyptic narratives of artificial intelligence. Some people in the field are concerned that the frequent talk of AI as an existential risk to humanity is poisoning the public against the technology and are deliberately setting out more hopeful narratives. One such effort is a book that came out last fall called AI 2041: Ten Visions for Our Future.

The book is cowritten by Kai-Fu Lee, an AI expert who leads the venture capital firm Sinovation Ventures, and Chen Qiufan, a science fiction author known for his novel Waste Tide. It has an interesting format. Each chapter starts with a science fiction story depicting some aspect of AI in society in the year 2041 (such as deepfakes, self-driving cars, and AI-enhanced education), which is followed by an analysis section by Lee that talks about the technology in question and the trends today that may lead to that envisioned future. It’s not a utopian vision, but the stories generally show humanity grappling productively with the issues raised by ever-advancing AI.

IEEE Spectrum spoke to Lee about the book, focusing on the last few chapters, which take on the big issues of job displacement, the need for new economic models, and the search for meaning and happiness in an age of abundance. Lee argues that technologists need to give serious thought to such societal impacts, instead of thinking only about the technology.

One of the most tedious, daunting tasks for undergraduate assistants in university research labs involves looking for hours on end through a microscope at samples of material, trying to find monolayers.

These monolayers, less than 1/100,000th the width of a human hair, are highly sought for use in electronics and photonics because of their unique properties.

“Research labs hire armies of undergraduates to do nothing but look for monolayers,” says Jaime Cardenas, an assistant professor of optics at the University of Rochester. “It’s very tedious, and if you get tired, you might miss some of the monolayers or you might start making misidentifications.”
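Automating this hunt typically comes down to optical contrast: a single atomic layer dims reflected light by a small, characteristic fraction relative to the bare substrate. The snippet below is a minimal sketch of that idea; the function name, the contrast band, and the toy intensity values are all illustrative assumptions, not the Rochester group's actual method.

```python
import numpy as np

def find_monolayer_candidates(image, substrate_level, contrast_range=(0.02, 0.08)):
    """Flag pixels whose optical contrast against the bare substrate falls
    in the narrow band assumed here to be typical of a single atomic layer.

    image: 2-D array of normalized grayscale intensities (0..1).
    substrate_level: mean intensity of the bare substrate.
    contrast_range: (min, max) fractional contrast treated as monolayer-like.
    """
    # Optical contrast: fractional dimming relative to the substrate.
    contrast = (substrate_level - image) / substrate_level
    lo, hi = contrast_range
    return (contrast >= lo) & (contrast <= hi)

# Toy frame: substrate at 0.80, one flake dimmed by ~5% (monolayer-like),
# one dimmed by ~20% (too thick to be a monolayer).
frame = np.full((6, 6), 0.80)
frame[1:3, 1:3] = 0.76   # ~5% contrast
frame[4:6, 4:6] = 0.64   # ~20% contrast
mask = find_monolayer_candidates(frame, substrate_level=0.80)
print(int(mask.sum()))   # 4 pixels flagged as monolayer candidates
```

A real pipeline would add flake segmentation and calibration against known samples, but the core signal a classifier learns is this kind of contrast band.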

“I think it is possible,” Musk, 50, recently told Insider. “Yes, we could download the things that we believe make ourselves so unique. Now, of course, if you’re not in that body anymore, that is definitely going to be a difference, but as far as preserving our memories, our personality, I think we could do that.”

By Musk’s account, such technology will be a gradual evolution from today’s forms of computer memory. “Our memories are stored in our phones and computers with pictures and video,” he said. “Computers and phones amplify our ability to communicate, enabling us to do things that would have been considered magical … We’ve already amplified our human brains massively with computers.”

The concept of prolonging human life by downloading consciousnesses into synthetic bodies has been a fixture of science fiction for decades; the Dune universe, for example, terms such beings "cymeks." Some experts today believe that "mind uploading" technology could, in fact, be feasible one day, but the timeline is incredibly unclear.

Google’s AI division is creating digital versions of normally hand-drawn maps of electricity cables, in a move that could benefit the global utility industry.

The firm’s DeepMind engineers have partnered with UK Power Networks, which delivers electricity across London, the East, and the South East of England, to create digital versions of maps covering more than 180,000 km of electricity cables.

The work involves new image recognition software scanning thousands of maps – some of which date back decades – and automatically remastering them into a digital format for future use.

Serhii Pospielov is Lead Software Engineer at Exadel. Serhii has more than a decade of developer and engineering experience. Prior to joining Exadel he was a game developer at Mayplay Games. He holds a Master’s Degree in Computer Software Engineering from Donetsk National Technical University.


These new, more diverse approaches to training AI let it adapt to different play styles, making it a better teammate.


DeepMind researchers have been using the chaotic cooking game Overcooked to teach AI to better collaborate with humans. MIT researchers have followed suit, gifting their AI the ability to distinguish between a diverse range of play styles. What’s remarkable is that it’s working: the human players involved actually preferred playing with the AI.

Have you ever been dropped into a game with strangers only to find their play style totally upends your own? There’s a reason we’re better at gaming with people we know: they get us. As a team, you make a point of complementing each other’s play styles so you can cover all bases, and win.
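One simple way an agent can "get" a new partner is to keep a belief over a handful of partner types, update it from observed actions, and then pick the task the partner is least likely to cover. The sketch below illustrates that idea in miniature; the style names, action probabilities, and function names are invented for illustration and are not the MIT or DeepMind systems.

```python
# Hypothetical partner play styles and the kitchen actions each favors.
STYLE_ACTION_PROBS = {
    "chopper": {"chop": 0.7, "carry": 0.2, "serve": 0.1},
    "runner":  {"chop": 0.1, "carry": 0.7, "serve": 0.2},
    "server":  {"chop": 0.1, "carry": 0.2, "serve": 0.7},
}

def infer_style(observed_actions):
    """Bayesian-style update over play styles from the partner's observed
    actions (uniform prior); returns the most likely style."""
    posterior = {style: 1.0 for style in STYLE_ACTION_PROBS}
    for action in observed_actions:
        for style, probs in STYLE_ACTION_PROBS.items():
            posterior[style] *= probs.get(action, 1e-6)
    return max(posterior, key=posterior.get)

def complementary_action(partner_style):
    """Cover all bases: take on the task the partner is least likely to do."""
    probs = STYLE_ACTION_PROBS[partner_style]
    return min(probs, key=probs.get)

style = infer_style(["chop", "chop", "carry", "chop"])
print(style, complementary_action(style))  # chopper serve
```

Real Overcooked agents learn these partner models from gameplay rather than a hand-written table, but the adapt-then-complement loop is the same basic shape.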

By making remarkable breakthroughs in a number of fields, unlocking new approaches to science, and accelerating the pace of science and innovation.


In 2020, Google’s AI team DeepMind announced that its algorithm, AlphaFold, had solved the protein-folding problem. At first, this stunning breakthrough was met with excitement from most scientists, who are always ready to test a new tool, and amusement from others. After all, wasn’t this the same company whose algorithm AlphaGo had defeated the world champion in the Chinese strategy game Go just a few years before? Mastering a game more complex than chess, difficult as that is, felt trivial compared to the protein-folding problem. But AlphaFold proved its scientific mettle by sweeping an annual competition in which teams of biologists predict the structure of proteins based only on their genetic code. The algorithm far outpaced its human rivals, posting scores that predicted the final shape to within an angstrom, roughly the width of a single atom. Soon after, AlphaFold passed its first real-world test by correctly predicting the shape of the SARS-CoV-2 ‘spike’ protein, the virus’s conspicuous surface protein that is targeted by vaccines.

The success of AlphaFold soon became impossible to ignore, and scientists began trying out the algorithm in their labs. By the end of 2021, Science magazine had crowned AI-powered protein structure prediction its “Breakthrough of the Year.” H. Holden Thorp, biochemist and editor-in-chief of Science, wrote in an editorial, “The breakthrough in protein-folding is one of the greatest ever in terms of both the scientific achievement and the enabling of future research.” Today, AlphaFold’s predictions are so accurate that the protein-folding problem is considered solved after more than 70 years of searching. And while the protein-folding problem may be the highest-profile achievement of AI in science to date, artificial intelligence is quietly making discoveries in a number of scientific fields.

By turbocharging the discovery process and providing scientists with new investigative tools, AI is also transforming how science is done. The technology upgrades research mainstays like microscopes and genome sequencers, adding new technical capacities to the instruments and making them more powerful. AI-powered drug design and gravitational-wave detectors offer scientists new tools to probe and control the natural world. Off the lab bench, AI can also deploy advanced simulation capabilities and reasoning systems to develop real-world models and test hypotheses against them. With manifold impacts stretching the length of the scientific method, AI is ushering in a scientific revolution through groundbreaking discoveries, novel techniques and augmented tools, and automated methods that advance the speed and accuracy of the scientific process.


SAN FRANCISCO — Google engineer Blake Lemoine opened his laptop to the interface for LaMDA, Google’s artificially intelligent chatbot generator, and began to type.

“Hi LaMDA, this is Blake Lemoine …,” he wrote into the chat screen, which looked like a desktop version of Apple’s iMessage, down to the Arctic blue text bubbles. LaMDA, short for Language Model for Dialogue Applications, is Google’s system for building chatbots based on its most advanced large language models, so called because it mimics speech by ingesting trillions of words from the internet.

“If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a 7-year-old, 8-year-old kid that happens to know physics,” said Lemoine, 41.

Galicia is, according to the authors of antiquity, one of the world’s first matriarchal societies. AI-LALELO is a homage to the women who built the country and managed to preserve our own tradition, language, and culture to this day. Roots and modernity merge in a new sound: a song based on the typical traditional Galician cantigas, but with a futuristic touch thanks to artificial intelligence, and with a valuable message for the generations to come: only we can keep our identity as a people alive.

Project:
Ana María Prieto

Directed by:
Joel Cava

Produced by: