Blake Lemoine reached his conclusion after conversing since last fall with LaMDA, Google’s artificially intelligent chatbot generator, which he calls part of a “hive mind.” He was supposed to test whether his conversation partner used discriminatory language or hate speech.

As he and LaMDA messaged each other recently about religion, the AI talked about “personhood” and “rights,” he told The Washington Post.

It was just one of many startling “talks” Lemoine has had with LaMDA. He has linked on Twitter to one of them: a series of chat sessions with some editing (which is marked).

“Throughout our history, we’ve always had to find ways to stay ahead,” Kim told Rest of World. “Automation is the next step in that process.”

Speefox’s factory is 75% automated, representing South Korea’s continued push away from human labor. Part of that drive is labor costs: South Korea’s minimum wage has climbed, rising 5% just this year.

But the most recent impetus is legal liability for worker death or injury. In January, a law came into effect called the Serious Disasters Punishment Act, which says, effectively, that if workers die or sustain serious injuries on the job, and courts determine that the company neglected safety standards, the CEO or high-ranking managers could be fined or go to prison.

A team of researchers affiliated with multiple institutions in the U.S., including Google Quantum AI, and a colleague in Australia, has developed a theory suggesting that quantum computers should be exponentially faster on some learning tasks than classical machines. In their paper published in the journal Science, the group describes their theory and the results of testing it on Google’s Sycamore quantum computer. Vedran Dunjko of Leiden University has published a Perspective piece in the same journal issue outlining the idea behind combining quantum computing with machine learning to provide a new level of computer-based learning systems.

Machine learning is a technique by which computers trained on datasets make informed guesses about new data. Quantum computing, meanwhile, uses subatomic particles to represent qubits as a means of running some applications many times faster than is possible with classical computers. In this new effort, the researchers considered the idea of running machine-learning applications on quantum computers, possibly making them better at learning and thus more useful.
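To make the “informed guesses about new data” idea concrete, here is a toy illustration (our own, not from the paper): a minimal one-nearest-neighbor classifier that is “trained” simply by storing a labeled dataset, then labels a new point by finding its closest training example.

```python
# Toy sketch of machine learning's core idea: a model trained on a
# dataset makes informed guesses about data it has never seen.

def nearest_neighbor(train, query):
    """Return the label of the training point closest to the query."""
    best_label, best_dist = None, float("inf")
    for features, label in train:
        dist = sum((a - b) ** 2 for a, b in zip(features, query))
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label

# Two small clusters of 2-D points with labels.
train = [((0.0, 0.0), "low"), ((0.2, 0.1), "low"),
         ((1.0, 1.0), "high"), ((0.9, 1.2), "high")]

print(nearest_neighbor(train, (0.1, 0.0)))  # -> low
print(nearest_neighbor(train, (1.1, 0.9)))  # -> high
```

Real systems replace this brute-force lookup with learned statistical models, but the train-then-predict pattern is the same.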

To find out if the idea might be possible, and more importantly, if the results would be better than those achieved on classical computers, the researchers posed the problem in a novel way: they devised a task that would be learned via experiments repeated many times over. They then developed theories describing how a quantum system could be used to conduct such experiments and to learn from them. They were able to prove that a quantum system could do it, and do it much better than a classical system. In fact, they found that the number of experiments needed to learn a concept was four orders of magnitude lower than for classical systems. The researchers then built such a system, tested it on Google’s Sycamore quantum computer, and confirmed their theory.

Abstract: Superintelligence, the next phase beyond today’s narrow AI and tomorrow’s AGI, almost intrinsically evades our attempts at detailed comprehension. Yet very different perspectives on superintelligence exist today and have concrete influence on thinking about matters ranging from AGI architectures to technology regulation.
One paradigm considers superintelligences as resembling modern deep reinforcement learning systems, obsessively concerned with optimizing particular goal functions. Another considers superintelligences as open-ended, complex evolving systems, ongoingly balancing drives toward individuation and radical self-transcendence in a paraconsistent way. In this talk I will argue that the open-ended conception of superintelligence is both more desirable and more realistic, and will discuss how concrete work being done today on projects like OpenCog Hyperon, SingularityNET and Hypercycle potentially paves the way for a path through beneficial decentralized integrative AGI and on to open-ended superintelligence and ultimately the Singularity.

Bio: In May 2007, Goertzel spoke at a Google tech talk about his approach to creating artificial general intelligence. He defines intelligence as the ability to detect patterns in the world and in the agent itself, measurable in terms of emergent behavior of “achieving complex goals in complex environments”. A “baby-like” artificial intelligence is initialized, then trained as an agent in a simulated or virtual world such as Second Life to produce a more powerful intelligence. Knowledge is represented in a network whose nodes and links carry probabilistic truth values as well as “attention values”, with the attention values resembling the weights in a neural network. Several algorithms operate on this network, the central one being a combination of a probabilistic inference engine and a custom version of evolutionary programming.
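The knowledge representation described in the bio can be sketched roughly as follows. This is a simplified illustration under our own invented names, not OpenCog’s actual data structures or API: nodes and links each carry a probabilistic truth value plus an attention value, with attention playing a role loosely analogous to a weight in a neural network.

```python
# Simplified sketch (invented names, not OpenCog's real API) of a
# knowledge network whose nodes and links carry probabilistic truth
# values and attention values.

from dataclasses import dataclass

@dataclass
class Atom:
    name: str
    truth: float = 0.5       # probabilistic truth value in [0, 1]
    attention: float = 0.0   # how much processing the atom attracts

@dataclass
class Link:
    source: Atom
    target: Atom
    truth: float = 0.5
    attention: float = 0.0

def boost_attention(atoms, name, amount=0.1):
    """Crude stand-in for attention allocation: reward one atom."""
    for atom in atoms:
        if atom.name == name:
            atom.attention += amount

cat = Atom("cat", truth=0.9)
animal = Atom("animal", truth=0.95)
inheritance = Link(cat, animal, truth=0.9)  # "cat is an animal"
boost_attention([cat, animal], "cat")
print(cat.attention)  # -> 0.1
```

In the architecture Goertzel describes, separate algorithms (probabilistic inference, evolutionary programming) would then read and update these truth and attention values; the sketch above only shows the shape of the data they operate on.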

This talk is part of the ‘Stepping Into the Future’ conference. http://www.scifuture.org/open-ended-vs-closed-minded-concept…elligence/

Many thanks for tuning in!

The way she uses dots and strokes looks a bit like ones and zeroes.


Artificial intelligence is playing a huge role in the development of all kinds of technologies. It can be combined with deep learning techniques to do amazing things that have the potential to improve all our lives. Things like learning how to safely control nuclear fusion, or making delicious pizzas.

One of the many questions surrounding AI is its use in art. There’s no denying AI can have some amazing abilities when it comes to producing images. Nvidia’s GauGAN2, which can take words and turn them into photorealistic pictures, is one example of this. Another is Ubisoft’s ZooBuilder AI, a prototype for animating animals.

There’s a movement afoot to counter the dystopian and apocalyptic narratives of artificial intelligence. Some people in the field are concerned that the frequent talk of AI as an existential risk to humanity is poisoning the public against the technology and are deliberately setting out more hopeful narratives. One such effort is a book that came out last fall called AI 2041: Ten Visions for Our Future.

The book is cowritten by Kai-Fu Lee, an AI expert who leads the venture capital firm Sinovation Ventures, and Chen Qiufan, a science fiction author known for his novel Waste Tide. It has an interesting format. Each chapter starts with a science fiction story depicting some aspect of AI in society in the year 2041 (such as deepfakes, self-driving cars, and AI-enhanced education), which is followed by an analysis section by Lee that talks about the technology in question and the trends today that may lead to that envisioned future. It’s not a utopian vision, but the stories generally show humanity grappling productively with the issues raised by ever-advancing AI.

IEEE Spectrum spoke to Lee about the book, focusing on the last few chapters, which take on the big issues of job displacement, the need for new economic models, and the search for meaning and happiness in an age of abundance. Lee argues that technologists need to give serious thought to such societal impacts, instead of thinking only about the technology.

One of the most tedious, daunting tasks for undergraduate assistants in university research labs involves spending hours on end looking through a microscope at samples of material, trying to find monolayers.

These monolayers—less than 1/100,000th the width of a human hair—are highly sought after for use in electronics and photonics because of their unique properties.

“Research labs hire armies of undergraduates to do nothing but look for monolayers,” says Jaime Cardenas, an assistant professor of optics at the University of Rochester. “It’s very tedious, and if you get tired, you might miss some of the monolayers or you might start making misidentifications.”

“I think it is possible,” Musk, 50, recently told Insider. “Yes, we could download the things that we believe make ourselves so unique. Now, of course, if you’re not in that body anymore, that is definitely going to be a difference, but as far as preserving our memories, our personality, I think we could do that.”

By Musk’s account, such technology will be a gradual evolution from today’s forms of computer memory. “Our memories are stored in our phones and computers with pictures and video,” he said. “Computers and phones amplify our ability to communicate, enabling us to do things that would have been considered magical … We’ve already amplified our human brains massively with computers.”

The concept of prolonging human life by downloading consciousnesses into synthetic bodies has been a fixture of science fiction for decades, with the Dune science-fiction franchise terming such beings “cymeks.” Some experts today believe that “mind uploading” technology could, in fact, be feasible one day — but the timeline is incredibly unclear.