
How do we know when AI becomes conscious and deserves rights?

Machines becoming conscious, self-aware, and capable of feeling would mark an extraordinary threshold. We would have created not just life, but conscious beings.

There has already been massive debate about whether that will ever happen. While the discussion is largely about supra-human intelligence, that is not the same thing as consciousness.

Now the massive leaps in the quality of AI conversational bots are leading some to believe that we have passed that threshold and that the AI we have created is already sentient.

A new brain-inspired intelligent system can drive a car using only 19 control neurons!

Read the article:► https://medium.com/towards-artificial-intelligence/a-new-bra…d127107db9
Paper:► https://www.nature.com/articles/s42256-020-00237-3.epdf.
Watch MIT’s video:► https://www.youtube.com/watch?v=8KBOf7NJh4Y&feature=emb_titl…l=MITCSAIL
GitHub:► https://github.com/mlech26l/keras-ncp.
Colab tutorials:
The basics of Neural Circuit Policies:► https://colab.research.google.com/drive/1IvVXVSC7zZPo5w-PfL3…sp=sharing.
How to stack NCP with other types of layers:► https://colab.research.google.com/drive/1-mZunxqVkfZVBXNPG0k…sp=sharing.
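To give a feel for the idea, here is a minimal NumPy sketch of the two ingredients behind Neural Circuit Policies: structured sparse wiring between sensory, inter, command, and motor neurons, and leaky continuous-time neuron dynamics integrated with Euler steps. The layer sizes, fan-outs, and random weights here are illustrative assumptions, not the paper's actual 19-neuron wiring; the real implementation lives in the keras-ncp repository linked above.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative layer sizes -- NOT the paper's exact 19-neuron wiring.
n_inputs, n_inter, n_command, n_motor = 8, 12, 6, 1

def sparse_mask(n_src, n_dst, fanout, rng):
    """Connect each source unit to only `fanout` randomly chosen targets."""
    mask = np.zeros((n_src, n_dst))
    for i in range(n_src):
        mask[i, rng.choice(n_dst, size=fanout, replace=False)] = 1.0
    return mask

# Structured sparse wiring: sensory -> inter -> command -> motor
w_si = sparse_mask(n_inputs, n_inter, 4, rng) * rng.normal(size=(n_inputs, n_inter))
w_ic = sparse_mask(n_inter, n_command, 3, rng) * rng.normal(size=(n_inter, n_command))
w_cm = sparse_mask(n_command, n_motor, 1, rng) * rng.normal(size=(n_command, n_motor))

def step(x_inter, x_command, obs, dt=0.1, tau=1.0):
    """One Euler step of leaky continuous-time neuron dynamics."""
    x_inter = x_inter + dt * (-x_inter / tau + np.tanh(obs @ w_si))
    x_command = x_command + dt * (-x_command / tau + np.tanh(x_inter @ w_ic))
    motor = np.tanh(x_command @ w_cm)  # bounded control output, e.g. steering
    return x_inter, x_command, motor

x_i, x_c = np.zeros(n_inter), np.zeros(n_command)
for _ in range(50):  # drive the network with a constant dummy observation
    x_i, x_c, action = step(x_i, x_c, np.ones(n_inputs))
print(action)  # a single bounded control value
```

The point of the sketch is that the policy's capacity comes from a small number of recurrent units with structured sparse connectivity rather than from large dense layers.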

Follow me for more AI content:
Instagram: https://www.instagram.com/whats_ai/
LinkedIn: https://www.linkedin.com/in/whats-ai/
Twitter: https://twitter.com/Whats_AI
Facebook: https://www.facebook.com/whats.artificial.intelligence/
Medium: https://medium.com/@whats_ai.

The best courses to start and progress in AI:
https://www.omologapps.com/whats-ai.

Join Our Discord channel, Learn AI Together:
https://discord.gg/learnaitogether.

Support me on Patreon:

Google Engineer On Leave After He Claims AI Program Has Gone Sentient

Blake Lemoine reached his conclusion after conversing since last fall with LaMDA, Google’s artificially intelligent chatbot generator, which he calls part of a “hive mind.” He was supposed to test whether his conversation partner used discriminatory language or hate speech.

As he and LaMDA messaged each other recently about religion, the AI talked about “personhood” and “rights,” he told The Washington Post.

It was just one of the many startling “talks” Lemoine has had with LaMDA. He has linked to one on Twitter: a series of chat sessions with some editing (which is marked).

Fearing lawsuits, factories rush to replace humans with robots in South Korea

“Throughout our history, we’ve always had to find ways to stay ahead,” Kim told Rest of World. “Automation is the next step in that process.”

Speefox’s factory is 75% automated, representing South Korea’s continued push away from human labor. Part of that drive is labor costs: South Korea’s minimum wage has climbed, rising 5% just this year.

But the most recent impetus is legal liability for worker death or injury. In January, the Serious Disasters Punishment Act came into effect; it says, effectively, that if workers die or sustain serious injuries on the job, and courts determine that the company neglected safety standards, the CEO or high-ranking managers can be fined or imprisoned.

Theory suggests quantum computers should be exponentially faster on some learning tasks than classical machines

A team of researchers affiliated with multiple institutions in the U.S., including Google Quantum AI, and a colleague in Australia has developed a theory suggesting that quantum computers should be exponentially faster on some learning tasks than classical machines. In their paper published in the journal Science, the group describes their theory and the results of testing it on Google’s Sycamore quantum computer. Vedran Dunjko of Leiden University has published a Perspective piece in the same journal issue outlining the idea behind combining quantum computing with machine learning to provide a new level of computer-based learning systems.

Machine learning is a system by which computers trained on datasets make informed guesses about new data. Quantum computing, meanwhile, uses sub-atomic particles to represent qubits as a means of running applications many times faster than is possible with classical computers. In this new effort, the researchers considered the idea of running machine-learning applications on quantum computers, possibly making them better at learning, and thus more useful.

To find out if the idea might be possible, and more importantly, if the results would be better than those achieved on classical computers, the researchers posed the problem in a novel way: they devised a task that would be learned via experiments repeated many times over. They then developed theories describing how a quantum system could be used to conduct such experiments and to learn from them. They were able to prove that a quantum system could do it, and do it much better than a classical one: the number of experiments needed to learn a concept was four orders of magnitude lower than for classical systems. The researchers then built such a system, tested it on Google’s Sycamore quantum computer, and confirmed their theory.

Ben Goertzel — Open Ended vs Closed Minded Conceptions of Superintelligence

Abstract: Superintelligence, the next phase beyond today’s narrow AI and tomorrow’s AGI, almost intrinsically evades our attempts at detailed comprehension. Yet very different perspectives on superintelligence exist today and have concrete influence on thinking about matters ranging from AGI architectures to technology regulation.
One paradigm considers superintelligences as resembling modern deep reinforcement learning systems, obsessively concerned with optimizing particular goal functions. Another considers superintelligences as open-ended, complex evolving systems, ongoingly balancing drives toward individuation and radical self-transcendence in a paraconsistent way. In this talk I will argue that the open-ended conception of superintelligence is both more desirable and more realistic, and will discuss how concrete work being done today on projects like OpenCog Hyperon, SingularityNET and Hypercycle potentially paves the way for a path through beneficial decentralized integrative AGI and on to open-ended superintelligence and ultimately the Singularity.

Bio: In May 2007, Goertzel spoke at a Google tech talk about his approach to creating artificial general intelligence. He defines intelligence as the ability to detect patterns in the world and in the agent itself, measurable in terms of emergent behavior of “achieving complex goals in complex environments”. A “baby-like” artificial intelligence is initialized, then trained as an agent in a simulated or virtual world such as Second Life to produce a more powerful intelligence. Knowledge is represented in a network whose nodes and links carry probabilistic truth values as well as “attention values”, with the attention values resembling the weights in a neural network. Several algorithms operate on this network, the central one being a combination of a probabilistic inference engine and a custom version of evolutionary programming.
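The knowledge representation described in the bio can be sketched in a few lines of Python. This is a toy illustration of nodes and links carrying probabilistic truth values and attention values, not OpenCog's actual API; the atom names, field layout, and the deduction rule are simplifying assumptions.

```python
from dataclasses import dataclass

@dataclass
class TruthValue:
    strength: float    # probability-like estimate in [0, 1]
    confidence: float  # how much evidence backs the estimate

@dataclass
class Atom:
    name: str
    tv: TruthValue
    attention: float = 0.0  # plays a role analogous to a neural-net weight

@dataclass
class Link:
    source: Atom
    target: Atom
    tv: TruthValue

# A tiny knowledge fragment: "cat" inherits from "animal", which inherits
# from "living thing"
cat = Atom("cat", TruthValue(0.99, 0.9), attention=0.5)
animal = Atom("animal", TruthValue(0.99, 0.9), attention=0.2)
living = Atom("living thing", TruthValue(0.99, 0.9))
cat_isa_animal = Link(cat, animal, TruthValue(0.95, 0.8))
animal_isa_living = Link(animal, living, TruthValue(0.9, 0.7))

def deduce(ab: Link, bc: Link) -> TruthValue:
    """Toy deduction: chain two inheritance links, multiplying strengths
    and keeping the weaker confidence."""
    return TruthValue(ab.tv.strength * bc.tv.strength,
                      min(ab.tv.confidence, bc.tv.confidence))

cat_isa_living = deduce(cat_isa_animal, animal_isa_living)
print(cat_isa_living.strength, cat_isa_living.confidence)
```

The probabilistic inference engine mentioned in the bio applies many such rules over the whole network, with attention values steering which atoms get processed first.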

This talk is part of the ‘Stepping Into the Future’ conference. http://www.scifuture.org/open-ended-vs-closed-minded-concept…elligence/

Many thanks for tuning in!

Have any ideas about people to interview? Want to be notified about future events? Any comments about the STF series?
Please fill out this form: https://docs.google.com/forms/d/1mr9PIfq2ZYlQsXRIn5BcLH2onbiSI7g79mOH_AFCdIk/
Consider supporting SciFuture by:
a) Subscribing to the SciFuture YouTube channel: http://youtube.com/subscription_center?add_user=TheRationalFuture
b) Donating:
- Bitcoin: 1BxusYmpynJsH4i8681aBuw9ZTxbKoUi22
- Ethereum: 0xd46a6e88c4fe179d04464caf42626d0c9cab1c6b
- Patreon: https://www.patreon.com/scifuture
c) Sharing the media SciFuture creates.

Kind regards.

AI robot painter holds an exhibition and her art is really cool

The way she uses dots and strokes looks a bit like ones and zeroes.


Artificial intelligence is playing a huge role in the development of all kinds of technologies. It can be combined with deep learning techniques to do amazing things that have the potential to improve all our lives, like learning how to safely control nuclear fusion or making delicious pizzas.

One of the many questions surrounding AI is its use in art. There’s no denying AI can have some amazing abilities when it comes to producing images. Nvidia’s GauGAN2, which can turn words into photorealistic pictures, is one example. Another is Ubisoft’s ZooBuilder AI, a prototype for animating animals.

AI’s Threats to Jobs and Human Happiness Are Very Real

There’s a movement afoot to counter the dystopian and apocalyptic narratives of artificial intelligence. Some people in the field are concerned that the frequent talk of AI as an existential risk to humanity is poisoning the public against the technology and are deliberately setting out more hopeful narratives. One such effort is a book that came out last fall called AI 2041: Ten Visions for Our Future.

The book is cowritten by Kai-Fu Lee, an AI expert who leads the venture capital firm Sinovation Ventures, and Chen Qiufan, a science fiction author known for his novel Waste Tide. It has an interesting format. Each chapter starts with a science fiction story depicting some aspect of AI in society in the year 2041 (such as deepfakes, self-driving cars, and AI-enhanced education), which is followed by an analysis section by Lee that talks about the technology in question and the trends today that may lead to that envisioned future. It’s not a utopian vision, but the stories generally show humanity grappling productively with the issues raised by ever-advancing AI.

IEEE Spectrum spoke to Lee about the book, focusing on the last few chapters, which take on the big issues of job displacement, the need for new economic models, and the search for meaning and happiness in an age of abundance. Lee argues that technologists need to give serious thought to such societal impacts, instead of thinking only about the technology.
