
From Human to Artificial General Intelligence

Humans have an almost unbounded set of skills and knowledge, and quickly learn new information without needing to be re-engineered to do so. It is conceivable that an AGI can be built using an approach that is fundamentally different from human intelligence. However, as three longtime researchers in AI and cognitive science, our approach is to draw inspiration and insights from the structure of the human mind. We are working toward AGI by trying to better understand the human mind, and better understand the human mind by working toward AGI.

From research in neuroscience, cognitive science, and psychology, we know that the human brain is neither a huge homogeneous set of neurons nor a massive set of task-specific programs that each solves a single problem. Instead, it is a set of regions with different properties that support the basic cognitive capabilities that together form the human mind.

Artificial Intelligence has ushered in advances across several disciplines over the years. But could it ever discover a new form of physics?

A group of roboticists from Columbia University wanted to exploit the vast potential of AI and find out whether it could ever discover an “alternative physics.”

Hence, they created an AI tool that can recognize physical phenomena and identify the relevant variables, the essential building blocks of any physics theory.

Human-like temporal variability in movements is a powerful hint that humans use to ascribe humanness to robots.


A team of researchers at the University of Geneva has found that ketamine is unlikely to be addictive to people who use it for extended periods of time. In their paper published in the journal Nature, the group describes their study of the impact of the synthetic compound on the brains of mice and what they learned about its effects on different brain regions. Rianne Campbell and Mary Kay Lobo, of the University of Maryland School of Medicine, have published a News and Views piece in the same journal issue outlining the work done by the team in Switzerland.

In recent years, more vehicles have included partially autonomous driving features, such as blind-spot detectors, automatic braking, and lane sensing, which are said to increase safety. However, a recent study by researchers from The University of Texas at Austin finds that some of that safety benefit may be offset by people driving more, thereby clogging up roads and exposing themselves to more potential crashes.

The study, published recently in Transportation Research Part A—Policy and Practice, found that drivers with one or more of these autonomous features reported higher miles traveled than those of similar profiles who didn’t have them. This is important, because miles traveled is one of the most significant predictors of crash risk, if not the most significant. The more you drive, the more likely you are to crash.

“What we showed, without any ambiguity in our results, is that after embracing autonomous features, people tend to drive more,” said Chandra Bhat, one of the authors on the project and professor in the Cockrell School of Engineering’s Department of Civil, Architectural and Environmental Engineering. “There are certainly engineering benefits to these features, but they are offset to a good extent because people are driving more and exposed more.”

MIT researchers created protonic programmable resistors — building blocks of analog deep learning systems — that can process data 1 million times faster than synapses in the human brain. These ultrafast, low-energy resistors could enable analog deep learning systems that can train new and more powerful neural networks rapidly, which could be used for areas like self-driving cars, fraud detection, and health care.

SYNOPSIS: Will a computer ever be more creative than a human? In this compelling program, artists, musicians, neuroscientists, and computer scientists explore the future of artistry and imagination in the age of artificial intelligence.

PARTICIPANTS: Sougwen Chung, Jesse Engel, Peter Ulric Tse, Lav Varshney.
MODERATOR: John Schaefer.
Original program date: MAY 31, 2017

WATCH THE TRAILER: https://youtu.be/O6t7I_iVim8
WATCH THE LIVE Q&A W/JESSE ENGEL: https://youtu.be/UXyMiSURQ7Y

FULL DESCRIPTION: Today, there are robots that make art, move like dancers, tell stories, and even help human chefs devise unique recipes. But is there ingenuity in silico? Can computers be creative? A rare treat for the senses, this thought-provoking event brings together artists and computer scientists who are creating original works with the help of artificially intelligent machines. Joined by leading experts in psychology and neuroscience, they’ll explore the roots of creativity in humans and computers, what artificial creativity reveals about human imagination, and the future of hybrid systems that build on the capabilities of both.

MORE INFO ABOUT THE PROGRAM AND PARTICIPANTS: https://www.worldsciencefestival.com/programs/computational-creativity/

This program is part of the Big Ideas Series, made possible with support from the John Templeton Foundation.

Patreon: https://www.patreon.com/mlst.
Discord: https://discord.gg/ESrGqhf5CB

The field of Artificial Intelligence was founded in the mid-1950s with the aim of constructing “thinking machines” — that is to say, computer systems with human-like general intelligence. Think of humanoid robots that not only look but act and think with intelligence equal to, and ultimately greater than, that of human beings. But in the intervening years, the field has drifted far from its ambitious old-fashioned roots.

Dr. Ben Goertzel is an artificial intelligence researcher, CEO and founder of SingularityNET, a project combining artificial intelligence and blockchain to democratize access to artificial intelligence. Ben seeks to fulfil the original ambitions of the field. He graduated with a PhD in Mathematics from Temple University in 1990. Ben’s approach to AGI over many decades has been inspired by many disciplines, but in particular by human cognitive psychology and computer science. To date, his work has been mostly theoretically driven. Ben thinks that most of the deep learning approaches to AGI today try to model the brain: they may have a loose analogy to human neuroscience, but they have not tried to derive the details of an AGI architecture from an overall conception of what a mind is. Ben thinks that what matters for creating human-level (or greater) intelligence is having the right information-processing architecture, not the underlying mechanics via which the architecture is implemented.

Ben thinks that there is a certain set of key cognitive processes and interactions that AGI systems must implement explicitly, such as working and long-term memory, deliberative and reactive processing, and perception. Biological systems tend to be messy, complex and integrative; searching for a single “algorithm of general intelligence” is an inappropriate attempt to project the aesthetics of physics or theoretical computer science onto a qualitatively different domain.

Panel: Dr. Tim Scarfe, Dr. Yannic Kilcher, Dr. Keith Duggar.

Pod version: https://anchor.fm/machinelearningstreettalk/episodes/58-Dr–…e-e15p20i.

Artificial Intelligence is pretty much THE HOLY GRAIL of future technologies.
There is hardly a big company or university that is not working on the development of Artificial Intelligence.
The superior performance of the biological brain often serves as the role model, but replicating it is a lot of work.
So a development team in Australia wants to save tedious development time and insert living brain cells into computers!
You may think that sounds crazy?
But their first prototype is already learning faster than traditional computer-based Artificial Intelligences.

How did they even do that? This is exactly what we will talk about in this video.