
Dr. Joscha Bach (MIT Media Lab and the Harvard Program for Evolutionary Dynamics) is an AI researcher who works on and writes about cognitive architectures, mental representation, emotion, social modeling, and multi-agent systems.

He is the founder of the MicroPsi project, in which virtual agents are constructed and used in a computer model to discover and describe the interactions of emotion, motivation, and cognition in situated agents.

Bach’s mission to build a model of the mind is bedrock research for the creation of Strong AI, i.e., cognition on par with that of a human being. He is especially interested in the philosophy of AI and in the augmentation of the human mind.

July 25th, 2017

Eleven-year-old Simons took only a year to complete his bachelor’s degree, a program that usually takes at least three years.

In a conversation with the Dutch daily De Telegraaf, Simons said, “I don’t really care if I’m the youngest. It’s all about getting knowledge for me.”

“This is the first puzzle piece in my goal of replacing body parts with mechanical parts,” Simons said.

How do you find novel materials with very specific properties, for example the special electronic properties needed for quantum computers? This is usually a very complicated task: various compounds are created in which potentially promising atoms are arranged in particular crystal structures, and the resulting material is then examined, for example in the low-temperature laboratory of TU Wien.

Now, a collaboration between Rice University (Texas), TU Wien, and other international research institutions has succeeded in tracking down suitable materials on the computer. New theoretical methods are used to identify particularly promising candidates among the vast number of possible materials. Measurements at TU Wien have confirmed that the materials do indeed have the required properties and that the method works. This is an important step forward for research on quantum materials. The results have now been published in the journal Nature Physics.
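The article doesn’t spell out the screening procedure, but the general idea of computational pre-selection can be sketched: predict properties for many candidate compounds and keep only those whose values fall inside target windows before any lab work is done. Everything below, the property fields, thresholds, and formulas, is illustrative and not taken from the study.

```python
# Illustrative sketch of computational materials screening, NOT the
# actual method from the Nature Physics study: predict properties for
# many candidate compounds, then shortlist those whose values fall
# inside target windows for laboratory follow-up.

from dataclasses import dataclass

@dataclass
class Candidate:
    formula: str
    band_gap_ev: float     # hypothetical predicted band gap (eV)
    spin_orbit_mev: float  # hypothetical spin-orbit coupling (meV)

# Invented target windows for a "promising" quantum material.
MAX_BAND_GAP = 0.1      # want a (nearly) gapless electronic structure
MIN_SPIN_ORBIT = 50.0   # want strong spin-orbit coupling

def is_promising(c: Candidate) -> bool:
    return c.band_gap_ev <= MAX_BAND_GAP and c.spin_orbit_mev >= MIN_SPIN_ORBIT

candidates = [
    Candidate("X3Y4Z3", 0.02, 120.0),  # made-up compounds and values
    Candidate("XY2", 1.40, 10.0),
]

shortlist = [c for c in candidates if is_promising(c)]
print([c.formula for c in shortlist])  # -> ['X3Y4Z3']
```

In practice the candidate properties would come from electronic-structure calculations over thousands of compounds; the point here is only the filter-then-measure workflow the article describes.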

UNIVERSITY PARK, Pa.— A person’s distrust in humans predicts they will have more trust in artificial intelligence’s ability to moderate content online, according to a recently published study. The findings, the researchers say, have practical implications for both designers and users of AI tools in social media.

“We found a systematic pattern of individuals who have less trust in other humans showing greater trust in AI’s classification,” said S. Shyam Sundar, the James P. Jimirro Professor of Media Effects at Penn State. “Based on our analysis, this seems to be due to the users invoking the idea that machines are accurate, objective and free from ideological bias.”

The study, published in the journal New Media & Society, also found that “power users,” experienced users of information technology, had the opposite tendency: they trusted the AI moderators less because they believed that machines lack the ability to detect the nuances of human language.

Watch NVIDIA CEO Jensen Huang unveil the new Ada Lovelace GPU architecture, new advances to its computing platforms, and new cloud services to further the era of AI and the metaverse, and transform every industry.

Dive into the announcements and discover more content at https://www.nvidia.com/gtc.

00:00 GeForce Beyond: A Special Broadcast at GTC
19:34 NVIDIA Omniverse
37:07 NVIDIA Robotics Platforms: Isaac, DRIVE, Clara Holoscan, Metropolis
57:22 NVIDIA AI
1:08:48 Large Language Models
1:15:39 Hopper and Grace Hopper
1:21:47 AI & Omniverse Services to Enterprise
1:30:25 Closing Summary


Inflated expectations around the capabilities of AI technologies may lead people to believe that computers can’t be wrong. The truth is that AI failures are not a matter of if but when. AI is a human endeavor that combines information about people and the physical world into mathematical constructs. Such technologies typically rely on statistical methods, with the possibility of errors throughout an AI system’s lifespan. As AI systems become more widely used across domains, especially in high-stakes scenarios where people’s safety and wellbeing can be affected, a critical question must be addressed: how trustworthy are AI systems, and how much and when should people trust AI?

For the memory prosthetic, the team focused on two specific regions: CA1 and CA3, which form a highly interconnected neural circuit. Decades of work in rodents, primates, and humans have pointed to this neural highway as the crux for encoding memories.

The team members, led by Drs. Dong Song from the University of Southern California and Robert Hampson at Wake Forest School of Medicine, are no strangers to memory prosthetics. With “memory bioengineer” Dr. Theodore Berger—who’s worked on hijacking the CA3-CA1 circuit for memory improvement for over three decades—the dream team had their first success in humans in 2015.

The central idea is simple: replicate the hippocampus’s signals with a digital replacement. It’s no easy task. Unlike computer circuits, neural circuits are non-linear: signals are often extremely noisy and overlap in time, which can either bolster or inhibit a neuron’s output. As Berger said at the time: “It’s a chaotic black box.”
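The team’s published prosthetic work describes multi-input multi-output (MIMO) nonlinear models fitted to recorded spike trains. The snippet below is only a toy illustration of that general flavor, mapping a window of noisy input spikes through a non-linear function to an output spike probability; the kernel, window size, and spike statistics are all invented.

```python
# Toy sketch of a non-linear input-output model of a neural circuit,
# loosely in the spirit of the MIMO models used in hippocampal
# prosthetics but greatly simplified; every number here is invented.

import numpy as np

rng = np.random.default_rng(0)

def predict_output_spike_prob(input_history: np.ndarray,
                              kernel: np.ndarray) -> float:
    """Map a window of recent input spikes (0/1) to an output spike
    probability: a weighted sum passed through a sigmoid, a minimal
    stand-in for the non-linear CA3 -> CA1 transformation."""
    drive = float(kernel @ input_history)  # linear stage
    return 1.0 / (1.0 + np.exp(-drive))    # non-linear stage

window = 10                                 # time bins of spike history
kernel = rng.normal(0.0, 1.0, size=window)  # stand-in for a fitted kernel

# Simulated noisy CA3 spike train: stochastic, overlapping inputs.
ca3_spikes = (rng.random(window) < 0.3).astype(float)
print(f"predicted CA1 spike probability: "
      f"{predict_output_spike_prob(ca3_spikes, kernel):.2f}")
```

A real prosthetic must fit such a mapping from simultaneously recorded CA3 and CA1 activity and then drive stimulation from its predictions; this sketch shows only why a non-linear stage is needed at all.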