
In machine learning, understanding why a model makes certain decisions is often just as important as whether those decisions are correct. For instance, a machine-learning model might correctly predict that a skin lesion is cancerous, but it could have done so using an unrelated blip on a clinical photo.

While tools exist to help experts make sense of a model’s reasoning, often these methods only provide insights on one decision at a time, and each must be manually evaluated. Models are commonly trained using millions of data inputs, making it almost impossible for a human to evaluate enough decisions to identify patterns.

Now, researchers at MIT and IBM Research have created a method that enables a user to aggregate, sort, and rank these individual explanations to rapidly analyze a machine-learning model’s behavior. Their technique, called Shared Interest, incorporates quantifiable metrics that compare how well a model’s reasoning matches that of a human.
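As a rough, hypothetical sketch (not the paper’s exact metrics), one way to quantify agreement between a model’s saliency region and a human-annotated region is intersection-over-union, computed per example and then ranked so a reviewer can triage many decisions at once:

```python
# Illustrative only: score each example by how much the model's salient
# pixels overlap the human-annotated region, then rank examples so a
# reviewer can inspect model behavior in bulk. Plain IoU is used here;
# the actual Shared Interest metrics are defined differently.

def iou(model_pixels, human_pixels):
    # Both arguments are sets of (row, col) pixel coordinates.
    inter = len(model_pixels & human_pixels)
    union = len(model_pixels | human_pixels)
    return inter / union if union else 0.0

def rank_by_agreement(examples):
    # examples: list of (example_id, model_pixels, human_pixels).
    scored = [(iou(m, h), eid) for eid, m, h in examples]
    return sorted(scored, reverse=True)  # highest agreement first

examples = [
    ("lesion_photo", {(0, 0), (0, 1)}, {(0, 1), (0, 2)}),  # partial overlap
    ("sticker_blip", {(5, 5)}, {(0, 0), (0, 1)}),          # no overlap
]
ranking = rank_by_agreement(examples)
```

Sorting by the score surfaces the low-agreement cases, such as a model fixating on an unrelated blip, without a human having to inspect every decision one at a time.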

The expansion of synthetic biology into materials science is creating an evolving class of functional, engineered living materials that can grow, sense, and adapt much as biological organisms do.

Nature has long served as inspiration for the design of materials with improved properties and advanced functionalities. Nonetheless, thus far, no synthetic material has been able to fully recapitulate the complexity of living materials. Living organisms are unique due to their multifunctionality and ability to grow, self-repair, sense and adapt to the environment in an autonomous and sustainable manner. The field of engineered living materials capitalizes on these features to create biological materials with programmable functionalities using engineering tools borrowed from synthetic biology. In this focus issue we feature a Perspective and an Article to highlight how synergies between synthetic biology and biomaterial sciences are providing next-generation engineered living materials with tailored functionalities.

Whether a computer could ever pass for a living thing is one of the key challenges for researchers in the field of artificial intelligence. There have been vast advancements in AI since Alan Turing first proposed what is now called the Turing Test: whether a machine can exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human. However, machines still struggle with a fundamental skill that is second nature for humans and other life forms: lifelong learning. That is, learning and adapting while performing a task without forgetting previous tasks, and intuitively transferring knowledge gleaned from one task to a different area.
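The forgetting problem can be shown with a minimal, purely illustrative sketch: a one-parameter model trained by gradient descent on one task and then another, with no mechanism for retaining the first. Performance on the first task collapses, the failure mode lifelong-learning research aims to avoid.

```python
# Minimal illustration of catastrophic forgetting: a one-parameter
# linear model y = w * x trained by gradient descent, first on task A
# (target slope 2.0), then on task B (target slope -1.0).

def loss(w, slope):
    # Mean squared error of y = w*x against y = slope*x on x in {1, 2, 3}.
    return sum((w * x - slope * x) ** 2 for x in (1, 2, 3)) / 3

def train(w, slope, steps=200, lr=0.01):
    # Plain gradient descent on the squared error.
    for _ in range(steps):
        grad = sum(2 * (w * x - slope * x) * x for x in (1, 2, 3)) / 3
        w -= lr * grad
    return w

w = 0.0
w = train(w, slope=2.0)        # learn task A
loss_a_before = loss(w, 2.0)   # near zero: task A is learned
w = train(w, slope=-1.0)       # learn task B, no memory of A
loss_a_after = loss(w, 2.0)    # large again: task A was "forgotten"
```

Lifelong-learning methods add some mechanism, such as replaying old data, protecting important parameters, or allocating separate capacity per task, so that learning task B does not erase task A.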

Now, with the support of the DARPA Lifelong Learning Machines (L2M) program, USC Viterbi researchers have collaborated with colleagues at institutions from around the U.S. and the world on a new resource for the future of AI learning, defining how artificial systems can successfully think, act and adapt in the real world, in the same way that living creatures do.

The paper, co-authored by Dean’s Professor of Electrical and Computer Engineering Alice Parker and Professor of Biomedical Engineering, and of Biokinesiology and Physical Therapy, Francisco Valero-Cuevas, together with their research teams, was published in Nature Machine Intelligence in collaboration with Professor Dhireesha Kudithipudi at the University of Texas at San Antonio and colleagues at 22 other universities.

Inspired by the cognitive science theory, we explicitly model an agent with both semantic and episodic memory systems, and show that it is better than having just one of the two memory systems. In order to show this, we have designed and released our own challenging environment, “the Room”, compatible with OpenAI Gym, where an agent has to properly learn how to encode, store, and retrieve memories to maximize its rewards. The Room environment allows for a hybrid intelligence setup where machines and humans can collaborate. We show that two agents collaborating with each other results in better performance.
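A minimal sketch of the two-memory idea follows; the names and mechanics here are illustrative, not the released Room environment. Episodic memory holds exact recent observations with limited capacity, while semantic memory accumulates generalized knowledge that survives after specific episodes are evicted:

```python
from collections import Counter, deque

# Hedged sketch of an agent with two memory systems, loosely in the
# spirit of the abstract above. "TwoMemoryAgent" and its methods are
# hypothetical names, not the authors' released code.

class TwoMemoryAgent:
    def __init__(self, episodic_capacity=4):
        # Episodic: exact recent observations, limited capacity.
        self.episodic = deque(maxlen=episodic_capacity)
        # Semantic: generalized knowledge, e.g. where objects usually are.
        self.semantic = Counter()

    def observe(self, obj, location):
        self.episodic.append((obj, location))
        self.semantic[(obj, location)] += 1

    def answer(self, obj):
        # Prefer an exact, recent episodic memory; fall back to the
        # most frequently seen location in semantic memory.
        for seen_obj, loc in reversed(self.episodic):
            if seen_obj == obj:
                return loc
        candidates = {loc: n for (o, loc), n in self.semantic.items() if o == obj}
        return max(candidates, key=candidates.get) if candidates else None

agent = TwoMemoryAgent(episodic_capacity=2)
for _ in range(3):
    agent.observe("keys", "desk")   # repetition builds semantic knowledge
agent.observe("mug", "sink")
agent.observe("book", "shelf")      # "keys" is now evicted from episodic memory
```

Here the agent answers “where is the book?” from episodic memory, but can still answer “where are the keys?” from semantic memory even after the exact episode has been forgotten, which is the complementarity the abstract argues for.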




The battle between artificial intelligence and human intelligence has been going on for a while now, and AI is clearly coming very close to beating humans in many areas. This is partly due to improvements in neural-network hardware and partly due to improvements in machine-learning algorithms. This video goes over whether and how humans could soon be surpassed by artificial general intelligence.

TIMESTAMPS:
00:00 Is AGI actually possible?
01:11 What is Artificial General Intelligence?
03:34 What are the problems with AGI?
05:43 The Ethics behind Artificial Intelligence.
08:03 Last Words.

#ai #agi #robots

In recent years, large neural networks trained for language understanding and generation have achieved impressive results across a wide range of tasks. GPT-3 first showed that large language models (LLMs) can be used for few-shot learning and can achieve impressive results without large-scale task-specific data collection or model parameter updating. More recent LLMs, such as GLaM, LaMDA, Gopher, and Megatron-Turing NLG, achieved state-of-the-art few-shot results on many tasks by scaling model size, using sparsely activated modules, and training on larger datasets from more diverse sources. Yet much work remains in understanding the capabilities that emerge with few-shot learning as we push the limits of model scale.
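Few-shot learning in this setting means placing a handful of worked examples directly in the prompt rather than collecting task-specific data or updating model parameters. A minimal, model-agnostic sketch of prompt construction (the Q/A format and the translation task are illustrative choices, not a specific model’s required format):

```python
# Hedged sketch of few-shot prompting: a few demonstration pairs are
# concatenated ahead of the query, and the language model is expected
# to continue the pattern for the final, unanswered item.

def build_few_shot_prompt(examples, query):
    # examples: list of (input_text, output_text) demonstration pairs.
    lines = []
    for inp, out in examples:
        lines.append(f"Q: {inp}\nA: {out}")
    lines.append(f"Q: {query}\nA:")  # left open for the model to complete
    return "\n\n".join(lines)

prompt = build_few_shot_prompt(
    [("Translate 'chat' to English.", "cat"),
     ("Translate 'chien' to English.", "dog")],
    "Translate 'oiseau' to English.",
)
```

The same mechanism scales from zero-shot (no examples) to many-shot prompting; the blog’s point is that larger models extract more from the same handful of in-context demonstrations.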

Last year Google Research announced our vision for Pathways, a single model that could generalize across domains and tasks while being highly efficient. An important milestone toward realizing this vision was to develop the new Pathways system to orchestrate distributed computation for accelerators. In “PaLM: Scaling Language Modeling with Pathways”, we introduce the Pathways Language Model (PaLM), a 540-billion parameter, dense decoder-only Transformer model trained with the Pathways system, which enabled us to efficiently train a single model across multiple TPU v4 Pods. We evaluated PaLM on hundreds of language understanding and generation tasks, and found that it achieves state-of-the-art few-shot performance across most tasks, by significant margins in many cases.