


Inspired by the cognitive science theory, we explicitly model an agent with both semantic and episodic memory systems, and show that it is better than having just one of the two memory systems. In order to show this, we have designed and released our own challenging environment, “the Room”, compatible with OpenAI Gym, where an agent has to properly learn how to encode, store, and retrieve memories to maximize its rewards. The Room environment allows for a hybrid intelligence setup where machines and humans can collaborate. We show that two agents collaborating with each other results in better performance.




The battle between artificial intelligence and human intelligence has been going on for a while now, and AI is clearly coming very close to beating humans in many areas. This is partly due to improvements in neural network hardware and partly due to improvements in machine learning algorithms. This video goes over whether and how humans could soon be surpassed by artificial general intelligence.

TIMESTAMPS:
00:00 Is AGI actually possible?
01:11 What is Artificial General Intelligence?
03:34 What are the problems with AGI?
05:43 The Ethics behind Artificial Intelligence.
08:03 Last Words.

#ai #agi #robots

In recent years, large neural networks trained for language understanding and generation have achieved impressive results across a wide range of tasks. GPT-3 first showed that large language models (LLMs) can be used for few-shot learning and can achieve impressive results without large-scale task-specific data collection or model parameter updating. More recent LLMs, such as GLaM, LaMDA, Gopher, and Megatron-Turing NLG, achieved state-of-the-art few-shot results on many tasks by scaling model size, using sparsely activated modules, and training on larger datasets from more diverse sources. Yet much work remains in understanding the capabilities that emerge with few-shot learning as we push the limits of model scale.

Last year Google Research announced our vision for Pathways, a single model that could generalize across domains and tasks while being highly efficient. An important milestone toward realizing this vision was to develop the new Pathways system to orchestrate distributed computation for accelerators. In “PaLM: Scaling Language Modeling with Pathways”, we introduce the Pathways Language Model (PaLM), a 540-billion parameter, dense decoder-only Transformer model trained with the Pathways system, which enabled us to efficiently train a single model across multiple TPU v4 Pods. We evaluated PaLM on hundreds of language understanding and generation tasks, and found that it achieves state-of-the-art few-shot performance across most tasks, by significant margins in many cases.

The pretraining of BERT-type large language models — which can scale up to billions of parameters — is crucial for obtaining state-of-the-art performance on many natural language processing (NLP) tasks. This pretraining process, however, is expensive, and has become a bottleneck hindering the industrial application of such large language models.

In the new paper Token Dropping for Efficient BERT Pretraining, a research team from Google, New York University, and the University of Maryland proposes a simple but effective “token dropping” technique that significantly reduces the pretraining cost of transformer models such as BERT, without degrading performance on downstream fine-tuning tasks.

The team summarizes their main contributions as:

To effectively interact with humans in crowded social settings, such as malls, hospitals, and other public spaces, robots should be able to actively participate in both group and one-to-one interactions. Most existing robots, however, have been found to perform much better when communicating with individual users than with groups of conversing humans.

Hooman Hedayati and Daniel Szafir, two researchers at the University of North Carolina at Chapel Hill, have recently developed a new data-driven technique that could improve how robots communicate with groups of humans. This method, presented in a paper at the 2022 ACM/IEEE International Conference on Human-Robot Interaction (HRI ‘22), allows robots to predict the positions of humans in conversational groups, so that they do not mistakenly ignore a person when their sensors are fully or partly obstructed.

“Being in a conversational group is easy for humans but challenging for robots,” Hooman Hedayati, one of the researchers who carried out the study, told TechXplore. “Imagine that you are talking with a group of friends, and whenever one of your friends blinks, she stops talking and asks if you are still there. This potentially annoying scenario is roughly what can happen when a robot is in conversational groups.”

Scientists at The University of Texas at Austin have redesigned a key component of a widely used CRISPR-based gene-editing tool, called Cas9, to be thousands of times less likely to target the wrong stretch of DNA while remaining just as efficient as the original version, making it potentially much safer.

Other labs have redesigned Cas9 to reduce off-target interactions, but so far, all these versions improve accuracy by sacrificing speed. SuperFi-Cas9, as this new version has been dubbed, is 4,000 times less likely to cut off-target sites but just as fast as naturally occurring Cas9. Bravo says you can think of the different lab-generated versions of Cas9 as different models of self-driving cars. Most models are really safe, but they have a top speed of 10 miles per hour.

“They’re safer than the naturally occurring Cas9, but it comes at a big cost: They’re going extremely slowly,” said Bravo. “SuperFi-Cas9 is like a self-driving car that has been engineered to be extremely safe, but it can still go at full speed.”

In this episode of the Physics World Stories podcast, Andrew Glester catches up with two engineers from the UK Atomic Energy Authority to learn more about this latest development. Leah Morgan, a physicist-turned-engineer, explains why JET’s recent success is great news for the ITER project – a larger experimental fusion reactor currently under construction in Cadarache, France. Later in the episode, mechanical design engineer Helena Livesey talks about the important role of robotics in accessing equipment within the extreme conditions inside a tokamak device.