
Experiments in rodents have revealed that engrams exist as multiscale networks of neurons. An experience becomes stored as a potentially retrievable memory in the brain when excited neurons in a brain region such as the hippocampus or amygdala become recruited into a local ensemble. These ensembles combine with ensembles in other regions, such as the cortex, into an “engram complex.” Crucial to this process of linking engram cells is the ability of neurons to forge new circuit connections, via processes known as “synaptic plasticity” and “dendritic spine formation.” Importantly, experiments show that the memory initially stored across an engram complex can be retrieved by its reactivation, but may also persist “silently” even when memories cannot be naturally recalled, for instance in mouse models used to study memory disorders such as early-stage Alzheimer’s disease.

“More than 100 years ago Semon put forth a law of engraphy,” wrote Josselyn, Senior Scientist at SickKids, Professor of Psychology and Physiology at the University of Toronto, and Senior Fellow in the Brain, Mind & Consciousness Program at the Canadian Institute for Advanced Research (CIFAR), and Tonegawa, Picower Professor of Biology and Neuroscience at the RIKEN-MIT Laboratory for Neural Circuit Genetics at MIT and Investigator of the Howard Hughes Medical Institute. “Combining these theoretical ideas with the new tools that allow researchers to image and manipulate engrams at the level of cell ensembles facilitated many important insights into memory function.”

“For instance, evidence indicates that both increased intrinsic excitability and synaptic plasticity work hand in hand to form engrams and that these processes may also be important in memory linking, memory retrieval, and memory consolidation.”

For as much as the field has learned, Josselyn and Tonegawa wrote, there are still important unanswered questions and untapped potential applications: How do engrams change over time? How can engrams and memories be studied more directly in humans? And can applying knowledge about biological engrams inspire advances in artificial intelligence, which in turn could feed back new insights into the workings of engrams?


A review in Science traces neuroscientists’ progress in studying the neural substrate for storing memories and raises key future questions for the field.

If you’re interested in mind uploading, I have a book that I highly recommend: Rethinking Consciousness, by Michael S. A. Graziano, a professor of psychology and neuroscience at Princeton University.

Early in his book Graziano writes a short summary:

“This book, however, is written entirely for the general reader. In it, I attempt to spell out, as simply and clearly as possible, a promising scientific theory of consciousness — one that can apply equally to biological brains and artificial machines.”

The theory is Attention Schema Theory.

I found this work compelling because one of the main issues in mind uploading is how to make an inanimate object (like a robot or a computer) conscious. Graziano’s Attention Schema Theory provides a methodology.

After reading the book, be sure to read the Appendix, in which he writes:

“First, it serves as a tutorial on the attention schema theory. The underlying logic of the theory will be described in its simplest form. Second, I hope that the exercise will show engineers a general path forward for artificial consciousness.”

“Brain activity synchronizes with sound waves, even without audible sound, through lip-reading, according to new research published in JNeurosci.”

https://www.eurekalert.org/pub_re…/2020–01/sfn-htl010220.php



Copyright © 2020 by the American Association for the Advancement of Science (AAAS)

“Finally, Artifical [sic] Intelligence that will make you wonder which one of you is real,” reads one of Kapur’s recent tweets, with another urging CES visitors to stop by the NEON corner to learn more about “an Artificial Intelligence being as your best friend.”

Not Bixby

One thing Samsung will say about NEON is that it is not related to the company’s AI-powered digital assistant Bixby.

If you drive along the main northern road through South Australia with a good set of binoculars, you may soon be able to catch a glimpse of a strange, windowless jet, one that is about to embark on its maiden flight. It’s a prototype of the next big thing in aerial combat: a self-piloted warplane designed to work together with human-piloted aircraft.

The Royal Australian Air Force (RAAF) and Boeing Australia are building this fighterlike plane for possible operational use in the mid-2020s. Trials are set to start this year, and although the RAAF won’t confirm the exact location, the quiet electromagnetic environment, size, and remoteness of the Woomera Prohibited Area make it a likely candidate. Named for ancient Aboriginal spear throwers, Woomera spans an area bigger than North Korea, making it the largest weapons-testing range on the planet.

The autonomous plane, formally called the Airpower Teaming System but often known as “Loyal Wingman,” is 11 meters (about 36 feet) long and clean cut, with sharp angles offset by soft curves. The look is quietly aggressive.

Happy New Year! 2019 has seen a number of milestones for Agility, including the final deliveries of Cassie and the launch of Digit. To celebrate, we’ve compiled a supercut of (mostly) never-before-seen testing footage. Here’s hoping 2020 is as robotastic as its predecessor — a big thanks to all of our employees for their hard work.

We often hear this word used in Transhumanist (H+) discussions, but what is meant by it? After all, if H+ is about using sci-tech to enhance human capabilities via internal modifications, what does it mean to go beyond these? In the following I intend to delineate possible stages of enhancement, from what exists today to what could exist as an endpoint of this process in centuries to come.

Although I have tried to put it in what I believe to be a plausible chronological order, a great deal depends on major unknowns, most especially the rapidity with which Artificial Intelligence (AI) develops over the next few decades. Although AI and biotech are at present evolving separately and in parallel, I would expect a massive crossover at some point fairly soon. Exactly how or when that might happen is again an open question. There is also a somewhat artificial distinction between machines and biology, which exists only because our current machines are so primitive. Once we have a fully functioning nanotechnology, just like Nature’s existing nanotech (life), that distinction will disappear completely.

Lidar can be the third eye of your automated car and an essential component for safe driving. That is the word from Bosch, which wants the world to know that two is not ideal company; three is better. Cameras and radar alone don’t cut it.

CES is just around the corner, and Bosch wants to make some noise at the event about its new lidar system, which will make its debut there. The Bosch entry is described as a long-range lidar sensor suitable for automotive use.

The company is posing a question that is difficult to refuse: Do you want safety, or do you want the highest level of safety? Bosch wants you to know two things: the sensor can handle both highway and city driving scenarios (as the company release puts it, the “Bosch sensor will cover both long and close ranges—on highways and in the city”), and it will work in concert with cameras and radar.