A bio-inspired mechano-photonic artificial synapse

Multifunctional and diverse artificial neural systems can incorporate multimodal plasticity, memory, and supervised learning functions to assist neuromorphic computation. In a new report, Jinran Yu and a research team in nanoenergy, nanoscience, and materials science in China and the U.S. presented a bio-inspired mechano-photonic artificial synapse with synergistic mechanical and optical plasticity. The team composed the artificial synapse from an optoelectronic transistor made of a graphene/molybdenum disulfide (MoS2) heterostructure and an integrated triboelectric nanogenerator. By controlling the charge transfer and exchange in the heterostructure with triboelectric potential, they could readily modulate the optoelectronic synapse behaviors, including postsynaptic photocurrents, photosensitivity, and photoconductivity. The mechano-photonic artificial synapse is a promising route to mimicking the complex biological nervous system and advancing interactive artificial intelligence. The work is published in Science Advances.

Brain-inspired neural networks

The human brain integrates cognition, learning, and memory tasks via auditory, visual, olfactory, and somatosensory interactions. This process is difficult to mimic with conventional von Neumann architectures, which would require additional sophisticated functions. Brain-inspired neural networks are instead built from synaptic devices that transmit and process information using synaptic weights. Emerging photonic synapses combine optical and electric neuromorphic modulation and computation, offering high bandwidth, fast speed, and low cross-talk while significantly reducing power consumption. Biomechanical motions, including touch, eye blinking, and arm waving, are other ubiquitous triggers or interactive signals for operating electronics with synaptic plasticity. In this work, Yu et al. presented a mechano-photonic artificial synapse with synergistic mechanical and optical plasticity.
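The idea of a synaptic weight modulated by two stimuli at once can be illustrated with a toy model: light pulses potentiate the weight, a mechanical (triboelectric) bias scales how strongly they do so, and the weight slowly decays toward rest. This is only an illustrative sketch under assumed dynamics, not the device physics or model from the paper; the function name and parameters are hypothetical.

```python
# Toy sketch (illustrative only): a synaptic weight driven by an optical
# pulse whose effect is scaled by a mechanical (triboelectric) bias.

def update_weight(weight, light_pulse, mech_bias, lr=0.1, decay=0.02):
    """Potentiate on a light pulse, scale plasticity by the mechanical
    bias, and let the weight decay slowly toward its resting value."""
    potentiation = lr * light_pulse * (1.0 + mech_bias)
    return (1.0 - decay) * weight + potentiation

w = 0.0
for _ in range(10):
    # identical light pulses, with a constant mechanical bias applied
    w = update_weight(w, light_pulse=1.0, mech_bias=0.5)
```

With the bias present, the same train of light pulses drives the weight higher than it would reach with light alone, which is the "synergistic plasticity" notion in miniature.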

Synthetic synapses get more like a real brain

The human brain, fed on just the calorie input of a modest diet, easily outperforms state-of-the-art supercomputers that draw power-station-scale energy inputs. The difference stems from the multiple states of brain processes versus the two binary states of digital processors, as well as the brain's ability to store information without power consumption, i.e., non-volatile memory. These inefficiencies in today's conventional computers have prompted great interest in developing synthetic synapses for use in computers that can mimic the way the brain works. Now, researchers at King's College London, UK, report in the ACS journal Nano Letters an array of nanorod devices that mimic the brain more closely than ever before. The devices may find applications in artificial neural networks.

Efforts to emulate biological synapses have revolved around types of memristors with different resistance states that act like memory. However, unlike the brain, the devices reported so far have all needed a reverse polarity to reset them to the initial state. "In the brain, a change in the environment changes the output," explains Anatoly Zayats, a professor at King's College London who led the team behind the recent results. The King's College London researchers have now been able to demonstrate this brain-like behavior in their synthetic synapses as well.

Zayats and team built an array of gold nanorods, each topped with a polymer junction (poly-L-histidine, PLH) connecting to a metal contact. Either light or an electrical voltage can excite plasmons, collective oscillations of electrons, in the nanorods. The plasmons release hot electrons into the PLH, gradually changing the chemistry of the polymer and hence giving it different levels of conductivity or light emissivity. How the polymer changes depends on whether oxygen or hydrogen surrounds it, while a chemically inert nitrogen environment preserves the state without any energy input, so the device acts as non-volatile memory.
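The behavior described above can be caricatured in a few lines: the same excitation pulse moves the conductance up or down depending on the surrounding gas, and an inert nitrogen environment freezes whatever state was reached, so no reverse-polarity reset is needed. This is a hedged toy model with made-up numbers and names, not the King's College device model.

```python
# Illustrative sketch (not the reported device physics): a unit whose
# conductance shifts in a direction set by its chemical environment
# and freezes under inert nitrogen, mimicking non-volatile memory.

def step_conductance(g, environment, pulse=0.05):
    """Apply one excitation pulse. Hydrogen raises conductance, oxygen
    lowers it, and nitrogen leaves the stored state untouched."""
    direction = {"hydrogen": +1, "oxygen": -1, "nitrogen": 0}[environment]
    return min(1.0, max(0.0, g + direction * pulse))

g = 0.5
for _ in range(4):
    g = step_conductance(g, "hydrogen")   # potentiate under hydrogen
stored = step_conductance(g, "nitrogen")  # state preserved, no reset pulse
```

The key point the sketch captures is that switching direction comes from the environment, not from reversing the drive polarity, which is what distinguishes this device from earlier memristor-style synapses.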

Beating Moore’s Law: This photonic computer is 10X faster than NVIDIA GPUs using 90% less energy

Moore’s Law is dead, right? Not if we can get working photonic computers.

Lightmatter is building a photonic computer for the biggest growth area in computing right now, and according to CEO Nick Harris, it can be ordered now and will ship at the end of this year. It's already much faster than traditional electronic computers at neural nets, machine learning for language processing, and AI for self-driving cars.

It's the world's first general-purpose photonic AI accelerator, and with light multiplexing — using up to 64 different colors of light simultaneously — there's a long path of speed improvements ahead.

Links:
TechFirst transcripts: https://johnkoetsier.com/category/tech-first/
Forbes columns: https://www.forbes.com/sites/johnkoetsier/

Keep in touch: https://twitter.com/johnkoetsier

Photonic Neuromorphic Computing: The Future of AI?

Photonic computing processes information using light, whilst neuromorphic computing attempts to emulate the human brain. Bring the two together, and we may have the perfect platform for next generation AI, as this video explores.

If you like this video, you may also enjoy my previous episodes on:

Organic Computing:

Brain-Computer Interfaces:
https://www.youtube.com/watch?v=xMxJYhUg0pc

More videos on computing and related topics can be found at:
https://www.youtube.com/explainingcomputers

You may also like my ExplainingTheFuture channel at: https://www.youtube.com/explainingthefuture

Deep Learning Is Hitting a Wall

Brain scans of (1) rat and (2) crow (both completed by end of 2022), (3) pig and (4) chimp (both completed by end of 2023), ending with (5) human (completed by end of 2025). Meanwhile, we create an AI feedback loop, using the best AI to build better AIs, all at the same time. Aiming for AGI in 2025–2029.


What would it take for artificial intelligence to make real progress?

Retina-inspired sensors for more adaptive visual perception

To monitor and navigate real-world environments, machines and robots should be able to gather images and measurements under different background lighting conditions. In recent years, engineers worldwide have thus been trying to develop increasingly advanced sensors, which could be integrated within robots, surveillance systems, or other technologies that can benefit from sensing their surroundings.

Researchers at Hong Kong Polytechnic University, Peking University, Yonsei University and Fudan University have recently created a new sensor that can collect data in various illumination conditions, employing a mechanism that artificially replicates the functioning of the retina in the human eye. This bio-inspired sensor, presented in a paper published in Nature Electronics, was fabricated using phototransistors made of molybdenum disulfide.

"Our research team started this research five years ago," Yang Chai, one of the researchers who developed the sensor, told TechXplore. "This emerging device can output light-dependent and history-dependent signals, which enables image integration, weak-signal accumulation, spectrum analysis, and other complicated image-processing functions, integrating sensing, data storage, and data processing in a single device."
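The "history-dependent" output and "weak-signal accumulation" mentioned in the quote can be sketched with a toy model: each exposure adds to a persistent internal state that only partially decays between flashes, so many individually weak flashes build up into a detectable signal. This is purely illustrative with assumed retention dynamics, not the MoS2 phototransistor physics reported in Nature Electronics.

```python
# Hedged toy model of a history-dependent photoresponse: the internal
# state partially persists between exposures, so repeated weak flashes
# accumulate (weak-signal accumulation). Parameters are made up.

def expose(state, intensity, retention=0.95):
    """Accumulate charge from one light exposure while a fraction of
    the previous state persists (history dependence)."""
    return retention * state + intensity

signal = 0.0
single_flash = 0.1                 # one flash alone is weak
for _ in range(20):
    signal = expose(signal, single_flash)
# after 20 flashes the accumulated signal far exceeds a single one
```

A memoryless sensor (retention of zero) would read the same weak value after every flash; the persistent state is what lets the device integrate images and pull weak signals out of the noise.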