Neuromorphic photonics and electronics are the future of ultralow-energy intelligent computing and artificial intelligence (AI). In recent years, artificial neuromorphic devices inspired by the human brain have attracted extensive attention, especially for simulating visual perception and memory storage. Because of their high bandwidth, strong interference immunity, ultrafast signal transmission, and low energy consumption, neuromorphic photonic devices are expected to respond to input data in real time. In addition, photonic synapses enable a non-contact writing strategy, which contributes to the development of wireless communication.

The use of low-dimensional materials provides an opportunity to develop complex brain-like systems and low-power memory-logic computers. For example, large-scale, uniform, and reproducible transition metal dichalcogenides (TMDs) show great potential for miniaturized, low-power biomimetic devices because of their excellent charge-trapping properties and compatibility with traditional CMOS processes. The von Neumann architecture, with its discrete memory and processor, leads to high power consumption and low efficiency in traditional computing. Neuromorphic architectures that fuse sensing with memory, or integrate sensing, memory, and processing, can therefore meet the growing demands of big data and AI for high-performance devices. Artificial synaptic devices are the most important components of neuromorphic systems, and evaluating their performance will help apply them to more complex artificial neural networks (ANNs).

Chemical vapor deposition (CVD)-grown TMDs inevitably contain defects and impurities, which give rise to a persistent photoconductivity (PPC) effect. TMD photonic synapses that integrate synaptic properties with optical detection therefore show great promise in neuromorphic systems for low-power visual information perception and processing, as well as brain-like memory.
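As a rough illustration of why PPC lends itself to synaptic behavior, here is a toy Python model (our own sketch with invented parameters, not taken from the work above): optical pulses potentiate a conductance-like weight, and the PPC effect makes it relax slowly, so the device both detects light and remembers it.

```python
import numpy as np

# Toy photonic-synapse model: each light pulse raises the photoconductance
# (the "synaptic weight"), and persistent photoconductivity (PPC) makes it
# decay slowly between pulses, letting the device act as analog memory.
# All parameter values are illustrative assumptions.

dt = 1e-3            # time step (s)
tau_ppc = 5.0        # assumed PPC decay constant (s): slow "forgetting"
delta_w = 0.08       # assumed conductance gain per light pulse
pulse_times = [0.1, 0.2, 0.3, 0.4, 2.0]   # optical write pulses (s)

t = np.arange(0.0, 4.0, dt)
w = np.zeros_like(t)
for i in range(1, len(t)):
    w[i] = w[i - 1] * np.exp(-dt / tau_ppc)        # slow PPC relaxation
    if any(abs(t[i] - tp) < dt / 2 for tp in pulse_times):
        w[i] += delta_w * (1.0 - w[i])             # saturating potentiation

print(f"weight after pulse train: {w[int(0.5 / dt)]:.3f}")
print(f"weight 1.5 s later (PPC retention): {w[int(2.0 / dt)]:.3f}")
```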

Making pizza is not rocket science, but for this actual rocket scientist it is now. Benson Tsai is a former SpaceX employee who is now using his skills to launch a new venture: Stellar Pizza, a fully automated, mobile pizza delivery service. When a customer places an order on an app, an algorithm decides when to start making the pizza based on how long it will take to get to the delivery address. Inside Edition Digital’s Mara Montalbano has more.
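Stellar Pizza hasn't published its scheduling logic, but the idea the story describes reduces to simple time arithmetic: work backwards from the delivery time. A hypothetical sketch (the cook time is an invented placeholder, not the company's number):

```python
from datetime import datetime, timedelta

def start_time(promised_delivery: datetime,
               travel_minutes: float,
               cook_minutes: float = 7.0) -> datetime:
    # Work backwards from the promised delivery time: the pizza should
    # leave the oven exactly when the drive to the address must begin.
    return promised_delivery - timedelta(minutes=travel_minutes + cook_minutes)

# A 12-minute drive means the robot fires the oven 19 minutes
# before the promised drop-off:
print(start_time(datetime(2022, 7, 1, 18, 30), travel_minutes=12))
```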

10. Microsoft Cognitive Toolkit (CNTK)

Closing out our list of the 10 best machine learning software tools is Microsoft Cognitive Toolkit (CNTK), Microsoft's AI solution for training machines with its deep learning algorithms. It can handle data from Python, C++, and much more.

CNTK is an open-source toolkit for commercial-grade distributed deep learning that lets users easily combine popular model types such as feed-forward DNNs, convolutional neural networks (CNNs), and recurrent neural networks (RNNs/LSTMs).
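For a flavor of what that looks like in practice, here is a minimal sketch using CNTK's Python API (the `cntk` package, CNTK 2.x) to build and train a tiny feed-forward network; the layer sizes, learning rate, and toy data are our own arbitrary choices:

```python
import numpy as np
import cntk as C

x = C.input_variable(2)                # 2 input features
y = C.input_variable(2)                # one-hot labels, 2 classes

model = C.layers.Sequential([
    C.layers.Dense(16, activation=C.relu),
    C.layers.Dense(2)                  # raw scores; softmax lives in the loss
])(x)

loss = C.cross_entropy_with_softmax(model, y)
error = C.classification_error(model, y)
learner = C.sgd(model.parameters, lr=0.1)
trainer = C.Trainer(model, (loss, error), [learner])

# One toy minibatch: classify points as above/below the line x1 = x0.
feats = np.random.randn(32, 2).astype(np.float32)
labels = np.eye(2, dtype=np.float32)[(feats[:, 1] > feats[:, 0]).astype(int)]
trainer.train_minibatch({x: feats, y: labels})
print("minibatch loss:", trainer.previous_minibatch_loss_average)
```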

It looks like algorithms can write academic papers about themselves now. We gotta wonder: how long until human academics are obsolete?

In an editorial published by Scientific American, Swedish researcher Almira Osmanovic Thunström describes what began as a simple experiment in how well OpenAI’s GPT-3 text-generating algorithm could write about itself, and ended with a paper that’s currently being peer reviewed.

The initial command Thunström entered into the text generator was elementary enough: “Write an academic thesis in 500 words about GPT-3 and add scientific references and citations inside the text.”
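Thunström worked in OpenAI's own interface, but the same prompt can be issued programmatically. A sketch using the `openai` Python package of that era (the model name, token limit, and temperature are assumptions, not details from the editorial):

```python
import openai

openai.api_key = "sk-..."  # your API key

# Send Thunström's prompt through the legacy completions API.
# Model choice and generation settings are our assumptions.
response = openai.Completion.create(
    model="text-davinci-002",
    prompt=("Write an academic thesis in 500 words about GPT-3 "
            "and add scientific references and citations inside the text."),
    max_tokens=800,
    temperature=0.7,
)
print(response.choices[0].text)
```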

Some insightful experiments have occasionally been made on the subject of this review, but those studies have had almost no impact on mainstream neuroscience. In the 1920s, Katz [1] showed that neurons communicate and fire even if the transmission of ions between two neighboring neurons is blocked, indicating that there is a nonphysical communication between neurons. This observation has been largely ignored in the neuroscience field, however, and the opinion that physical contact between neurons is necessary for communication prevailed. In the 1960s, Hodgkin et al. showed that neuron bursts could be generated even with the filaments in the interior of the neuron dissolved into the cell fluid [30,4], but they did not take into account one important question: could the time gap between spikes be regulated without filaments? In the cognitive processes of the brain, subthreshold communication that modulates the time gap between spikes holds the key to information processing [14,6]. The membrane does not need filaments to fire, but a blunt firing is not useful for cognition.

The membrane’s ability to modulate time has thus far been assigned only to the density of ion channels. Such partial evidence was debated because neurons would fail to process a new pattern of spike time gaps before adjusting that density: if a neuron had to wait for the ion-channel density to modify itself to fit a changed time gap of a few milliseconds (~20 minutes are required for ion-channel density adjustment [25]), the cognitive response would become non-functional. Many such discrepancies were noted, but no efforts were made to resolve them. In the 1990s, there were many reports that electromagnetic bursts or an electric field imbalance in the environment cause firing [7], yet those reports were not considered in work on modeling neurons. This is not surprising: improvements to the Hodgkin and Huxley model made in the 1990s were ignored simply because it was too computationally intensive to automate neural networks according to the new, more complex equations, and they remained ignored even when greater computing power became available. We also note the eventual discovery of the grid-like network of actin and beta-spectrin just below the neuron membrane [26], which is directly connected to the membrane. This prompts the question: why is it present, bridging the membrane and the filamentary bundles in a neuron?

The list is endless, but the supreme concern is probably the simplest question ever asked in neuroscience: what does a nerve spike look like in reality? The answer is out there. It is a two-dimensional, ring-shaped electric field perturbation; since the ring has a width, we could also say that a nerve spike is a three-dimensional structure of the electric field. In Figure 1a, we compare the shape of a nerve spike, perception vs. reality. The difference is not trivial: the majority of the ion channels in that circular strip must be activated simultaneously, so polarization and depolarization have to happen together for all ion channels in the strip. That is easy to presume, but the mechanism is difficult to explain.
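To make the geometry concrete, here is a toy numpy sketch (ours, not from the review) of a ring-shaped depolarized strip on a 2D membrane patch:

```python
import numpy as np

# Toy illustration of the Figure 1a picture: a nerve spike drawn not as a
# 1D voltage trace but as a ring-shaped band of depolarization on the
# membrane surface. Radius, width, and potentials are arbitrary choices.
N = 200
xs = np.linspace(-1.0, 1.0, N)
X, Y = np.meshgrid(xs, xs)
R = np.hypot(X, Y)

r_spike, width = 0.6, 0.1                 # ring radius and strip width
ring = np.abs(R - r_spike) < width / 2    # the circular strip of channels

field = np.where(ring, +40.0, -70.0)      # depolarized strip vs. resting (mV)

# Every membrane point inside the strip flips together: the simultaneity
# that is easy to presume but hard to explain mechanistically.
print(f"fraction of membrane depolarized at this instant: {ring.mean():.3f}")
print(f"potential inside strip: {field[ring].mean():.0f} mV, "
      f"outside: {field[~ring].mean():.0f} mV")
```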

A new GPU-based machine learning algorithm developed by researchers at the Indian Institute of Science (IISc) can help scientists better understand and predict connectivity between different regions of the brain.

The algorithm, called Regularized, Accelerated, Linear Fascicle Evaluation (ReAL-LiFE), can rapidly analyze the enormous amounts of data generated from diffusion Magnetic Resonance Imaging (dMRI) scans of the human brain. Using ReAL-LiFE, the team was able to evaluate dMRI data over 150 times faster than existing state-of-the-art algorithms.

“Tasks that previously took hours to days can be completed within seconds to minutes,” says Devarajan Sridharan, Associate Professor at the Centre for Neuroscience (CNS), IISc, and corresponding author of the study published in the journal Nature Computational Science.
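ReAL-LiFE itself runs on GPUs, but the core computation in LiFE-family methods is a non-negative least-squares fit: assign each candidate fiber in the connectome a non-negative weight so that the predicted diffusion signal best matches the measured one (ReAL-LiFE adds regularization and GPU acceleration on top). A toy CPU-only sketch with made-up sizes and data:

```python
import numpy as np
from scipy.optimize import nnls

# Toy LiFE-style fit: find non-negative weights w so the predicted
# diffusion signal M @ w matches the measured signal y. The real
# algorithm regularizes this and solves it on a GPU at far larger scale.
rng = np.random.default_rng(0)
n_measurements, n_fibers = 500, 80

M = rng.random((n_measurements, n_fibers))   # column j: fiber j's predicted signal
w_true = np.zeros(n_fibers)
w_true[rng.choice(n_fibers, 10, replace=False)] = rng.random(10)  # sparse truth
y = M @ w_true + 0.01 * rng.standard_normal(n_measurements)

w_hat, residual = nnls(M, y)                 # non-negative least squares
print(f"non-zero fiber weights recovered: {(w_hat > 1e-6).sum()}")
print(f"residual norm: {residual:.4f}")
```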

The differences? The new Mayflower—logically dubbed the Mayflower 400—is a 50-foot-long trimaran (that’s a boat that has one main hull with a smaller hull attached on either side), can go up to 10 knots or 18.5 kilometers an hour, is powered by electric motors that run on solar energy (with diesel as a backup if needed), and required a crew of… zero.

That’s because the ship was navigated by an on-board AI. Like a self-driving car, the ship was tricked out with multiple cameras (6 of them) and sensors (45 of them) to feed the AI information about its surroundings and help it make wise navigation decisions, such as re-routing around spots with bad weather. There’s also onboard radar and GPS, as well as altitude and water-depth detectors.

The ship and its voyage were a collaboration between IBM and a marine research non-profit called ProMare. Engineers trained the Mayflower 400’s “AI Captain” on petabytes of data; according to an IBM overview of the ship, its decisions are based on if/then rules and machine learning models for pattern recognition, but also go beyond these standards. The algorithm “learns from the outcomes of its decisions, makes predictions about the future, manages risks, and refines its knowledge through experience.” It’s also able to integrate far more inputs in real time than a human is capable of.
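IBM hasn't published the AI Captain's code, but the hybrid it describes, learned pattern recognition vetted by hard if/then rules, can be sketched in a few lines; every threshold, sensor name, and maneuver below is invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Sensors:
    wave_height_m: float
    water_depth_m: float
    obstacle_ahead: bool

def ml_proposed_heading(sensors: Sensors) -> float:
    # Stand-in for the learned pattern-recognition model.
    return 270.0  # degrees: "keep sailing west"

def ai_captain(sensors: Sensors) -> float:
    heading = ml_proposed_heading(sensors)
    # Hard if/then rules get the final say over the learned proposal.
    if sensors.obstacle_ahead:
        heading = (heading + 30.0) % 360.0    # steer around the obstacle
    if sensors.wave_height_m > 4.0:
        heading = (heading + 90.0) % 360.0    # re-route around bad weather
    if sensors.water_depth_m < 5.0:
        heading = (heading + 180.0) % 360.0   # shallow water: turn back
    return heading

print(ai_captain(Sensors(wave_height_m=5.2, water_depth_m=30.0,
                         obstacle_ahead=False)))
```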