The trailer featured quotes from famous film critics panning Francis Ford Coppola’s previous films, quotes that appear to have been made up by ChatGPT.
Category: robotics/AI
Organoids at $500/month, lasting up to 100 days, used by select universities:
Designed by FinalSpark, these biocomputers offer a more efficient, lower-energy alternative for training AI models.
Researchers at Argonne have developed an innovative technique that creates “fingerprints” of different materials that can be read and analyzed by a neural network to yield previously inaccessible information — https://bit.ly/3LCklZw.
The goal of the AI is just to treat the scattering patterns as…
Study shows how materials change as they are stressed and relaxed.
Have you ever wondered how SNNs work and how they differ from traditional neural networks? Or how SNNs play an important role in computing beyond Moore’s Law?
What is an SNN?
A spiking neural network (SNN) is a form of neural network with biologically realistic mechanisms, designed to emulate the efficiency and effectiveness of the biological brain.
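To make the spiking mechanism concrete, here is a minimal sketch of a single leaky integrate-and-fire (LIF) neuron, the basic building block of many SNNs. The function name and all parameter values are illustrative assumptions, not taken from any specific work: the membrane potential integrates input current, leaks toward rest, and emits a discrete spike when it crosses a threshold.

```python
# Minimal leaky integrate-and-fire (LIF) neuron sketch.
# All parameter values here are arbitrary illustrations.
def lif_simulate(input_current, tau=20.0, v_thresh=1.0, v_reset=0.0, dt=1.0):
    """Return the time steps at which the neuron spikes."""
    v = 0.0          # membrane potential, starting at rest
    spikes = []
    for t, i_in in enumerate(input_current):
        # Leak toward rest while integrating the input current
        v += dt * (-v + i_in) / tau
        if v >= v_thresh:       # threshold crossing: emit a spike
            spikes.append(t)
            v = v_reset         # reset the membrane potential
    return spikes

# A constant input current produces a regular spike train
spike_times = lif_simulate([1.5] * 100)
print(len(spike_times))
```

Unlike a traditional artificial neuron, which outputs a continuous value every step, this unit communicates only through sparse, discrete spike events, which is where the energy efficiency of SNN hardware comes from.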
Researchers worldwide can now create highly realistic brain cortical organoids — essentially miniature artificial brains with functioning neural networks — thanks to a proprietary protocol released this month by researchers at the University of California San Diego.
The new technique, published in Nature Protocols (“Generation of ‘semi-guided’ cortical organoids with complex neural oscillations”), paves the way for scientists to perform more advanced research regarding autism, schizophrenia and other neurological disorders in which the brain’s structure is usually typical, but electrical activity is altered. That’s according to Alysson Muotri, Ph.D., corresponding author and director of the UC San Diego Sanford Stem Cell Institute (SSCI) Integrated Space Stem Cell Orbital Research Center. The SSCI is directed by Dr. Catriona Jamieson, a leading physician-scientist in cancer stem cell biology whose research explores the fundamental question of how space alters cancer progression.
The newly detailed method allows for the creation of tiny replicas of the human brain so realistic that they rival “the complexity of the fetal brain’s neural network,” according to Muotri, who is also a professor in the UC San Diego School of Medicine’s Departments of Pediatrics and Cellular and Molecular Medicine. His brain replicas have already traveled to the International Space Station (ISS), where their activity was studied under conditions of microgravity.
In a study published in Cell Reports Physical Science (“Electro-Active Polymer Hydrogels Exhibit Emergent Memory When Embodied in a Simulated Game-Environment”), a team led by Dr Yoshikatsu Hayashi demonstrated that a simple hydrogel — a type of soft, flexible material — can learn to play the simple 1970s computer game ‘Pong’. The hydrogel, interfaced with a computer simulation of the classic game via a custom-built multi-electrode array, showed improved performance over time.
Dr Hayashi, a biomedical engineer at the University of Reading’s School of Biological Sciences, said: “Our research shows that even very simple materials can exhibit complex, adaptive behaviours typically associated with living systems or sophisticated AI.
“This opens up exciting possibilities for developing new types of ‘smart’ materials that can learn and adapt to their environment.”
Brain-inspired navigation technologies combine environmental perception, spatial cognition, and target navigation to create a comprehensive navigation research system. Researchers have used various sensors to gather environmental data and enhance environmental perception using multimodal information fusion. In spatial cognition, a neural network model is used to simulate the navigation mechanism of the animal brain and to construct an environmental cognition map. However, existing models face challenges in achieving high navigation success rate and efficiency. In addition, the limited incorporation of navigation mechanisms borrowed from animal brains necessitates further exploration.
Recently, computation in memory has become a hot topic due to the urgent need for high computing efficiency in artificial intelligence applications. In contrast to the von Neumann architecture, compute-in-memory technology avoids data movement between the CPU/GPU and memory, which can greatly reduce power consumption. The memristor is an ideal device that can not only store multi-bit information but also perform computation using Ohm’s law. To make the best use of memristors in neuromorphic systems, a memristor-friendly architecture and software-hardware co-design methods are essential, and the key problem is how to exploit the memristor’s analog behavior. We have designed a generic memristor-crossbar-based architecture for convolutional neural networks and perceptrons that takes full account of the analog characteristics of memristors. Furthermore, we have proposed an online learning algorithm for memristor-based neuromorphic systems that overcomes the variation of memristor cells and endows the system with the ability to perform reinforcement learning based on the memristor’s analog behavior.
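The core crossbar operation described above can be sketched numerically. In a hypothetical crossbar, input voltages drive the rows, each cell multiplies its row voltage by its conductance (Ohm’s law, I = G·V), and Kirchhoff’s current law sums the cell currents along each column, yielding an analog matrix-vector product in a single step. The sizes and values below are illustrative assumptions, not figures from the talk:

```python
import numpy as np

rng = np.random.default_rng(0)

# Cell conductances (siemens) play the role of network weights:
# a 4-row x 3-column crossbar of memristor cells.
G = rng.uniform(1e-6, 1e-4, size=(4, 3))

# Input voltages applied to the 4 row lines.
V = np.array([0.2, 0.5, 0.1, 0.3])

# Ohm's law per cell plus Kirchhoff summation down each column
# gives one output current per column: an analog dot product.
I = V @ G
print(I.shape)   # one current per column line
```

In a physical array this multiply-accumulate happens in the analog domain with no data shuttled to a separate processor, which is the power advantage the abstract refers to; device variation then perturbs G, which is what the online learning algorithm must compensate for.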
Full abstract and speaker details can be found here: https://nus.edu/3cSFD3e.
Register for free for all our upcoming webinars at https://nus.edu/3bcN9pS.
We’ve all heard about the potential of artificial intelligence in the life sciences field. In 2020, the launch of AlphaFold 2, pioneered by Google DeepMind, took the world by storm and marked a new age in protein structure prediction. But now, AlphaFold 3 is transforming the landscape again. In this news highlight, we explore the new tech, compare it to its predecessor and look to the future.
Before the AI revolution, protein structure prediction heavily relied on experimental methods, such as X-ray crystallography, NMR spectroscopy and, later, some complex computational methods like homology modelling. These methods were time consuming and costly, and were a major limiting step in drug discovery and development processes in particular. For years, scientists have been attempting to integrate the latest and greatest AI models into the field, in order to speed up the process and improve accuracy.
Enter AlphaFold, an artificial intelligence tool developed by Google’s DeepMind. The first version of the technology was released in 2018, but it was 2020’s AlphaFold 2 that made headlines – winning the prestigious Critical Assessment of Structure Prediction (CASP) 14 competition. Having gone through multiple major iterations, the most recent release, AlphaFold 3, is set to further transform the protein space. But what does it do, and how might it outperform its predecessor?