Scientists have fused brain-like tissue with electronics to make an ‘organoid neural network’ that can recognise voices and solve a complex mathematical problem. Their invention extends neuromorphic computing – the practice of modelling computers after the human brain – to a new level by directly including brain tissue in a computer.

The system was developed by a team of researchers from Indiana University Bloomington; the University of Cincinnati and Cincinnati Children’s Hospital Medical Center in Cincinnati; and the University of Florida, Gainesville. Their findings were published on December 11.

Exploring pre-trained models for research often poses a challenge in Machine Learning (ML) and Deep Learning (DL). Visualizing the architecture of these models usually demands setting up the specific framework they were trained on, which can be quite laborious. Without this framework, comprehending the model’s structure becomes cumbersome for AI researchers.

Some tools do enable model visualization, but they require setting up the entire framework the model was trained with. This process can be time-consuming and intricate, hindering quick access to model architectures.

One open-source tool that simplifies the visualization of ML/DL models is Netron. It functions as a viewer designed specifically for neural networks, supporting formats such as TensorFlow Lite, ONNX, Caffe, and Keras. Netron bypasses the need to set up individual frameworks by directly presenting the model architecture, making it accessible and convenient for researchers.
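For instance, Netron ships with a small Python package that opens a model file in the browser without the original training framework installed. A minimal sketch, assuming `pip install netron` and a local ONNX file named `model.onnx` (the file name is illustrative, not from the article):

```python
import netron

# Start a local web server and open the model's architecture graph in the
# browser. No TensorFlow, PyTorch, or Caffe installation is needed to
# inspect the layers, shapes, and weights of the saved model.
netron.start("model.onnx")
```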

Researchers from the MRC Brain Network Dynamics Unit and Oxford University’s Department of Computer Science have set out a new principle to explain how the brain adjusts connections between neurons during learning. This new insight may guide further research on learning in brain networks and may inspire faster and more robust learning algorithms in artificial intelligence.

The essence of learning is to pinpoint which components in the information-processing pipeline are responsible for an error in its output. In machine learning, this is achieved by backpropagation: adjusting a model’s parameters to reduce the error in the output. Many researchers believe that the brain employs a similar learning principle.
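As a concrete illustration, here is a minimal NumPy sketch of that general idea (not the researchers’ model): backpropagation assigns each weight its share of the output error, then nudges every weight to reduce that error. All data and layer sizes below are toy values chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 3))   # 4 toy samples, 3 features (illustrative sizes)
y = rng.normal(size=(4, 1))   # 4 toy targets

W1 = 0.1 * rng.normal(size=(3, 5))  # input -> hidden weights
W2 = 0.1 * rng.normal(size=(5, 1))  # hidden -> output weights
lr = 0.1                            # learning rate

for step in range(100):
    # Forward pass through the information-processing pipeline.
    h = np.tanh(x @ W1)       # hidden activations
    y_hat = h @ W2            # network output
    err = y_hat - y           # error in the output

    # Backward pass: propagate the error back through the network to
    # pinpoint each weight's responsibility for it.
    grad_W2 = h.T @ err
    grad_h = err @ W2.T
    grad_W1 = x.T @ (grad_h * (1.0 - h**2))  # tanh'(z) = 1 - tanh(z)^2

    # Adjust the parameters a small step downhill to reduce the error.
    W1 -= lr * grad_W1
    W2 -= lr * grad_W2
```

Each pass of the loop makes the squared output error a little smaller; over many such passes the network’s parameters settle on values that fit the data.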

However, the biological brain remains superior to current machine learning systems in key respects. For example, we can learn new information after seeing it just once, while artificial systems must be trained hundreds of times on the same pieces of information to learn them. Furthermore, we can learn new information while retaining the knowledge we already have, whereas learning new information in artificial neural networks often interferes with existing knowledge and degrades it rapidly.

All these previous innovations pale in comparison to tools like Midjourney, DALL-E, and Adobe Firefly.

These generative AI image systems, the kind that can easily spit out an image of a flooded downtown Manhattan, are dream weavers that make the literal out of the imagined.

When Midjourney builds an image, there are no easily identifiable sources, mediums, or artists. Every pixel can look as imaginary or as real as you want, and when they leave the digital factory, these images (and videos) travel fleet-footed around the world, leaving truth waiting somewhere in the wilderness.

Samsung is planning to release what might be the most advanced cleaning robot yet: a robot vacuum and mop that will steam-clean floors and use AI to detect stains.

The upcoming “Bespoke Jet Bot Combo” cleaner will have a charging base that will automatically wash, clean, and dry the robot’s mop pads.

The device will also use AI to detect floor types and objects, and to spot stains. When the robot detects a stain, “it goes back to the clean station to heat the mop pads with high-temperature steam and water and then returns to the area,” says a press release on Samsung’s website.

“[People] like themselves just as they are,” says Marvin Minsky. “Perhaps they are not selfish enough, or imaginative, or ambitious. Myself, I don’t much like how people are now. We’re too shallow, slow, and ignorant. I hope that our future will lead us to ideas that we can use to improve ourselves.”

Marvin believes that it is important that we “understand how our minds are built, and how they support the modes of thought that we like to call emotions. Then we’ll be better able to decide what we like about them, and what we don’t—and bit by bit we’ll rebuild ourselves.”

Marvin Minsky is the leading light of AI—artificial intelligence, that is. He sees the brain as a myriad of structures. Scientists who, like Minsky, take the strong AI view believe that a computer model of the brain will be able to explain what we know of the brain’s cognitive abilities. Minsky identifies consciousness with high-level, abstract thought, and believes that in principle machines can do everything a conscious human being can do.