
Using machine learning, a computer model can teach itself to smell in just a few minutes. When it does, researchers have found, it builds a neural network that closely mimics the olfactory circuits that animal brains use to process odors.

Animals from fruit flies to humans all use essentially the same strategy to process olfactory information in the brain. But neuroscientists who trained an artificial neural network to take on a simple odor classification task were surprised to see it replicate biology’s strategy so faithfully.

When asked to classify odors, artificial neural networks adopt a structure that closely resembles that of the brain’s olfactory circuitry.
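
As a rough illustration of the kind of experiment described above (and not the researchers' actual model), the sketch below trains a small feedforward network on a synthetic odor-classification task; the receptor count, layer sizes, and data are illustrative assumptions.

```python
# Minimal sketch, assuming a toy setup: a small feedforward network
# learns to classify synthetic "odors" (noisy receptor-activation
# patterns). All sizes and data here are illustrative, not the study's.
import torch
import torch.nn as nn

n_receptors, n_hidden, n_classes = 50, 200, 10

# Each odor class is a prototype pattern of receptor activations;
# samples are noisy copies of those prototypes.
prototypes = torch.rand(n_classes, n_receptors)
labels = torch.randint(0, n_classes, (2048,))
samples = prototypes[labels] + 0.1 * torch.randn(2048, n_receptors)

model = nn.Sequential(
    nn.Linear(n_receptors, n_hidden),   # expansion layer
    nn.ReLU(),
    nn.Linear(n_hidden, n_classes),     # readout over odor classes
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(500):
    optimizer.zero_grad()
    loss = loss_fn(model(samples), labels)
    loss.backward()
    optimizer.step()

# The learned first-layer connectivity is what one would inspect for
# structure resembling biological olfactory circuits.
print(model[0].weight.shape)  # torch.Size([200, 50])
```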

If the properties of materials can be reliably predicted, then the process of developing new products for a huge range of industries can be streamlined and accelerated. In a study published in Advanced Intelligent Systems, researchers from The University of Tokyo Institute of Industrial Science combined core-loss spectroscopy with machine learning to determine the properties of organic molecules.

The spectroscopy techniques energy-loss near-edge structure (ELNES) and X-ray absorption near-edge structure (XANES) are used to determine information about the electrons, and through them the atoms, in materials. They have high sensitivity and high resolution and have been used to investigate a range of materials, from electronic devices to drug delivery systems.

However, connecting spectral data to the properties of a material—things like optical properties, electron conductivity, density, and stability—remains ambiguous. Machine learning (ML) approaches have been used to extract information from large, complex data sets. Such approaches use artificial neural networks, which are based on how our brains work, to progressively learn to solve problems. Although the group previously used ELNES/XANES spectra and ML to extract information about materials, what they found did not relate to the properties of the material itself, so the information could not easily be translated into practical developments.
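
To make the spectra-to-properties idea concrete, here is a minimal sketch, assuming a purely synthetic data set, of a regression network that maps a discretized spectrum to a single property value; the spectrum length, target, and architecture are placeholders, not those used in the Advanced Intelligent Systems study.

```python
# Minimal sketch, assuming synthetic data: map a discretized spectrum
# (e.g. ELNES/XANES intensities on an energy grid) to one property value.
# The spectrum length, target definition, and model are placeholders.
import torch
import torch.nn as nn

n_energy_bins, n_molecules = 256, 1024

spectra = torch.rand(n_molecules, n_energy_bins)
# Stand-in "property" so the example runs; real targets would be
# measured or computed quantities such as density or stability.
targets = spectra.mean(dim=1, keepdim=True)

model = nn.Sequential(
    nn.Linear(n_energy_bins, 128),
    nn.ReLU(),
    nn.Linear(128, 1),            # regress a single material property
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(300):
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(spectra), targets)
    loss.backward()
    optimizer.step()

print(float(loss))  # training error on the synthetic task
```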

1:42 Are we on the wrong train to AGI?
4:20 Marvin Minsky and AI generalization problem.
11:57 Defining intelligence in AI
17:17 Is AI masquerading as a trendy statistical analysis tool?
23:35 AI systems lack our most basic intuitions.
27:38 The public not wanting to face Reality.
29:36 Equipping AI with Kant’s categories of the mind (Time, Space, Causality)
33:40 Neural nets VS traditional tools.
34:50 Causality in AI
37:14 Lack of interdisciplinary learning.
45:54 How can we achieve human level of understanding in AI?
49:21 More limitations.
59:35 Motivation in inanimate systems.
1:01:31 Lack of body and transcendent consciousness.
1:05:55 What interdisciplinary learning would you encourage?
1:06:49 Book recommendations.

Gary Marcus is CEO and Founder of Robust AI, a well-known machine learning scientist and entrepreneur, author, and Professor Emeritus at New York University.

Dr. Marcus attended Hampshire College, where he designed his own major, cognitive science, working on human reasoning. He continued on to graduate school at Massachusetts Institute of Technology, where his advisor was the experimental psychologist Steven Pinker. He received his Ph.D. in 1993.

His books include The Algebraic Mind: Integrating Connectionism and Cognitive Science; The Birth of the Mind: How a Tiny Number of Genes Creates the Complexities of Human Thought; Kluge: The Haphazard Construction of the Human Mind, a New York Times Editors’ Choice; and Guitar Zero, which appeared on the New York Times Bestseller list. He edited The Norton Psychology Reader and was co-editor, with Jeremy Freeman, of The Future of the Brain: Essays by the World’s Leading Neuroscientists, which included Nobel Laureates May-Britt Moser and Edvard Moser. Together with Ernie Davis, he authored Rebooting AI, and he is well known for deconstructing myths of the AI community.

Neural Implant Podcast w/ Ladan Jiracek: https://open.spotify.com/show/7qzl8f0yllPaYlBmW9CX3u?si=aXiWglMkR8Wkw8YGLD81nQ

Neura Pod is a series covering topics related to Neuralink, Inc. Topics such as brain-machine interfaces, brain injuries, and artificial intelligence will be explored. Host Ryan Tanaka synthesizes information and opinions, and conducts interviews, to make it easy to learn about Neuralink and its future.

Most people aren’t aware of what the company does, or how it does it. If you know other people who are curious about what Neuralink is doing, this is a nice summary episode to share. Tesla, SpaceX, and the Boring Company are going to have to get used to their newest sibling. Neuralink is going to change how humans think, act, learn, and share information.

Convolutional neural networks running on quantum computers have generated significant buzz for their potential to analyze quantum data better than classical computers can. While a fundamental solvability problem known as “barren plateaus” has limited the application of these neural networks for large data sets, new research overcomes that Achilles heel with a rigorous proof that guarantees scalability.

“The way you construct a quantum neural network can lead to a barren plateau—or not,” said Marco Cerezo, co-author of the paper titled “Absence of Barren Plateaus in Quantum Convolutional Neural Networks,” published today by a Los Alamos National Laboratory team in Physical Review X. Cerezo is a physicist at Los Alamos specializing in quantum computing, quantum machine learning, and quantum information. “We proved the absence of barren plateaus for a special type of quantum neural network. Our work provides trainability guarantees for this architecture, meaning that one can generically train its parameters.”

As an artificial intelligence (AI) methodology, quantum convolutional neural networks are inspired by the visual cortex. As such, they involve a series of convolutional layers, or filters, interleaved with pooling layers that reduce the dimension of the data while keeping important features of a data set.
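
For readers unfamiliar with that convolution-plus-pooling pattern, the classical sketch below shows the structure in miniature; it is an ordinary 1-D convolutional network, not a quantum circuit and not the Los Alamos architecture.

```python
# Classical analogue only: convolutional filter banks interleaved with
# pooling layers that halve the data length while keeping salient features.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv1d(1, 8, kernel_size=3, padding=1),   # filters over the signal
    nn.ReLU(),
    nn.MaxPool1d(2),                             # length 64 -> 32
    nn.Conv1d(8, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool1d(2),                             # length 32 -> 16
    nn.Flatten(),
    nn.Linear(16 * 16, 2),                       # classify reduced features
)

x = torch.randn(4, 1, 64)   # batch of 4 one-channel signals of length 64
print(model(x).shape)       # torch.Size([4, 2])
```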

A truck fleet accident costs an average of $16,500 in damages and $57,500 in injury-related costs, for a total of $74,000. “This does not include a broad range of ‘hidden’ costs, including reduced vehicle value (typically anywhere from $500 to $2,000), higher insurance premium, legal fees, driver turnover (the average driver replacement cost = $8,200), lost employee time, lost vehicle-use time, administrative burden, reduced employee morale and bad publicity,” said Yoav Banin, chief product officer at Nauto, which provides artificial intelligence-based driver and fleet performance solutions.
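
A quick back-of-the-envelope check of the figures quoted above; the hidden-cost entries below are illustrative placeholders taken from the ranges mentioned, not Nauto's own estimates.

```python
# Per-accident cost arithmetic from the quoted figures; hidden costs are
# illustrative placeholders, not actual Nauto estimates.
damage_cost = 16_500
injury_cost = 57_500
direct_total = damage_cost + injury_cost
print(direct_total)  # 74000

hidden_costs = {
    "reduced_vehicle_value": 2_000,   # quoted range: $500 to $2,000
    "driver_replacement": 8_200,      # average replacement cost cited above
}
print(direct_total + sum(hidden_costs.values()))  # 84200
```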

Emphasis on truck driving safety is well placed, considering other challenges that the trucking industry is facing.

Ranking first is a chronic shortage of truck drivers nationwide that could force fleet operators to hire less-experienced drivers who require operator and safety training. Driver compensation and truck parking ranked second and third, but immediately behind them, in fourth and fifth position, were truck fleet driver safety and insurance availability, which depends on safe driving records.

Artificial intelligence expert Timnit Gebru on the challenges researchers can face at Big Tech companies, and how to protect workers and their research.

Artificial intelligence research leads to new cutting-edge technologies, but it’s expensive. Big Tech companies, which are powered by AI and have deep pockets, often take on this work — but that gives them the power to censor or impede research that casts them in an unfavorable light, according to Timnit Gebru, a computer scientist, co-founder of the nonprofit organization Black in AI and the former co-leader of Google’s Ethical AI team.

The situation imperils both the rights of AI workers at those companies and the quality of research that is shared with the public, said Gebru, speaking at the recent EmTech MIT conference hosted by MIT Technology Review.

China is pulling ahead of global rivals when it comes to innovative AI “unicorns” that are pushing the technology forward. Research from GlobalData has found that — of the 45 international AI unicorns identified — China has the largest share with 19 based in the country.

Collectively, the Chinese AI unicorns are valued at $43.5 billion.

Beijing has been on a regulatory crackdown in recent months, especially on Chinese companies doing business in, and with, the US.

Robotaxi firm Didi, for example, was targeted by Chinese authorities following its $4.4 billion listing on the New York Stock Exchange (NYSE). Chinese regulators forced Apple to remove Didi from the App Store, while other app stores operating in China were also ordered not to serve Didi’s app.

Despite the crackdowns, AI development in China has remained strong.

It’s been a hot, hot year in the world of data, machine learning, and AI. Just when you thought it couldn’t grow any more explosively, the data/AI landscape just did: the rapid pace of company creation, exciting new product and project launches, a deluge of VC financings, unicorn creation, IPOs, etc.

It has also been a year of multiple threads and stories intertwining.

One story has been the maturation of the ecosystem, with market leaders reaching large scale and ramping up their ambitions for global market domination, in particular through increasingly broad product offerings. Some of those companies, such as Snowflake, have been thriving in public markets (see our MAD Public Company Index), and a number of others (Databricks, Dataiku, DataRobot, etc.) have raised very large (or, in the case of Databricks, gigantic) rounds at multi-billion-dollar valuations and are knocking on the IPO door (see our Emerging MAD Company Index).