New study investigates photonics for artificial intelligence and neuromorphic computing

Scientists have provided a fascinating new insight into the next steps for developing fast, energy-efficient future computing systems that use light instead of electrons to process and store information, incorporating hardware inspired directly by the functioning of the human brain.

A team of scientists, including Professor C. David Wright from the University of Exeter, has explored the future potential of computer systems that use photonics in place of conventional electronics.

The article is published today (January 29th 2021) in the prestigious journal Nature Photonics.

‘Nanomagnetic’ computing can provide low-energy AI, researchers show

Researchers have shown it is possible to perform artificial intelligence using tiny nanomagnets that interact like neurons in the brain.

The new method, developed by a team led by Imperial College London researchers, could slash the energy cost of artificial intelligence (AI), which is currently doubling globally every 3.5 months.

In a paper published today in Nature Nanotechnology, the international team has produced the first proof that networks of nanomagnets can be used to perform AI-like processing. The researchers showed nanomagnets can be used for ‘time-series prediction’ tasks, such as predicting and regulating insulin levels in diabetic patients.
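The excerpt doesn't describe the method, but nanomagnet arrays like these are typically operated as physical reservoirs in a reservoir-computing scheme: the input drives a fixed, nonlinear dynamical system, and only a simple linear readout is trained. Here is a minimal software sketch of that principle, with a random recurrent network standing in for the nanomagnet array; every size and parameter below is an illustrative assumption, not the paper's setup.

```python
# Reservoir-computing sketch: a fixed random network plays the role of
# the physical (nanomagnet) reservoir; only the linear readout is trained.
import numpy as np

rng = np.random.default_rng(0)

# Toy time series: noisy sine wave; the task is one-step-ahead prediction.
t = np.arange(2000)
u = np.sin(0.05 * t) + 0.05 * rng.standard_normal(t.size)

n_res = 200                                      # reservoir size (illustrative)
W_in = rng.uniform(-0.5, 0.5, n_res)             # fixed input weights
W = rng.standard_normal((n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # keep spectral radius < 1

# Drive the fixed nonlinear reservoir with the input signal.
x = np.zeros(n_res)
states = np.empty((t.size, n_res))
for i, ui in enumerate(u):
    x = np.tanh(W_in * ui + W @ x)
    states[i] = x

# Train only a linear readout (ridge regression) to predict u[t+1].
washout, lam = 100, 1e-4
X, y = states[washout:-1], u[washout + 1:]
W_out = np.linalg.solve(X.T @ X + lam * np.eye(n_res), X.T @ y)

pred = states[:-1] @ W_out
print("MSE:", np.mean((pred[washout:] - u[washout + 1:]) ** 2))
```

The energy argument for hardware reservoirs is that the costly part, the nonlinear dynamics, happens for free in the physical substrate; training touches only the cheap linear readout.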

Drone swarms can now fly autonomously through thick forest

A swarm of 10 bright blue drones lifts off in a bamboo forest in China, then swerves its way between cluttered branches, bushes and over uneven ground as it autonomously navigates the best flight path through the woods.

The experiment, led by scientists at Zhejiang University, evokes scenes from science fiction, and the authors in fact cite films such as “Star Wars,” “Prometheus” and “Blade Runner 2049” in the opening of their paper published Wednesday in the journal Science Robotics.

“Here, we take a step forward (to) such a future,” wrote the team, led by Xin Zhou.

30 years after Intelsat VI rescue, Northrop Grumman aims to make in-space servicing a permanent reality

On 7 May 1992, Space Shuttle Endeavour lifted off on her first voyage at 23:40 UTC from Pad-B at the Kennedy Space Center in Florida. Her target: Intelsat VI F-3 (now known as Intelsat 603). The goal: rendezvous with, repair, and re-release the satellite.

In the 30 years since that mission, on-orbit satellite repair and servicing have largely languished, save for the five Hubble servicing missions Endeavour and the Shuttle fleet would conduct after STS-49.

Northrop Grumman now aims to change that in 2024, when its new Mission Robotic Vehicle and Mission Extension Pods begin launching to perform on-orbit satellite servicing and repairs.

Research Links Investment In Automation To Rising Mortality Rates

“We provide a lot of evidence to bolster the case that this is a causal relationship, and it is driven by precisely the industries that are most affected by aging and have opportunities for automating work.”

“For decades, manufacturers in the United States have turned to automation to remain competitive in a global marketplace, but this technological innovation has reduced the number of quality jobs available to adults without a college degree—a group that has faced increased mortality in recent years.”

In a recent article, I examined research from MIT that showed how investment in technologies, such as robotics, is often made to compensate for an aging workforce.

“Demographic change—aging—is one of the most important factors leading to the adoption of robotics and other automation technologies,” the researchers explain.

Indeed, when it comes to the adoption of technologies such as robotics, the authors argue that the demographics of the population account for up to 35% of the variation between countries. What’s more, a similar phenomenon appears to be occurring within countries too, with metro areas in the United States that are aging faster adopting automation technologies more quickly than areas that are aging more slowly.

Blood Test Analysis: Italian Centenarians

Biomarker timestamps:
Glucose 1:37
HDL 2:43
Triglycerides 4:10
RBCs, Hemoglobin 5:29
Platelets 7:16
Uric Acid 8:37
AST, ALT 11:04
Total Cholesterol 13:55
WBCs 15:47
Total Protein, Albumin, Globulin 17:38
Creatinine 21:27
BUN 22:35

Papers referenced in the video:
Laboratory parameters in centenarians of Italian ancestry.
https://pubmed.ncbi.nlm.nih.gov/17681733/

Risk factors for hyperuricemia in Chinese centenarians and near-centenarians.
https://pubmed.ncbi.nlm.nih.gov/31908434/

Fasting glucose level and all-cause or cause-specific mortality in Korean adults: a nationwide cohort study.
https://pubmed.ncbi.nlm.nih.gov/32623847/

High-density lipoprotein cholesterol and all-cause mortality by sex and age: a prospective cohort study among 15.8 million adults.

Hyundai has kicked off production of novel 4×4 vehicles with robotic legs

They may be the only cars to go more viral than Cybertruck.

Remember that Hyundai concept 4×4 with robotic legs?

The South Korean automotive giant actually means to make it a reality, and it has announced a development and test facility in Montana for that very purpose.

Hyundai plans to invest $20 million over the next five years in its New Horizons Studio, which will employ 50 people.

It will be located at Montana State University’s Innovation Campus in Bozeman, Montana, and will be a unit focused on the development of Ultimate Mobility Vehicles.


Using Optomemristors To Light Up Artificial Neural Networks

Artificial intelligence and machine learning hardware research have concentrated on building photonic synapses and neurons and combining them to do fundamental forms of neural-type processing. However, complex processing methods found in human brains—such as reinforcement learning and dendritic computation—are more challenging to replicate directly in hardware.

In the mammalian brain, diverse biophysical mechanisms govern the functions of neurons and synapses, enabling complex learning and processing. A new study contributes to closing the “hardware gap” by creating an “Optomemristor” device that responds to multiple electronic and photonic inputs at the same time.

The chalcogenide thin-film technology interacts with both light and electrical impulses to mimic multifactor biological computations in mammalian brains while consuming very little energy.
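The study itself is not detailed here, but a rough software analogy for a device that learns from several simultaneous input factors is a three-factor (reward-modulated Hebbian) update, in which a modulatory signal, standing in for the optical input, gates an otherwise local electrical weight change. The sketch below illustrates the form of such an update on a toy task; the names, sizes, and task are invented, and this is not the study's device model.

```python
# Toy three-factor learning rule: the weight change depends jointly on a
# presynaptic (electrical) signal, the postsynaptic response, and a third
# modulatory ("optical") reward factor. Illustrative only.
import numpy as np

rng = np.random.default_rng(1)
n_in = 8
w = rng.uniform(0.1, 0.3, n_in)   # weights, read as device conductances
eta = 0.05                        # learning rate

for step in range(500):
    pre = (rng.random(n_in) < 0.2).astype(float)  # presynaptic spikes
    post = float(w @ pre > 0.5)                   # postsynaptic response
    target = float(pre[:4].sum() >= 2)            # arbitrary toy objective
    reward = 1.0 if post == target else -1.0      # modulatory third factor
    # Three-factor update: pre, post, and reward enter multiplicatively;
    # the clip keeps weights inside a bounded "conductance" range.
    w = np.clip(w + eta * reward * pre * (post - 0.5), 0.0, 1.0)

print("final weights:", np.round(w, 2))
```

The point of the sketch is the update's shape, not its performance: because the three factors enter multiplicatively, a device that senses electrical and optical inputs at once can realize the whole rule locally, at the synapse.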

Meta wants to improve its AI

Would start with scanning and reverse engineering brains of rats, crows, pigs, chimps, and end on the human brain. Aim for completion by 12/31/2025. Set up teams to run brain scans 24/7/365 if we need to, and partner w/ every major neuroscience lab in the world.


If artificial intelligence is intended to resemble a brain, with networks of artificial neurons substituting for real cells, then what would happen if you compared the activities in deep learning algorithms to those in a human brain? Last week, researchers from Meta AI announced that they would be partnering with neuroimaging center Neurospin (CEA) and INRIA to try to do just that.

Through this collaboration, they’re planning to analyze human brain activity and deep learning algorithms trained on language or speech tasks in response to the same written or spoken texts. In theory, it could decode how both human brains and artificial brains find meaning in language.

By comparing scans of human brains while a person is actively reading, speaking, or listening with deep learning algorithms given the same set of words and sentences to decipher, researchers hope to find similarities as well as key structural and behavioral differences between brain biology and artificial networks. The research could help explain why humans process language much more efficiently than machines.
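The article doesn’t specify the comparison pipeline, but a common approach in this line of work is a linear “brain score”: fit a ridge regression from the model’s per-stimulus activations to each voxel’s response, then report the correlation on held-out stimuli. The sketch below runs that procedure on synthetic data; the array shapes, penalty, and data are all invented for illustration, and this is not necessarily the team’s exact method.

```python
# "Brain score" sketch: linearly map model activations to voxel responses,
# then score the held-out correlation. All data here are synthetic.
import numpy as np

rng = np.random.default_rng(2)
n_stim, n_feat, n_vox = 200, 64, 10

acts = rng.standard_normal((n_stim, n_feat))            # model activations
true_map = rng.standard_normal((n_feat, n_vox)) * 0.3
brain = acts @ true_map + rng.standard_normal((n_stim, n_vox))  # fake fMRI

train, test = slice(0, 150), slice(150, None)
lam = 10.0                                              # ridge penalty
A = acts[train]
B = np.linalg.solve(A.T @ A + lam * np.eye(n_feat), A.T @ brain[train])

pred = acts[test] @ B
# Per-voxel Pearson correlation between predicted and observed responses.
p = (pred - pred.mean(0)) / pred.std(0)
o = (brain[test] - brain[test].mean(0)) / brain[test].std(0)
print("mean brain score:", float((p * o).mean(0).mean()))
```

A high held-out correlation for a given brain region and model layer is then read as evidence that the two represent the stimuli similarly.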

Computers ace IQ tests but still make dumb mistakes. Can different tests help?

AI researchers are creating novel “benchmarks” to help models avoid real-world stumbles.


Trained on billions of words from books, news articles, and Wikipedia, artificial intelligence (AI) language models can produce uncannily human prose. They can generate tweets, summarize emails, and translate dozens of languages. They can even write tolerable poetry. And like overachieving students, they quickly master the tests, called benchmarks, that computer scientists devise for them.

That was Sam Bowman’s sobering experience when he and his colleagues created a tough new benchmark for language models called GLUE (General Language Understanding Evaluation). GLUE gives AI models the chance to train on data sets containing thousands of sentences and confronts them with nine tasks, such as deciding whether a test sentence is grammatical, assessing its sentiment, or judging whether one sentence logically entails another. After completing the tasks, each model is given an average score.
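For concreteness, the GLUE average works roughly like this: each task reports one or two metrics (accuracy, F1, Matthews or Pearson/Spearman correlation), metrics within a task are averaged first, and the benchmark score is the mean across the nine tasks. The per-task numbers in this sketch are invented for illustration.

```python
# GLUE-style scoring: average metrics within each task, then average
# across tasks. The scores below are made up for illustration.
task_metrics = {
    "CoLA":  [0.52],        # Matthews correlation
    "SST-2": [0.93],        # accuracy
    "MRPC":  [0.88, 0.84],  # F1, accuracy
    "STS-B": [0.87, 0.86],  # Pearson, Spearman
    "QQP":   [0.72, 0.89],  # F1, accuracy
    "MNLI":  [0.84, 0.83],  # matched, mismatched accuracy
    "QNLI":  [0.91],        # accuracy
    "RTE":   [0.66],        # accuracy
    "WNLI":  [0.56],        # accuracy
}

task_scores = {t: sum(m) / len(m) for t, m in task_metrics.items()}
glue_average = 100 * sum(task_scores.values()) / len(task_scores)
print(f"GLUE average: {glue_average:.1f}")  # -> 77.2 for these numbers
```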

At first, Bowman, a computer scientist at New York University, thought he had stumped the models. The best ones scored less than 70 out of 100 points (a D+). But in less than 1 year, new and better models were scoring close to 90, outperforming humans. “We were really surprised with the surge,” Bowman says. So in 2019 the researchers made the benchmark even harder, calling it SuperGLUE. Some of the tasks required the AI models to answer reading comprehension questions after digesting not just sentences, but paragraphs drawn from Wikipedia or news sites. Again, humans had an initial 20-point lead. “It wasn’t that shocking what happened next,” Bowman says. By early 2021, computers were again beating people.
