
Summary: Combining artificial intelligence, mathematical modeling, and brain imaging data, researchers shed light on the neural processes that occur when people use mental abstraction.

Source: UCL

By combining mathematical modeling, machine learning, and brain imaging technology, researchers have discovered what happens in the brain when people use mental abstraction.

And here is a talk on brain interfaces from Neuralink's Shivon Zilis.


In this talk, Shivon Zilis, Project Director at Neuralink, introduces her team's work and the ethical implications of implanting technology in the brain. We learn about the latest developments in Neuralink's technology and the goals the team keeps in mind when designing and implementing technology that could change the way humans interact with and understand technology.

Visit https://cucai.ca/ to learn more.




The idea is to offer predictions of the structure of practically every protein with a known amino acid sequence, free of charge. "We believe that this is the most important contribution artificial intelligence has made to scientific knowledge to date," DeepMind CEO Demis Hassabis said following the publication of the company's research in the journal Nature.


DeepMind, the artificial intelligence company acquired by Google, has predicted with unprecedented precision the 3D structures of nearly all the proteins made by the human body.
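To make "free of charge" concrete: the predictions are distributed through the public AlphaFold Protein Structure Database at https://alphafold.ebi.ac.uk. The snippet below is a minimal sketch of how one might retrieve a predicted structure for a given UniProt accession; the endpoint path, the `pdbUrl` response field, and the example accession P69905 (human hemoglobin subunit alpha) are assumptions based on the publicly documented service, not a guaranteed API contract, so check the current API documentation before relying on them.

```python
# Minimal sketch: fetch an AlphaFold-predicted structure from the public
# AlphaFold Protein Structure Database (https://alphafold.ebi.ac.uk).
# Assumptions: the /api/prediction/{accession} endpoint and a 'pdbUrl'
# field in its JSON response; verify both against the current API docs.
import requests


def fetch_predicted_structure(uniprot_accession: str, out_path: str) -> None:
    api_url = f"https://alphafold.ebi.ac.uk/api/prediction/{uniprot_accession}"
    response = requests.get(api_url, timeout=30)
    response.raise_for_status()
    entries = response.json()          # expected: a list of prediction records
    pdb_url = entries[0]["pdbUrl"]     # assumed field name for the PDB file URL
    pdb = requests.get(pdb_url, timeout=30)
    pdb.raise_for_status()
    with open(out_path, "wb") as fh:
        fh.write(pdb.content)


if __name__ == "__main__":
    # P69905 is the UniProt accession for human hemoglobin subunit alpha.
    fetch_predicted_structure("P69905", "AF-P69905-model.pdb")
```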

Rick Osterloh casually dropped his laptop onto the couch and leaned back, satisfied. It’s not a mic, but the effect is about the same. Google’s chief of hardware had just shown me a demo of the company’s latest feature: computational processing for video that will debut on the Pixel 6 and Pixel 6 Pro. The feature was only possible with Google’s own mobile processor, which it’s announcing today.

He’s understandably proud and excited to share the news. The chip is called Tensor, and it’s the first system-on-chip (SoC) designed by Google. The company has “been at this about five years,” he said, though CEO Sundar Pichai wrote in a statement that Tensor “has been four years in the making and builds off of two decades of Google’s computing experience.”

That software expertise is something Google has come to be known for. It led the way in computational photography with its Night Sight mode for low light shots, and weirded out the world with how successfully its conversational AI Duplex was able to mimic human speech — right down to the “ums and ahs.” Tensor both leverages Google’s machine learning prowess and enables the company to bring AI experiences to smartphones that it couldn’t before.
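For a sense of what "computational photography" means at its simplest, the toy sketch below merely averages a burst of noisy low-light frames, which reduces random sensor noise. It is an illustrative simplification under my own assumptions, not Night Sight's actual pipeline, which also performs frame alignment, raw-domain merging, tone mapping, and learned processing.

```python
# Toy illustration of one idea behind low-light computational photography:
# averaging a burst of noisy frames reduces random sensor noise roughly by
# the square root of the number of frames. This is NOT Google's Night Sight
# algorithm, only a simplified sketch of the underlying principle.
import numpy as np


def merge_burst(frames: list[np.ndarray]) -> np.ndarray:
    """Average a burst of same-sized frames (H x W x C, floats in [0, 1])."""
    stack = np.stack(frames, axis=0)
    return stack.mean(axis=0)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    clean = np.full((4, 4, 3), 0.2)  # a dim, uniform "scene"
    burst = [np.clip(clean + rng.normal(0, 0.1, clean.shape), 0.0, 1.0)
             for _ in range(8)]
    merged = merge_burst(burst)
    # The merged frame's deviation from the clean scene is much smaller
    # than any single frame's.
    print(float(np.std(burst[0] - clean)), float(np.std(merged - clean)))
```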