
Artificial neural networks are famously inspired by their biological counterparts. Yet compared to human brains, these algorithms are highly simplified, even “cartoonish.”

Can they teach us anything about how the brain works?

For a panel at the Society for Neuroscience annual meeting this month, the answer is yes. Deep learning wasn’t meant to model the brain. In fact, it contains elements that are biologically improbable, if not utterly impossible. But that’s not the point, argues the panel. By studying how deep learning algorithms perform, we can distill high-level theories for the brain’s processes—inspirations to be further tested in the lab.

Most human diseases can be traced to malfunctioning parts of a cell: a tumor grows because a gene wasn’t accurately translated into a particular protein, for example, or a metabolic disease arises because mitochondria aren’t firing properly. But to understand what parts of a cell can go wrong in a disease, scientists first need a complete list of parts.

By combining microscopy, biochemistry techniques, and artificial intelligence, researchers at University of California San Diego School of Medicine and collaborators have taken what may prove to be a significant leap forward in the understanding of human cells.

The technique, known as Multi-Scale Integrated Cell (MuSIC), was described on November 24, 2021, in Nature.

Sooo… the inimitable Russell Brand posted a video a few weeks ago saying some amusing but largely inaccurate and misleading things about the Grace humanoid eldercare robot we’re making in our Awakening Health project (http://awakening.health).

Russell’s video is here: https://www.youtube.com/watch?v=SDD7M1OWBDg.

I recorded this video as a sort of response, to set the record straight a bit and explain why Russell is wrong about Grace, what the Awakening Health project actually is, and what the motivations behind it are!

The three main points I make in the video (but with more color and detail, so do watch the video if you’re interested!!) are:

Research led by UT Southwestern and the University of Washington could lead to a wealth of drug targets.

UT Southwestern and University of Washington researchers led an international team that used artificial intelligence (AI) and evolutionary analysis to produce 3D models of eukaryotic protein interactions. The study, published in Science, identified more than 100 probable protein complexes for the first time and provided structural models for more than 700 previously uncharacterized ones. Insights into the ways pairs or groups of proteins fit together to carry out cellular processes could lead to a wealth of new drug targets.

“Our results represent a significant advance in the new era in structural biology in which computation plays a fundamental role,” said Qian Cong, Ph.D., Assistant Professor in the Eugene McDermott Center for Human Growth and Development with a secondary appointment in Biophysics.

Tesla has posted new jobs for its Tesla Bot team on its Careers page. Most of the Tesla Bot positions are based in California, with one in Austin, Texas.

A few of the openings have been posted for quite some time. Tesla has been steadily posting jobs for the Tesla Bot team since the project was announced during Artificial Intelligence or AI Day back in August. Most of the new jobs seem to be related to software development for the Tesla Bot, hinting at the company’s progress with the humanoid robot.

The new Tesla Bot jobs are listed below with their responsibilities.

SenseTime, one of China’s biggest AI solution providers, is a step closer to its initial public offering. SenseTime has received regulatory approval to list on the Hong Kong Stock Exchange, according to media reports. Founded in 2014, SenseTime was christened as one of China’s four “AI Dragons” alongside Megvii, CloudWalk, and Yitu. In the second half of the 2010s, their algorithms found much demand from businesses and governments hoping to turn real-life data into actionable insights. Cameras embedded with their AI models watch city streets around the clock. Malls use their sensing solutions to track and predict crowds on the premises.

SenseTime’s three rivals have all mulled plans to sell shares either in mainland China or Hong Kong. Megvii is preparing to list on China’s Nasdaq-style STAR board after its HKEX application lapsed.

The window for China’s data-rich tech firms to list overseas has narrowed. Beijing is making it harder for companies with sensitive data to go public outside China. And regulators in the West are wary of facial recognition companies that could aid mass surveillance.

But in the past few years, China’s AI upstarts were sought after by investors all over the world. In 2018 alone, SenseTime racked up more than $2 billion in investment. To date, the company has raised a staggering $5.2 billion in funding through 12 rounds. Its biggest outside shareholders include SoftBank Vision Fund and Alibaba’s Taobao. For its flotation in Hong Kong, SenseTime plans to raise up to $2 billion, according to Reuters.


Rarely does scientific software spark such sensational headlines. “One of biology’s biggest mysteries ‘largely solved’ by AI”, declared the BBC. Forbes called it “the most important achievement in AI — ever”. The buzz over the November 2020 debut of AlphaFold2, Google DeepMind’s artificial-intelligence (AI) system for predicting the 3D structure of proteins, has only intensified since the tool was made freely available in July.

The excitement relates to the software’s potential to solve one of biology’s thorniest problems — predicting the functional, folded structure of a protein molecule from its linear amino-acid sequence, right down to the position of each atom in 3D space. The underlying physicochemical rules for how proteins form their 3D structures remain too complicated for humans to parse, so this ‘protein-folding problem’ has remained unsolved for decades.

Researchers have worked out the structures of around 160,000 proteins from all kingdoms of life, using experimental techniques such as X-ray crystallography and cryo-electron microscopy (cryo-EM), and depositing their 3D information in the Protein Data Bank. Computational biologists have made steady gains in developing software that complements these methods, and have correctly predicted the 3D shapes of some molecules from well-studied protein families.