
AI and machine learning systems have proven a boon to scientific research in a variety of academic fields in recent years. They’ve assisted scientists in identifying candidates ripe for cutting-edge treatments, in the discovery of potent new compounds, and more. Throughout this period, however, AI/ML systems have often been relegated to simply processing large data sets and performing brute-force computations, not leading the research themselves.

But Dr. Hiroaki Kitano, CEO of Sony Computer Science Laboratories, envisions a “hybrid form of science that shall bring systems biology and other sciences into the next stage” by creating an AI that’s just as capable as today’s top scientific minds. To do so, Kitano seeks to launch the Nobel Turing Challenge.

“The distinct characteristic of this challenge is to field the system into an open-ended domain to explore significant discoveries rather than rediscovering what we already know or trying to mimic speculated human thought processes,” Kitano said. “The vision is to reformulate scientific discovery itself and to create an alternative form of scientific discovery.”



Tesla has started to hire roboticists to build its recently announced “Tesla Bot,” a humanoid robot intended to become a new vehicle for its AI technology.

When Elon Musk explained the rationale behind Tesla Bot, he argued that Tesla was already making most of the components needed to create a humanoid robot equipped with artificial intelligence.

The automaker’s computer vision system, developed for self-driving cars, could be leveraged for use in the robot, which could also draw on components like Tesla’s battery systems and sensor suite.

A Tesla semi-truck with very Tesla-worthy aesthetics, highlighted by a contoured yet sharp design language that in a way reminds me of the iPhone 12!

Tesla’s visionary Semi, an all-electric truck powered by four independent rear motors, is scheduled for production in 2022. The Semi is touted to be the safest, most comfortable truck on the road, accelerating from 0 to 60 mph in just 20 seconds while hauling a full load, with a range of 300–500 miles. While the prototype version looks absolutely badass, how the final version will look is anybody’s guess.

Proteins are essential to life, and understanding their 3D structure is key to unpicking their function. To date, only 17% of the human proteome is covered by an experimentally determined structure. Two papers in this week’s issue dramatically expand our structural understanding of proteins. Researchers at DeepMind, Google’s London-based sister company, present the latest version of their AlphaFold neural network. Using an entirely new architecture informed by intuitions about protein physics and geometry, it makes highly accurate structure predictions, and was recognized at the 14th Critical Assessment of Techniques for Protein Structure Prediction last December as a solution to the long-standing problem of protein-structure prediction. The team applied AlphaFold to 20,296 proteins, representing 98.5% of the human proteome.
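For readers who want to explore the predictions themselves, the results are hosted in the AlphaFold Protein Structure Database (https://alphafold.ebi.ac.uk). Below is a minimal Python sketch of fetching one predicted structure by UniProt accession; the exact file URL pattern and model version suffix are assumptions that may have changed since publication.

```python
# Sketch: fetch an AlphaFold-predicted structure from the AlphaFold
# Protein Structure Database. The URL pattern and version suffix are
# assumptions; check the database's download documentation.
import urllib.request

def fetch_alphafold_pdb(uniprot_id: str, version: int = 1) -> str:
    """Download the predicted structure (PDB format) for a UniProt accession."""
    url = (
        "https://alphafold.ebi.ac.uk/files/"
        f"AF-{uniprot_id}-F1-model_v{version}.pdb"
    )
    with urllib.request.urlopen(url) as resp:
        return resp.read().decode("utf-8")

if __name__ == "__main__":
    # P69905 is human hemoglobin subunit alpha.
    pdb_text = fetch_alphafold_pdb("P69905")
    print(pdb_text.splitlines()[0])  # first record of the PDB file
```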

Determining the 3D shapes of biological molecules is one of the hardest problems in modern biology and medical discovery. Companies and research institutions often spend millions of dollars to determine a molecular structure—and even such massive efforts are frequently unsuccessful.

Using clever new machine learning techniques, Stanford University Ph.D. students Stephan Eismann and Raphael Townshend, under the guidance of Ron Dror, associate professor of computer science, have developed an approach that overcomes this problem by predicting accurate structures computationally.

Most notably, their approach succeeds even when learning from only a few known structures, making it applicable to the types of molecules whose structures are most difficult to determine experimentally.
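To give a flavor of what predicting structures computationally can look like in code, here is a toy sketch of a learning-based structure scorer: a network that takes raw atom coordinates and types and predicts how far a candidate model is from the native structure (e.g., its RMSD). This is not the Stanford team’s actual architecture, which uses geometry-aware networks; every name, shape, and label below is illustrative.

```python
# Toy sketch (not the published model): score candidate molecular
# structures by predicting RMSD to the unknown native structure from
# raw atom coordinates. A PointNet-style network stands in here to
# illustrate the training plumbing.
import torch
import torch.nn as nn

class StructureScorer(nn.Module):
    def __init__(self, n_atom_types: int = 10, hidden: int = 64):
        super().__init__()
        self.embed = nn.Embedding(n_atom_types, hidden)
        self.mlp = nn.Sequential(
            nn.Linear(hidden + 3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.head = nn.Linear(hidden, 1)  # predicted RMSD, in angstroms

    def forward(self, coords, atom_types):
        # coords: (batch, n_atoms, 3); atom_types: (batch, n_atoms)
        feats = torch.cat([self.embed(atom_types), coords], dim=-1)
        pooled = self.mlp(feats).mean(dim=1)  # permutation-invariant pooling
        return self.head(pooled).squeeze(-1)

# One training step on synthetic stand-in data.
model = StructureScorer()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
coords = torch.randn(8, 50, 3)            # 8 candidate structures, 50 atoms each
atom_types = torch.randint(0, 10, (8, 50))
true_rmsd = torch.rand(8) * 10            # placeholder labels
loss = nn.functional.mse_loss(model(coords, atom_types), true_rmsd)
opt.zero_grad(); loss.backward(); opt.step()
print(f"toy loss: {loss.item():.3f}")
```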

Israel-based AI healthtech company, DiA Imaging Analysis, which is using deep learning and machine learning to automate analysis of ultrasound scans, has closed a $14 million Series B round of funding.

Backers in the growth round, which comes three years after DiA last raised, include new investors Alchimia Ventures, Downing Ventures, ICON Fund, Philips and XTX Ventures — with existing investors also participating, including CE Ventures, Connecticut Innovations, Defta Partners, Mindset Ventures, and Dr Shmuel Cabilly. In total, it has taken in $25 million to date.

The latest financing will allow DiA to continue expanding its product range and go after new and expanded partnerships with ultrasound vendors, PACS/Healthcare IT companies, resellers and distributors while continuing to build out its presence across three regional markets.

A Norwegian company called Yara International claims to have created the world’s first zero-emission ship that can also transport cargo autonomously. The Yara Birkeland electric cargo ship was first conceptualized in 2017 but now looks to make its first voyage with no crew members onboard later this year in Norway.

Yara International is a Norwegian company that was founded in 1905 to combat the rising famine in Europe at the time. The company created the world’s first nitrogen fertilizer, which remains its largest business focus today.

In addition to its perpetual battle against hunger, Yara focuses on emissions abatement and sustainable agricultural practices. While the company wants to continue finding success in feeding the planet, it believes it can also do so sustainably.

New research has found that artificial intelligence (AI) analyzing medical scans can identify the race of patients with an astonishing degree of accuracy, while their human counterparts cannot. With the Food and Drug Administration (FDA) approving more algorithms for medical use, the researchers are concerned that AI could end up perpetuating racial biases. They are especially concerned that they could not figure out precisely how the machine-learning models were able to identify race, even from heavily corrupted and low-resolution images.

In the study, published on the preprint server arXiv, an international team of doctors investigated how deep learning models can detect race from medical images. Using private and public chest scan datasets and self-reported data on race and ethnicity, they first assessed how accurate the algorithms were before investigating the mechanism.

“We hypothesized that if the model was able to identify a patient’s race, this would suggest the models had implicitly learned to recognize racial information despite not being directly trained for that task,” the team wrote in their research.
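To make that setup concrete, here is a minimal sketch (assuming a PyTorch environment) of the kind of probe the paper describes: train a standard image classifier to predict self-reported race from chest images, then test whether performance survives degradation far beyond human readability. The dataset, labels, and degradation factor below are hypothetical placeholders; this is not the authors’ code.

```python
# Illustrative sketch of the study's framing, not the authors' code:
# train a stock classifier on (placeholder) chest X-rays, then check
# whether it still predicts self-reported race on heavily degraded images.
import torch
import torch.nn.functional as F
from torchvision import models

def degrade(x: torch.Tensor, factor: int = 16) -> torch.Tensor:
    """Simulate a heavily corrupted, low-resolution scan by down/up-sampling."""
    small = F.interpolate(x, scale_factor=1 / factor, mode="bilinear")
    return F.interpolate(small, size=x.shape[-2:], mode="bilinear")

model = models.resnet18(num_classes=4)  # e.g. 4 self-reported race labels
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

# Stand-in batch; a real experiment would load de-identified chest X-rays.
xrays = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 4, (8,))

loss = F.cross_entropy(model(xrays), labels)
opt.zero_grad(); loss.backward(); opt.step()

# The key test: does classification survive aggressive image degradation?
with torch.no_grad():
    preds = model(degrade(xrays)).argmax(dim=1)
print("accuracy on degraded images:", (preds == labels).float().mean().item())
```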