
The AI revolution has spawned a new chips arms race

There’s no x86 in the AI chip market yet: “People see a gold rush; there’s no doubt.”

A lot has changed since 1918. But whether the race is literal (like the City of London School athletics’ U12 event) or figurative (the race to develop AI chips), participants still very much want to win.

For years, the semiconductor world seemed to have settled into a quiet balance: Intel had vanquished virtually all of the RISC processors in the server world, save IBM’s POWER line, and AMD had self-destructed, making it pretty much an x86 world. Nvidia, a late starter in the GPU space, mowed down all of its many competitors in the 1990s until only ATI, now part of AMD, remained, holding just half of Nvidia’s market share.

How to predict the side effects of millions of drug combinations

An example graph of polypharmacy side effects derived from genomic and patient population data, protein–protein interactions, drug–protein targets, and drug–drug interactions encoded by 964 different polypharmacy side effects. The graph representation is used to develop Decagon. (credit: Marinka Zitnik et al./Bioinformatics)

Millions of people take five or more medications a day, but doctors have no idea what side effects might arise from adding another drug.

Now, Stanford University computer scientists have developed a deep-learning system (a kind of AI modeled after the brain) called Decagon that could help doctors make better decisions about which drugs to prescribe. It could also help researchers find better combinations of drugs to treat complex diseases.
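Under the hood, Decagon treats the problem as multirelational link prediction: drugs and proteins are nodes in one large graph, and each of the 964 side effects is its own edge type to be predicted between pairs of drugs. The sketch below illustrates only the scoring step, using a DistMult-style bilinear decoder over random embeddings; the drug names, dimensions, and decoder choice are illustrative assumptions, not Decagon's actual implementation (its embeddings come from a graph convolutional encoder trained on the real data).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy vocabularies (illustrative stand-ins, not Decagon's dataset).
drugs = ["doxorubicin", "ciprofloxacin", "simvastatin", "warfarin"]
side_effects = ["bradycardia", "myopathy"]  # each side effect is one edge type

dim = 16
# In Decagon these embeddings come from a graph convolutional encoder over the
# protein-protein / drug-protein / drug-drug graph; here they are random.
emb = {d: rng.normal(size=dim) for d in drugs}
# One relation vector per side effect (DistMult-style diagonal bilinear decoder).
rel = {r: rng.normal(size=dim) for r in side_effects}

def score(drug_a: str, effect: str, drug_b: str) -> float:
    """Estimated probability that taking drug_a with drug_b causes `effect`."""
    logit = np.sum(emb[drug_a] * rel[effect] * emb[drug_b])
    return 1.0 / (1.0 + np.exp(-logit))  # sigmoid -> probability

# Rank all candidate drug pairs for one side effect.
pairs = [(a, b) for i, a in enumerate(drugs) for b in drugs[i + 1:]]
for a, b in sorted(pairs, key=lambda p: score(p[0], "myopathy", p[1]), reverse=True):
    print(f"P(myopathy | {a} + {b}) = {score(a, 'myopathy', b):.3f}")
```

Training would fit the embeddings and relation vectors so that observed (drug, side effect, drug) triples score near 1 and randomly sampled negative triples score near 0.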

New AI method increases the power of artificial neural networks

An international team of scientists from Eindhoven University of Technology, the University of Texas at Austin, and the University of Derby has developed a method that quadratically accelerates the training of artificial intelligence (AI) algorithms. This brings full AI capability within reach of inexpensive computers and, within one to two years, could let supercomputers train artificial neural networks quadratically larger than anything possible today. The scientists presented their method on June 19 in the journal Nature Communications.

Artificial neural networks (ANNs) are at the heart of the AI revolution that is shaping every aspect of society and technology. But the ANNs we have been able to handle so far are nowhere near solving very complex problems. The very latest supercomputers would struggle with a 16-million-neuron network (about the size of a frog brain), while it would take over a dozen days for a powerful desktop computer to train a mere 100,000-neuron network.
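The excerpt does not spell out how the quadratic speedup is achieved, but the arithmetic behind such claims is straightforward: a fully connected layer between n inputs and n outputs stores n² weights, so replacing it with a sparse layer that keeps only a fixed budget of O(n) connections shrinks both memory and compute quadratically. Below is a minimal sketch, assuming a prune-and-regrow sparse-training scheme of the kind used in this line of work; the sizes, connection budget, and update rule are illustrative assumptions, not the authors' published algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

n_in, n_out = 1000, 1000       # a dense layer would hold n_in * n_out = 1,000,000 weights
budget = 10 * (n_in + n_out)   # sparse layer keeps only O(n) connections

# Store the layer as coordinate lists instead of a dense matrix.
rows = rng.integers(0, n_in, size=budget)
cols = rng.integers(0, n_out, size=budget)
vals = rng.normal(scale=0.1, size=budget)

def forward(x: np.ndarray) -> np.ndarray:
    """Sparse matrix-vector product: cost scales with `budget`, not n_in * n_out."""
    y = np.zeros(n_out)
    np.add.at(y, cols, vals * x[rows])  # scatter-add each connection's contribution
    return y

def prune_and_regrow(drop_frac: float = 0.3) -> None:
    """Drop the weakest connections, regrow the same number at random positions."""
    global rows, cols, vals
    keep = np.argsort(np.abs(vals))[int(drop_frac * budget):]  # largest-magnitude survivors
    n_new = budget - keep.size
    rows = np.concatenate([rows[keep], rng.integers(0, n_in, size=n_new)])
    cols = np.concatenate([cols[keep], rng.integers(0, n_out, size=n_new)])
    vals = np.concatenate([vals[keep], rng.normal(scale=0.1, size=n_new)])

y = forward(rng.normal(size=n_in))
prune_and_regrow()
print(f"dense weights: {n_in * n_out:,}   sparse weights: {budget:,}")
```

Because the connection budget grows linearly with neuron count rather than quadratically, the same hardware can in principle host far larger networks.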

Caltech’s new machine learning algorithm predicts IQ from fMRI

Scientists at the California Institute of Technology can now assess a person’s intelligence in moments with nothing more than a brain scan and an AI algorithm, university officials announced this summer.

Caltech researchers led by Ralph Adolphs, PhD, a professor of psychology, neuroscience, and biology and chair of the Caltech Brain Imaging Center, reported in a recent study that they, alongside colleagues at Cedars-Sinai Medical Center and the University of Salerno, successfully predicted IQ in hundreds of people from fMRI scans of resting-state brain activity. The work is pending publication in the journal Philosophical Transactions of the Royal Society.

Adolphs and his team collected data from nearly 900 men and women for their research, all of whom were part of the National Institutes of Health (NIH)-driven Human Connectome Project. The researchers trained their machine learning algorithm on the complexities of the human brain by feeding it the brain scans and intelligence scores of these hundreds of participants, something that required very little effort on the participants' end.
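The excerpt does not describe the pipeline itself, but the standard recipe for predicting a trait from resting-state fMRI is: reduce each scan to a functional-connectivity matrix (correlations between regional time series), flatten it into a feature vector, and fit a regularized linear model against the scores, evaluated on held-out subjects. Here is a minimal sketch on synthetic data; the region counts, ridge regression choice, and all numbers are illustrative assumptions, not the Caltech team's method.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

n_subjects, n_regions, n_timepoints = 200, 50, 120  # illustrative sizes, not the study's

def connectivity_features(timeseries: np.ndarray) -> np.ndarray:
    """Flatten the upper triangle of the region-by-region correlation matrix."""
    corr = np.corrcoef(timeseries)           # (n_regions, n_regions)
    iu = np.triu_indices(n_regions, k=1)     # skip the diagonal and duplicates
    return corr[iu]

# Synthetic stand-ins for resting-state scans and IQ scores.
scans = rng.normal(size=(n_subjects, n_regions, n_timepoints))
X = np.stack([connectivity_features(s) for s in scans])
iq = 100 + 15 * rng.normal(size=n_subjects)  # fake scores with mean 100, SD 15

# Regularized linear model, scored with cross-validation as such studies do.
model = Ridge(alpha=10.0)
scores = cross_val_score(model, X, iq, cv=5, scoring="r2")
print(f"cross-validated R^2: {scores.mean():.3f}")  # ~0 here, since the data are random
```

With real scans and real scores, the cross-validated R² quantifies how much of the variance in IQ the connectivity features actually explain.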
