
Researchers find algorithm for large-scale brain simulations

An international group of researchers has made a decisive step towards creating the technology to achieve simulations of brain-scale networks on future supercomputers of the exascale class. The breakthrough, published in Frontiers in Neuroinformatics, allows larger parts of the human brain to be represented using the same amount of computer memory. At the same time, the new algorithm significantly speeds up brain simulations on existing supercomputers.

The human brain is an organ of incredible complexity, composed of 100 billion interconnected nerve cells. However, even with the help of the most powerful supercomputers available, it is currently impossible to simulate the exchange of neuronal signals in networks of this size.

“Since 2014, our software can simulate about one percent of the neurons in the human brain with all their connections,” says Markus Diesmann, Director at the Jülich Institute of Neuroscience and Medicine (INM-6). In order to achieve this impressive feat, the software requires the entire main memory of petascale supercomputers, such as the K computer in Kobe and JUQUEEN in Jülich.
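The scale problem is visible even in a toy script. Below is a minimal sketch using the Python interface of the NEST simulator (the article does not name the software, but Diesmann’s institute develops NEST; the neuron model, counts and connectivity here are illustrative). The point: memory is dominated by connections, not neurons, since each cortical neuron carries on the order of 10,000 synapses.

```python
import nest  # NEST simulator (illustrative; the article's software is unnamed)

nest.ResetKernel()

n_neurons = 10_000   # a tiny network; the human brain has ~100 billion neurons
indegree = 1_000     # incoming connections per neuron (illustrative)

# Leaky integrate-and-fire neurons, a standard point-neuron model.
pop = nest.Create("iaf_psc_alpha", n_neurons)

# Random connectivity: every neuron receives `indegree` inputs, so the
# kernel must store n_neurons * indegree = 10 million synapse objects.
# This connection table, not the neurons, is what exhausts main memory.
nest.Connect(pop, pop, {"rule": "fixed_indegree", "indegree": indegree})

nest.Simulate(100.0)  # 100 ms of biological time
```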

Google’s new Bristlecone processor brings it one step closer to quantum supremacy

Every major tech company is looking at quantum computers as the next big breakthrough in computing. Teams at Google, Microsoft, Intel, IBM and various startups and academic labs are racing to become the first to achieve quantum supremacy — that is, the point where a quantum computer can run certain algorithms faster than a classical computer ever could. Today, Google said that it believes that Bristlecone, its latest quantum processor, will put it on a path to reach quantum supremacy in the future.

The purpose of Bristlecone, Google says, is to provide its researchers with a testbed “for research into system error rates and scalability of our qubit technology, as well as applications in quantum simulation, optimization, and machine learning.”
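As a rough illustration of what such a testbed exercise looks like, here is a hedged sketch using Google’s open-source Cirq library (Bristlecone itself is not publicly programmable, and the grid size, gates and circuit here are stand-ins): build a shallow circuit on a 2-D grid of qubits, sample it, and compare the samples against an ideal classical simulation to estimate error rates.

```python
import cirq

# A small 2x2 grid of qubits, echoing Bristlecone's two-dimensional
# array layout (the real chip has 72 qubits; 4 keep this simulable).
qubits = cirq.GridQubit.rect(2, 2)

# A shallow circuit of single-qubit gates plus entangling CZ gates,
# the rough shape of supremacy-style random-circuit benchmarks.
circuit = cirq.Circuit(
    [cirq.H(q) for q in qubits],
    cirq.CZ(qubits[0], qubits[1]),
    cirq.CZ(qubits[2], qubits[3]),
    [cirq.T(q) for q in qubits],
    cirq.CZ(qubits[1], qubits[2]),
    cirq.measure(*qubits, key="m"),
)

# Sample the ideal distribution classically; on hardware, the measured
# distribution would be scored against this one to estimate error rates.
result = cirq.Simulator().run(circuit, repetitions=1000)
print(result.histogram(key="m"))
```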

Using big data analysis to significantly boost cancer treatment effectiveness

Summary: A new intelligent system that sifts through massive genetic datasets to pinpoint targets for cancer treatment has raised the treatability of cancer to over 80%, the scientists say. [This article first appeared on LongevityFacts. Author: Brady Hartman.]

Scientists in Singapore have discovered a significantly improved way to treat cancer by listening to many different computer programs rather than just one.

Their new computer program reaches a consensus on how to treat a specific tumor, and it is significantly more accurate than existing predictive methods. The system isolates the Achilles heel of each individual tumor, helping doctors to choose the best treatment.
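The “many programs, one consensus” idea is, at bottom, ensemble prediction. Here is a minimal sketch with scikit-learn (the Singapore team’s actual pipeline is not described here; the voting ensemble and synthetic data are stand-ins): several different models each predict whether a tumor carries a treatable target, and the majority verdict becomes the consensus call.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic stand-in for genetic data: 500 "tumors", 50 features each,
# labeled by whether a hypothetical treatment target is present.
X, y = make_classification(n_samples=500, n_features=50, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Three different "programs"; hard voting takes their majority verdict.
consensus = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=1000)),
        ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
        ("svm", SVC(random_state=0)),
    ],
    voting="hard",
)
consensus.fit(X_train, y_train)
print("consensus accuracy:", consensus.score(X_test, y_test))
```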

Deep learning for biology

Finkbeiner’s success highlights how deep learning, one of the most promising branches of artificial intelligence (AI), is making inroads in biology. The algorithms are already infiltrating modern life in smartphones, smart speakers and self-driving cars. In biology, deep-learning algorithms dive into data in ways that humans can’t, detecting features that might otherwise be impossible to catch. Researchers are using the algorithms to classify cellular images, make genomic connections, advance drug discovery and even find links across different data types, from genomics and imaging to electronic medical records.


A popular artificial-intelligence method provides a powerful tool for surveying and classifying biological data. But for the uninitiated, the technology poses significant difficulties.
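For orientation, here is a minimal sketch of the kind of model this refers to, written in PyTorch (the architectures in Finkbeiner’s work are not specified here; the layer sizes and class count are illustrative): a small convolutional network that maps a cell image to class scores.

```python
import torch
import torch.nn as nn

class CellClassifier(nn.Module):
    """Tiny CNN: a 64x64 grayscale cell image -> scores over 5 classes."""

    def __init__(self, n_classes: int = 5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),  # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),  # 32x32 -> 16x16
        )
        self.classifier = nn.Linear(32 * 16 * 16, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

model = CellClassifier()
logits = model(torch.randn(8, 1, 64, 64))  # a batch of 8 fake images
print(logits.shape)  # torch.Size([8, 5])
```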

DeepMind’s latest AI transfers its learning to new tasks

By using insights from one job to help it do another, a successful new artificial intelligence hints at a more versatile future for machine learning.

Backstory: Most algorithms can be trained in only one domain and can’t apply what they learned on one task to a new one. A big hope for AI is to have systems take insights from one setting and apply them elsewhere, a capability called transfer learning.

What’s new: DeepMind built a new AI system called IMPALA that simultaneously performs multiple tasks—in this case, playing 57 Atari games—and attempts to share learning between them. It showed signs of transferring what was learned from one game to another.
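A hedged sketch of the mechanism behind that result (this is not DeepMind’s IMPALA code, which pairs a deep residual network with an off-policy actor-critic algorithm called V-trace; the shared-trunk-plus-heads design and all sizes below are illustrative): one trunk network is updated by every game, so features learned on one game become available to all the others.

```python
import torch
import torch.nn as nn

class MultiTaskPolicy(nn.Module):
    """Shared trunk plus one small policy head per game (illustrative)."""

    def __init__(self, obs_dim: int = 128, n_games: int = 57, n_actions: int = 18):
        super().__init__()
        # Trunk weights receive gradients from every game, which is
        # where transfer between tasks happens in this sketch.
        self.trunk = nn.Sequential(
            nn.Linear(obs_dim, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
        )
        # Each game keeps its own head on top of the shared features.
        self.heads = nn.ModuleList(
            nn.Linear(256, n_actions) for _ in range(n_games)
        )

    def forward(self, obs: torch.Tensor, game_id: int) -> torch.Tensor:
        return self.heads[game_id](self.trunk(obs))

policy = MultiTaskPolicy()
obs = torch.randn(4, 128)        # a batch of fake observations
logits = policy(obs, game_id=3)  # action scores for game #3
print(logits.shape)              # torch.Size([4, 18])
```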

Getting to grips with military robotics

Peter Singer, an expert on future warfare at the New America think-tank, is in no doubt. “What we have is a series of technologies that change the game. They’re not science fiction. They raise new questions. What’s possible? What’s proper?” Mr Singer is talking about artificial intelligence, machine learning, robotics and big-data analytics. Together they will produce systems and weapons with varying degrees of autonomy, from being able to work under human supervision to “thinking” for themselves. The most decisive factor on the battlefield of the future may be the quality of each side’s algorithms. Combat may speed up so much that humans can no longer keep up.

Frank Hoffman, a fellow of the National Defense University who coined the term “hybrid warfare”, believes that these new technologies have the potential not just to change the character of war but even possibly its supposedly immutable nature as a contest of wills. For the first time, he says, the human factors that have defined success in war, “will, fear, decision-making and even the human spark of genius,” may be less evident.

Weapons with a limited degree of autonomy are not new. In 1943 Germany produced a torpedo with an acoustic homing device that helped it find its way to its target. Tomahawk cruise missiles, once fired, can adjust their course using a digital map of Earth’s contours. Anti-missile systems are pre-programmed to decide when to fire and engage an incoming target because the human brain cannot react fast enough.

A revolution in health care is coming

Will the benefits of making data more widely available outweigh such risks? The signs are that they will. Plenty of countries are now opening up their medical records, but few have gone as far as Sweden. It aims to give all its citizens electronic access to their medical records by 2020; over a third of Swedes have already set up accounts. Studies show that patients with such access have a better understanding of their illnesses, and that their treatment is more successful. Trials in America and Canada have produced not just happier patients but lower costs, as clinicians fielded fewer inquiries. That should be no surprise. No one has a greater interest in your health than you do. Trust in Doctor You.


No wonder they are called “patients”. When people enter the health-care systems of rich countries today, they know what they will get: prodding doctors, endless tests, baffling jargon, rising costs and, above all, long waits. Some stoicism will always be needed, because health care is complex and diligence matters. But frustration is boiling over. This week three of the biggest names in American business—Amazon, Berkshire Hathaway and JPMorgan Chase—announced a new venture to provide better, cheaper health care for their employees. A fundamental problem with today’s system is that patients lack knowledge and control. Access to data can bestow both.

The internet already enables patients to seek online consultations when and where it suits them. You can take over-the-counter tests to analyse your blood, sequence your genome and check on the bacteria in your gut. Yet radical change demands a shift in emphasis, from providers to patients and from doctors to data. That shift is happening. Technologies such as the smartphone allow people to monitor their own health. The possibilities multiply when you add the crucial missing ingredients—access to your own medical records and the ability easily to share information with those you trust. That allows you to reduce inefficiencies in your own treatment and also to provide data to help train medical algorithms. You can enhance your own care and everyone else’s, too.
