
Wednesday, March 1st, Baltimore, MD — In March 2016 Insilico Medicine initiated a research collaboration with Life Extension to apply advanced bioinformatic methods and deep learning algorithms to screen for naturally occurring compounds that may slow down or even reverse the cellular and molecular mechanisms of aging. Today Life Extension (LE) launched a new line of nutraceuticals called GEROPROTECT™, and the first product in the series, called Ageless Cell™, combines some of the natural compounds that were shortlisted by Insilico Medicine’s algorithms and are generally recognized as safe (GRAS).

The first research results on human biomarkers of aging and the product will be presented at the Re-Work Deep Learning in Healthcare Summit in London, February 28 to March 1, 2017, a popular multidisciplinary conference focusing on the emerging field of deep learning and machine intelligence.

“We salute Life Extension on the launch of GEROPROTECT™: Ageless Cell, the first combination of nutraceuticals developed using our artificial intelligence algorithms. We share a common passion for extending productive human longevity and invest every quantum of our energy and resources in identifying novel ways to prevent age-related decline and diseases. Partnering with Life Extension has multiple advantages. LE has spent the past 37 years educating consumers on the latest in nutritional therapies for optimal health and anti-aging, and is an industry leader and a premium brand in the supplement industry. LE also has a unique mail-order blood test service that allows US customers to perform comprehensive blood tests to help identify potential health concerns and to track the effects of the nutraceutical products,” said Alex Zhavoronkov, PhD, CEO of Insilico Medicine, Inc.

Read more

Algorithms with learning abilities collect personal data that are then used without users’ consent and even without their knowledge; autonomous weapons are under discussion in the United Nations; robots simulating emotions are deployed with vulnerable people; research projects are funded to develop humanoid robots; and artificial intelligence-based systems are used to evaluate people. One can consider these examples of AI and autonomous systems (AS) as great achievements, or claim that they are endangering human freedom and dignity.

To fully benefit from the potential of these technologies, we need to make sure that they are aligned with human moral values and ethical principles. AI and AS have to behave in ways that are beneficial to people beyond merely reaching functional goals and addressing technical problems. This will build the level of trust in the technology that is needed for fruitful, pervasive use of AI/AS in our daily lives.

Read more

In the popular TV show Sherlock, visual depictions of our hero’s deductive reasoning often look like machine algorithms. And probably not by accident, given that this version of Conan Doyle’s detective processes tremendous amounts of observed data—the sort of minutiae that the average person tends to pass over or forget—more like a computer than a human.

Sherlock’s intelligence is both strength and limitation. His way of thinking is often bounded by an inability to intuitively understand social and emotional contexts. The show’s central premise is that Sherlock Holmes needs his friend John Watson to help him synthesize empirical data into human truth.

In Sherlock we see the analog for modern AI: highly performant learning machines that can achieve metacognitive results with the assistance of fully cognitive human partners. Machine intelligence does not by its nature make human intelligence obsolete. Quite the opposite, really—machines need human guidance.

Read more

Are robots coming for your job?

Although technology has long affected the labor force, recent advances in artificial intelligence and robotics are heightening concerns about automation replacing a growing number of occupations, including highly skilled or “knowledge-based” jobs.

Just a few examples: self-driving technology may eliminate the need for taxi, Uber, and truck drivers; algorithms are playing a growing role in journalism; robots are greeting and informing consumers in malls; and medicine is adopting robotic surgery and artificial intelligence to detect cancer and heart conditions.

Read more

Historian Yuval Noah Harari makes a bracing prediction: just as mass industrialization created the working class, the AI revolution will create a new unworking class.

The most important question in 21st-century economics may well be: What should we do with all the superfluous people, once we have highly intelligent non-conscious algorithms that can do almost everything better than humans?

This is not an entirely new question. People have long feared that mechanization might cause mass unemployment. This never happened, because as old professions became obsolete, new professions evolved, and there was always something humans could do better than machines. Yet this is not a law of nature, and nothing guarantees it will continue to be like that in the future. The idea that humans will always have a unique ability beyond the reach of non-conscious algorithms is just wishful thinking. The current scientific answer to this pipe dream can be summarized in three simple principles:

Read more

(Phys.org)—Scientists have built tiny logic machines out of single atoms that operate completely differently from conventional logic devices. Instead of relying on the binary switching paradigm used by transistors in today’s computers, the new nanoscale logic machines physically simulate the problems and take advantage of the inherent randomness that governs the behavior of physical systems at the nanoscale—randomness that is usually considered a drawback.

The team of researchers, Barbara Fresch et al., from universities in Belgium, Italy, Australia, Israel, and the US, has published a paper on the new nanoscale logic machines in a recent issue of Nano Letters.

“Our approach shows the possibility of a new class of tiny analog computers that can solve computationally difficult problems by simple statistical algorithms running in nanoscale solid-state physical devices,” coauthor Françoise Remacle at the University of Liège told Phys.org.
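
The quote above gestures at the general idea of using randomness itself as the computational resource. As a purely generic illustration of that idea (not the atomic-scale devices or the specific statistical algorithms from the Nano Letters paper), here is a tiny stochastic local search in Python that attacks a small max-cut instance by proposing random flips and occasionally accepting unhelpful ones; the graph, acceptance probability, and step count are all assumptions made up for this sketch.

```python
# Generic sketch only: a toy stochastic local search for max-cut, illustrating
# how a "simple statistical algorithm" can exploit randomness to tackle a hard
# combinatorial problem. It does NOT model the single-atom devices in the
# paper; the graph, acceptance probability, and step count are assumptions.

import random

random.seed(1)

# A small graph on 8 nodes, given as a list of edges.
edges = [(0, 1), (0, 2), (1, 3), (2, 3), (3, 4),
         (4, 5), (5, 6), (6, 7), (4, 7), (2, 6)]

def cut_size(side):
    """Count edges whose endpoints lie on opposite sides of the partition."""
    return sum(1 for u, v in edges if side[u] != side[v])

side = [random.randint(0, 1) for _ in range(8)]  # random initial partition
current = cut_size(side)
best = current

for _ in range(5000):
    node = random.randrange(8)
    side[node] ^= 1                      # propose a random flip
    proposed = cut_size(side)
    if proposed >= current or random.random() < 0.05:
        current = proposed               # accept; occasional "bad" accepts
        best = max(best, proposed)       # keep the search from getting stuck
    else:
        side[node] ^= 1                  # reject: undo the flip

print("Best cut found:", best, "of", len(edges), "edges")
```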

Read more

Right now it’s easiest to think about an artificial intelligence algorithm as a specific tool, like a hammer. A hammer is really good at hitting things, but when you need a saw to cut something in half, it’s back to the toolbox. Need a face recognized? Train a facial recognition algorithm, but don’t ask it to recognize cows.

Alphabet’s AI research arm, DeepMind, is trying to change that idea with a new algorithm that can learn more than one skill. Algorithms that can learn multiple skills could make it far easier to add new languages to translators, remove bias from image recognition systems, or even use existing knowledge to solve new, complex problems. The research, published in Proceedings of the National Academy of Sciences this week, is preliminary, as it only tests the algorithm on playing different Atari games, but it shows that multi-purpose algorithms are actually possible.

The problem DeepMind’s research tackles is called “catastrophic forgetting,” the company writes. If you train an algorithm to recognize faces and then try to train it again to recognize cows, it will forget faces to make room for all the cow-knowledge. Modern artificial neural networks use millions of mathematical equations to calculate patterns in data, which could be the pixels that make a face or the series of words that make a sentence. These equations are connected in various ways, and the network depends so heavily on some of them that even a slight tweak for a different task makes it begin to fail. DeepMind’s new algorithm identifies and protects the equations most important for carrying out the original task, while letting the less important ones be overwritten.
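
The idea of protecting the weights that mattered for an earlier task can be sketched with a simple quadratic penalty, in the spirit of the elastic weight consolidation approach described in the PNAS paper. The toy below is a minimal NumPy illustration, not DeepMind’s implementation: it uses a linear model, invents its own data, and estimates per-weight importance from squared per-example gradients (a rough, Fisher-information-style stand-in); the names, data, and the lam value are all assumptions.

```python
# Minimal NumPy sketch of penalizing changes to "important" weights, in the
# spirit of elastic weight consolidation. NOT DeepMind's code: the linear
# model, toy data, importance estimate, and lam value are all assumptions.

import numpy as np

rng = np.random.default_rng(0)

def train(X, y, w, lam=0.0, importance=None, w_old=None, lr=0.05, steps=2000):
    """Gradient descent on mean squared error, optionally adding a quadratic
    penalty that pulls weights deemed important for the old task back toward
    their previous values."""
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)
        if importance is not None:
            grad = grad + lam * importance * (w - w_old)
        w = w - lr * grad
    return w

def mse(X, y, w):
    return float(np.mean((X @ w - y) ** 2))

# Task A effectively uses only the first feature (the second barely varies),
# so only the first weight should count as "important" afterwards.
X_a = np.column_stack([rng.normal(size=300), 0.01 * rng.normal(size=300)])
y_a = 3.0 * X_a[:, 0] + 0.1 * rng.normal(size=300)
w_a = train(X_a, y_a, np.zeros(2))

# Per-weight importance: mean squared per-example gradient on task A,
# a crude Fisher-information-style estimate.
residual = X_a @ w_a - y_a
importance = np.mean((residual[:, None] * X_a) ** 2, axis=0)

# Task B uses both features and would normally overwrite the first weight.
X_b = rng.normal(size=(300, 2))
y_b = -1.0 * X_b[:, 0] + 2.0 * X_b[:, 1] + 0.1 * rng.normal(size=300)

w_plain = train(X_b, y_b, w_a.copy())            # retrains freely, forgets task A
w_prot = train(X_b, y_b, w_a.copy(), lam=1000.0,
               importance=importance, w_old=w_a.copy())  # protects task A's weight

print("Task-A error after plain retraining: ", mse(X_a, y_a, w_plain))
print("Task-A error with importance penalty:", mse(X_a, y_a, w_prot))
```

In this toy, plain retraining on task B overwrites the weight task A relied on, while the penalized run keeps that weight close to its old value, so performance on task A is largely preserved.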

Read more