
The drug, known as DSP-1181, was created by using algorithms to sift through potential compounds, checking them against a huge database of parameters, including a patient’s genetic factors. Speaking to the BBC, Exscientia chief executive Professor Andrew Hopkins described the trials as a “key milestone in drug discovery” and noted that there are “billions” of decisions needed to find the right molecules for a drug, making their eventual creation a “huge decision.” With AI, however, “the beauty of the algorithm is that they are agnostic, so can be applied to any disease.”
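
To make the “sifting” idea concrete, here is a minimal Python sketch of the general virtual-screening pattern: score candidate compounds against a set of target parameters and keep the best. Every name, property, and weight in it is hypothetical, and this is emphatically not Exscientia’s actual algorithm.

```python
# Hypothetical illustration of scoring compound candidates against target
# parameters; not Exscientia's method, and all values here are made up.
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    binding_affinity: float  # higher is better (arbitrary units)
    toxicity: float          # lower is better
    solubility: float        # higher is better

def score(c: Candidate) -> float:
    """Toy weighted score: reward affinity and solubility, penalize toxicity."""
    return 1.0 * c.binding_affinity - 2.0 * c.toxicity + 0.5 * c.solubility

candidates = [
    Candidate("cmpd-001", 0.90, 0.20, 0.50),
    Candidate("cmpd-002", 0.70, 0.05, 0.80),
    Candidate("cmpd-003", 0.95, 0.60, 0.40),
]

# Rank the library and keep the top candidates for (hypothetical) further testing.
for c in sorted(candidates, key=score, reverse=True):
    print(f"{c.name}: {score(c):.2f}")
```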

We’ve already seen multiple examples of AI being used to diagnose illness and analyze patient data, so using it to engineer drug treatment is an obvious progression of its place in medicine. But the AI-created drugs do pose some pertinent questions. Will patients be comfortable taking medication designed by a machine? How will these drugs differ from those developed by humans alone? Who will make the rules for the use of AI in drug research? Hopkins and his team hope that these and myriad other questions will be explored in the trials, which will begin in March.

The hidden secret of artificial intelligence is that much of it is actually powered by humans. Well, to be specific, the supervised learning algorithms that have gained much of the attention recently depend on humans to provide well-labeled training data. Since machines can’t teach themselves (yet), they have to be taught first, and that training falls to humans. This is the secret Achilles’ heel of AI: the need for humans to teach machines the things that they are not yet able to do on their own.

Machine learning is what powers today’s AI systems. Organizations are implementing one or more of the seven patterns of AI (computer vision, natural language processing, predictive analytics, autonomous systems, pattern and anomaly detection, goal-driven systems, and hyperpersonalization) across a wide range of applications. However, in order for these systems to make accurate generalizations, they must be trained on data. The more advanced forms of machine learning, especially deep learning neural networks, require significant volumes of data to build models with the desired levels of accuracy. It goes without saying, then, that the training data needs to be clean, accurate, complete, and well-labeled so the resulting models are accurate. Garbage in, garbage out has always been true in computing, but it is especially true for machine learning data.
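
To see why label quality matters so much, here is a minimal sketch (my own illustration, not from the article) that trains the same classifier on progressively noisier labels and scores it on clean test data; it assumes numpy and scikit-learn are available.

```python
# Minimal demonstration that mislabeled training data degrades model accuracy.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# A synthetic, well-labeled binary classification dataset.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def accuracy_with_label_noise(noise_rate: float) -> float:
    """Flip a fraction of the training labels, train, score on clean test data."""
    rng = np.random.default_rng(0)
    y_noisy = y_train.copy()
    flip = rng.random(len(y_noisy)) < noise_rate
    y_noisy[flip] = 1 - y_noisy[flip]
    model = LogisticRegression(max_iter=1000).fit(X_train, y_noisy)
    return model.score(X_test, y_test)

for rate in (0.0, 0.1, 0.3):
    print(f"label noise {rate:.0%}: test accuracy {accuracy_with_label_noise(rate):.3f}")
```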

According to analyst firm Cognilytica, over 80% of AI project time is spent preparing and labeling data for use in machine learning projects.

If you’re interested in mind uploading, then I have an excellent article to recommend. This wide-ranging article is focused on neuromorphic computing and has sections on memristors. Here is a key excerpt:

“…Perhaps the most exciting emerging AI hardware architectures are the analog crossbar approaches since they achieve parallelism, in-memory computing, and analog computing, as described previously. Among most of the AI hardware chips produced in roughly the last 15 years, an analog memristor crossbar-based chip is yet to hit the market, which we believe will be the next wave of technology to follow. Of course, incorporating all the primitives of neuromorphic computing will likely require hardware solutions even beyond analog memristor crossbars…”

Here’s a web link to the research paper:


Computers have undergone tremendous improvements in performance over the last 60 years, but those improvements have significantly slowed down over the last decade, owing to fundamental limits in the underlying computing primitives. However, the generation of data and demand for computing are increasing exponentially with time. Thus, there is a critical need to invent new computing primitives, both hardware and algorithms, to keep up with the computing demands. The brain is a natural computer that outperforms our best computers in solving certain problems, such as instantly identifying faces or understanding natural language. This realization has led to a flurry of research into neuromorphic or brain-inspired computing that has shown promise for enhanced computing capabilities. This review points to the important primitives of a brain-inspired computer that could drive another decade-long wave of computer engineering.

If you are interested in mind uploading, then I have a research paper for you to consider. One of the serious issues with mind uploading is the computer substrate: simulating the brain will require an incredible new computing capability, and new techniques and new hardware are going to be required to make it practical. Of course, there is currently zero demand for mind uploading hardware, so the market is not going to provide this capability. However, there is incredible market demand for cutting-edge hardware for machine learning and artificial intelligence. And it turns out that one potential technique for artificial intelligence simulates the way the brain works: neuromorphic computing. There is also a relatively new type of electronic component that seems to mimic some of the functions of a biological neuron: the memristor. Memristors were first fabricated by HP in 2008, so I am trying to keep up with the latest developments in memristive technology.

Here are some excerpts from the paper:

“…Artificial Neural Network (ANN) algorithms offer fast computations by mimicking the neuronal network of brains. A weight matrix is used in neural networks (NNs) for parallel processing that makes computing faster…The memristor has attracted much attention because of its potential to have linear multilevel conductance states for vector-matrix multiplication (output = weight × input), corresponding to parallel processing…”

Here is a web link to the research paper:


Neuromorphic computation is one of the axes of parallel distributed processing, and memristor-based synaptic weights are considered a key component of this type of computation. However, the material properties of memristors, including the underlying materials physics, are not yet mature. In parallel with memristors, CMOS-based Graphics Processing Units, Field Programmable Gate Arrays, and Application Specific Integrated Circuits are also being developed as dedicated artificial intelligence (AI) chips for fast computation. It is therefore necessary to analyze the competitiveness of memristor-based neuromorphic devices in order to position the memristor appropriately in the future AI ecosystem. In this article, the status of memristor-based neuromorphic computation is analyzed on the basis of papers and patents, reviewing industrial trends and academic pursuits to identify the competitiveness of memristor properties. In addition, material issues and challenges for implementing a memristor-based neural processor are discussed.
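
To make the “output = weight × input” point from the excerpt above concrete, here is a minimal NumPy sketch of the vector-matrix multiplication that a memristor crossbar performs physically; the sizes and values are made up for illustration.

```python
# Each memristor at a row-column crossing stores a weight as a conductance.
# Applying input voltages to the rows and summing column currents (Kirchhoff's
# current law) computes a full vector-matrix product in one analog step.
import numpy as np

rng = np.random.default_rng(0)

G = rng.uniform(0.0, 1.0, size=(4, 3))  # conductances of a 4x3 crossbar (arbitrary units)
v = np.array([0.2, 0.5, 0.1, 0.7])      # input voltages on the 4 rows

# Column current j = sum_i v[i] * G[i, j] -- Ohm's law plus current summation.
i_out = v @ G
print(i_out)  # the "output = weight x input" of the excerpt, in one shot
```

The appeal is that the multiply-accumulate happens in the memory itself, which is why the review above singles out analog crossbars for parallelism and in-memory computing.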

The world’s first programming language based on classical Chinese is only about a month old, and volunteers have already written dozens of programs with it, such as one based on an ancient Chinese fortune-telling algorithm.

The new language’s developer, Lingdong Huang, previously designed an infinite computer-generated Chinese landscape painting. He also helped create the first and so far only AI-generated Chinese opera. He graduated with a degree in computer science and art from Carnegie Mellon University in December.

After coming up with the idea for the new language, wenyan-lang, roughly a year ago, Huang finished the core of the language during his last month at school. It includes a renderer that can display a program in a manner that resembles pages from ancient Chinese texts.

Well, it’s a good thing, but not what I was hoping for. Three gene therapies, though Church is otherwise testing 45. But this is not the rejuvenation I was getting optimistic about. Still, I’m sure that as I get older I will be grateful when a treatment comes my way for something in my elderly years. But frankly this was overhyped from the start, and I was part of that equation, having spread a “2025” figure for some time.



An ‘anti-aging’ gene therapy trial in dogs begins, and Rejuvenate Bio hopes humans will be next.

The startup, spun out of George Church’s lab, has tested an experimental therapy that treats four age-related diseases in mice.


YouTube’s “next video” is a profit-maximizing recommendation system, an A.I. selecting increasingly ‘engaging’ videos. And that’s the problem.

“Computer scientists and users began noticing that YouTube’s algorithm seemed to achieve its goal by recommending increasingly extreme and conspiratorial content. One researcher reported that after she viewed footage of Donald Trump campaign rallies, YouTube next offered her videos featuring “white supremacist rants, Holocaust denials and other disturbing content.” The algorithm’s upping-the-ante approach went beyond politics, she said: “Videos about vegetarianism led to videos about veganism. Videos about jogging led to videos about running ultramarathons.” As a result, research suggests, YouTube’s algorithm has been helping to polarize and radicalize people and spread misinformation, just to keep us watching.”
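
To illustrate that “upping-the-ante” dynamic, here is a toy Python sketch; it is my own caricature, not YouTube’s actual system, and its engagement model is invented purely to show how greedy engagement maximization can ratchet toward extremes.

```python
# Toy greedy recommender with a made-up engagement model that assumes viewers
# stay longest on content slightly more intense than what they just watched.
from dataclasses import dataclass

@dataclass
class Video:
    title: str
    intensity: float  # hypothetical 0..1 "extremeness" score

def predicted_watch_minutes(video: Video, last_intensity: float) -> float:
    """Engagement peaks just above the viewer's last intensity level."""
    sweet_spot = min(last_intensity + 0.1, 1.0)
    return 15.0 - 20.0 * abs(video.intensity - sweet_spot)

catalog = [Video(f"video-{i}", i / 10) for i in range(11)]

last = 0.1  # the viewer starts on mild content
for step in range(5):
    # Greedy argmax over predicted engagement: the core of the critique.
    best = max(catalog, key=lambda v: predicted_watch_minutes(v, last))
    print(f"step {step}: recommend {best.title} (intensity {best.intensity:.1f})")
    last = best.intensity
```

No single recommendation looks drastic, yet the intensity climbs a notch every step, because the objective only ever asks what keeps this viewer watching next.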


By teaching machines to understand our true desires, one scientist hopes to avoid the potentially disastrous consequences of having them do what we command.

Inspired by the functioning of the human brain and based on a biological mechanism called neuromodulation, the new algorithm allows intelligent agents to adapt to unknown situations.

Artificial Intelligence (AI) has enabled the development of high-performance automatic learning techniques in recent years. However, these techniques are often applied task by task, which implies that an intelligent agent trained for one task will perform poorly on other tasks, even very similar ones. To overcome this problem, researchers at the University of Liège (ULiège) have developed a new algorithm based on a biological mechanism called neuromodulation. This algorithm makes it possible to create intelligent agents capable of performing tasks not encountered during training. This novel and exceptional result is presented this week in the journal PLOS ONE.

Despite the immense progress in the field of AI in recent years, we are still very far from human-level intelligence. Indeed, while current AI techniques make it possible to train computer agents to perform certain tasks better than humans when they are trained specifically for them, the performance of these same agents is often very disappointing when they are put in conditions (even slightly) different from those experienced during training.
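
As a rough picture of the mechanism, here is a toy NumPy sketch of the general neuromodulation idea; this is my own simplification, not the ULiège implementation: a small modulatory network reads a context signal and reshapes the activation response of the main network, so fixed weights can produce different behaviors in different situations.

```python
# Toy neuromodulated network: context alters the neurons' response curves.
import numpy as np

rng = np.random.default_rng(0)

# Main network weights (held fixed in this sketch).
W1 = rng.normal(size=(8, 4))
W2 = rng.normal(size=(1, 8))

# Modulatory network: maps a 2-d context vector to 8 scales and 8 biases.
Wm = rng.normal(size=(16, 2))

def forward(x: np.ndarray, context: np.ndarray) -> np.ndarray:
    mod = Wm @ context                    # 16 values: per-neuron scale and bias
    scale, bias = mod[:8], mod[8:]
    pre = W1 @ x
    hidden = np.tanh(scale * pre + bias)  # neuromodulated activation
    return W2 @ hidden

x = rng.normal(size=4)
for ctx in (np.array([1.0, 0.0]), np.array([0.0, 1.0])):
    print(forward(x, ctx))  # same input, different context -> different output
```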

Not everything is knowable. In a world where it seems like artificial intelligence and machine learning can figure out just about anything, that might seem like heresy – but it’s true.

At least, that’s the case according to a new international study by a team of mathematicians and AI researchers, who discovered that despite the seemingly boundless potential of machine learning, even the cleverest algorithms are nonetheless bound by the constraints of mathematics.

“The advantages of mathematics, however, sometimes come with a cost… in a nutshell… not everything is provable,” the researchers, led by first author and computer scientist Shai Ben-David from the University of Waterloo, write in their paper.
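
For the curious, the headline result (this is my paraphrase; the precise definitions of “EMX learning” and the monotone compression schemes it reduces to are in the paper) ties a concrete learning problem to set theory:

```latex
% Rough paraphrase of the result in Ben-David et al., as I recall it.
% EMX = "estimating the maximum" over finitely supported distributions
% on the unit interval.
\[
  \mathcal{F}_{\mathrm{fin}} = \{\, F \subseteq [0,1] : F \text{ is finite} \,\}
\]
\[
  \mathcal{F}_{\mathrm{fin}} \text{ is EMX-learnable}
  \iff
  2^{\aleph_0} < \aleph_\omega
\]
```

Since ZFC can neither prove nor refute that inequality about the size of the continuum, whether this class is learnable is itself undecidable, which is exactly the “not everything is provable” point above.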