In this episode, we take a look at Tesla's "Battery Day" and Elon Musk's plans for a $25,000 car.

About ColdFusion:
ColdFusion is an Australian-based online media company independently run by Dagogo Altraide since 2009. Topics cover anything in science, technology, history, and business in a calm and relaxed environment.

David Sinclair wants to slow down and ultimately reverse aging. Sinclair sees aging as a disease, and he is convinced that aging is caused by epigenetic changes: abnormalities that occur when the body's cells process extra or missing pieces of DNA. This results in the loss of the information that keeps our cells healthy and tells them which genes to read. In his book Lifespan: Why We Age and Why We Don't Have To, Sinclair describes the results of his research, his theories and scientific philosophy, and the potential consequences of the significant progress in genetic technologies.

At present, researchers are only beginning to understand the biological basis of aging, even in relatively simple and short-lived organisms such as yeast. Sinclair, however, makes a convincing argument that life-extension technologies will eventually make it possible to prolong life through genetic engineering.

He and his team recently developed two artificial intelligence algorithms that predict biological age in mice and when they will die. This will pave the way for similar machine learning models in people.
The loss of epigenetic information is likely the root cause of aging. By analogy, if DNA is the digital information on a compact disc, then aging is due to scratches, and what we are searching for is the polish.
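As a rough illustration of what an age-prediction model like the ones mentioned above involves, here is a minimal sketch: a supervised regressor trained on synthetic, hypothetical frailty-index features, with chronological age as the label. This is a toy under stated assumptions, not Sinclair's actual pipeline:

```python
# Minimal sketch of a "biological age" regressor. Hypothetical
# setup: each mouse is described by frailty-index-style features
# (weight, gait score, grip strength, ...), and the label is its
# chronological age. The data below is entirely synthetic.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_mice, n_features = 500, 12
X = rng.normal(size=(n_mice, n_features))  # synthetic frailty features
age = 10 + 3 * X[:, 0] + 2 * X[:, 1] + rng.normal(scale=1.0, size=n_mice)

X_train, X_test, y_train, y_test = train_test_split(X, age, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# A mouse whose predicted age exceeds its chronological age is
# "biologically older" than its calendar age suggests.
print("R^2 on held-out mice:", model.score(X_test, y_test))
```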

Every time a cell divides, the DNA strands at the ends of your chromosomes are replicated so that all the genetic information is copied to each new cell, and this process is not perfect. Over time, as errors accumulate, the ends of your chromosomes can become scrambled.

These are just some of the important applications bio-inspired robots could be used for, and that's why roboticists at major robotics labs worldwide are dedicated to exploring the class Insecta.


The question isn’t only how big and powerful we can make a machine, but how small and savvy. What might humans be capable of if we could command a tiny army of simple machines? How could we use robots that could fly, skim across the water, hop to the ceiling and even swarm?

The explosive successes of AI in the last decade or so are typically chalked up to lots of data and lots of computing power. But benchmarks also play a crucial role in driving progress—tests that researchers can pit their AI against to see how advanced it is. For example, ImageNet, a public data set of 14 million images, sets a target for image recognition. MNIST did the same for handwriting recognition and GLUE (General Language Understanding Evaluation) for natural-language processing, leading to breakthrough language models like GPT-3.
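Conceptually, a benchmark is just a fixed held-out test set plus a single metric that everyone reports against. A minimal sketch of that evaluation loop, using scikit-learn's bundled 8x8 digits dataset as a tiny stand-in for MNIST-style handwriting recognition (the real benchmarks are far larger):

```python
# Benchmark-style evaluation: train on one split, then report a
# single metric on a fixed held-out test split. Uses scikit-learn's
# small digits dataset as a stand-in for MNIST; the idea, not the
# scale, is the point.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

model = LogisticRegression(max_iter=2000).fit(X_train, y_train)
print("benchmark score:", accuracy_score(y_test, model.predict(X_test)))
```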

A fixed target soon gets overtaken. ImageNet is being updated and GLUE has been replaced by SuperGLUE, a set of harder linguistic tasks. Still, sooner or later researchers will report that their AI has reached superhuman levels, outperforming people in this or that challenge. And that’s a problem if we want benchmarks to keep driving progress.

So Facebook is releasing a new kind of test that pits AIs against humans who do their best to trip them up. Called Dynabench, the test will be as hard as people choose to make it.
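The mechanism behind such dynamic benchmarking can be sketched as a loop: humans craft examples, any example that fools the current model is collected, and the model is retrained on those failures, so the test hardens as the model improves. A toy sketch under those assumptions; the stub model and "human adversary" below are hypothetical stand-ins, not Dynabench's actual platform or API:

```python
# Toy sketch of one round of a dynamic (human-in-the-loop) benchmark.
import random

random.seed(0)

def model_predict(example):
    # Stand-in model: labels an example "positive" if it contains
    # the word "good" (deliberately brittle, like a bag of words).
    return "positive" if "good" in example else "negative"

def human_adversary():
    # Stand-in for a person trying to trip the model up, returning
    # (example, true_label) pairs; "not good at all" fools it.
    candidates = [
        ("a good movie", "positive"),
        ("not good at all", "negative"),
        ("terrible acting", "negative"),
    ]
    return random.choice(candidates)

def dynamic_round(n_attempts=100):
    fooling_examples = []
    for _ in range(n_attempts):
        example, true_label = human_adversary()
        if model_predict(example) != true_label:
            # The example beat the model: keep it as training and
            # evaluation data for the next, harder round.
            fooling_examples.append((example, true_label))
    return fooling_examples

collected = dynamic_round()
print(f"collected {len(collected)} fooling examples for the next round")
```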

In recent years, researchers have been developing machine learning algorithms for an increasingly wide range of purposes. This includes algorithms that can be applied in healthcare settings, for instance to help clinicians diagnose specific diseases or neuropsychiatric disorders, or to monitor the health of patients over time.

Researchers at the Massachusetts Institute of Technology (MIT) and Massachusetts General Hospital have recently carried out a study investigating the possibility of using reinforcement learning to control the levels of unconsciousness of patients who require anesthesia for a medical procedure. Their paper, set to be published in the proceedings of the 2020 International Conference on Artificial Intelligence in Medicine, was voted the best paper presented at the conference.

“Our lab has made significant progress in understanding how anesthetic medications affect [the brain] and now has a multidisciplinary team studying how to accurately determine anesthetic doses from neural recordings,” Gabriel Schamberg, one of the researchers who carried out the study, told TechXplore. “In our recent study, we trained a [reinforcement learning agent] using the cross-entropy method, by repeatedly letting it run on simulated patients and encouraging actions that led to good outcomes.”
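The cross-entropy method itself is simple to sketch: sample candidate policies from a distribution, score each one on the simulator, then refit the distribution to the top scorers and repeat. A minimal sketch on a made-up one-compartment "patient" simulator; the dynamics, policy form, and reward here are illustrative assumptions, not the MIT/MGH model:

```python
# Minimal cross-entropy method (CEM) for a toy dosing policy.
# The "patient" is a one-compartment model: the drug level decays
# each step and rises with the infusion dose; the goal is to hold
# the level at a target. All dynamics and rewards are illustrative.
import numpy as np

rng = np.random.default_rng(0)
TARGET, DECAY, STEPS = 1.0, 0.9, 50

def episode_reward(params):
    kp, bias = params                      # linear feedback policy
    level, total = 0.0, 0.0
    for _ in range(STEPS):
        dose = max(0.0, kp * (TARGET - level) + bias)
        level = DECAY * level + 0.1 * dose
        total -= (level - TARGET) ** 2     # penalize tracking error
    return total

mean, std = np.zeros(2), np.ones(2)
for generation in range(30):
    samples = rng.normal(mean, std, size=(64, 2))  # sample 64 policies
    scores = np.array([episode_reward(p) for p in samples])
    elite = samples[np.argsort(scores)[-8:]]       # keep the top 8
    mean, std = elite.mean(axis=0), elite.std(axis=0) + 1e-3

print("learned policy (kp, bias):", mean)
```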

Of all the AI models in the world, OpenAI's GPT-3 has most captured the public's imagination. It can spew poems, short stories, and songs with little prompting, and has been shown to fool people into thinking its outputs were written by a human. But its eloquence is more parlor trick than real intelligence.

Nonetheless, researchers believe that the techniques used to create GPT-3 could contain the secret to more advanced AI. GPT-3 was trained on an enormous amount of text data. What if the same methods were trained on both text and images?

Now new research from the Allen Institute for Artificial Intelligence (AI2) has taken this idea to the next level. The researchers have developed a new text-and-image model, otherwise known as a visual-language model, that can generate images given a caption. The images look unsettling and freakish, nothing like the hyperrealistic deepfakes generated by GANs, but they might demonstrate a promising new direction for achieving more generalizable intelligence, and perhaps smarter robots as well.
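The core recipe behind a visual-language model can be sketched compactly: embed image patches and text tokens into the same vector space and let a single transformer attend across both. A minimal sketch in PyTorch under that assumption; this illustrates the general recipe, not AI2's actual architecture:

```python
# Minimal sketch of a joint text-and-image transformer encoder:
# image patches and text tokens become one sequence of embeddings
# that a single transformer attends over. Illustrative only.
import torch
import torch.nn as nn

class TinyVisualLanguageModel(nn.Module):
    def __init__(self, vocab_size=1000, d_model=128, patch=8):
        super().__init__()
        self.token_emb = nn.Embedding(vocab_size, d_model)
        # A conv with stride == kernel size slices the image into
        # non-overlapping patches and embeds each one.
        self.patch_emb = nn.Conv2d(3, d_model, kernel_size=patch, stride=patch)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, tokens, image):
        t = self.token_emb(tokens)                            # (B, T, D)
        p = self.patch_emb(image).flatten(2).transpose(1, 2)  # (B, P, D)
        # One sequence, so attention flows between modalities.
        return self.encoder(torch.cat([t, p], dim=1))

model = TinyVisualLanguageModel()
tokens = torch.randint(0, 1000, (1, 6))  # a 6-token "caption"
image = torch.randn(1, 3, 32, 32)        # a 32x32 RGB image
out = model(tokens, image)
print(out.shape)  # torch.Size([1, 22, 128]): 6 text tokens + 16 patches
```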