
Karl Friston was ranked the most influential neuroscientist in the world by Semantic Scholar in 2016 and has received numerous awards and accolades for his work. His appointment as chief scientist of Verses not only validates the company’s platform and framework for advancing AI implementations but also highlights its commitment to expanding the frontier of AI research and development.

Friston is shortlisted for a Nobel Prize, is one of the most cited scientists in human history with over 260,000 academic citations, and developed statistical parametric mapping, the mathematics behind fMRI analysis. As one pundit put it, “what Einstein was to physics, Friston is to Intelligence.”

Indeed, Friston’s expertise will be invaluable as the company executes its vision of deploying a broad portfolio of technologies working toward a smarter world through AI.

As computing power and data availability grow, autonomous agents are becoming increasingly capable. This makes it all the more important for humans to have a say in the policies these agents learn and to verify that those policies align with their goals.

Currently, users must either 1) hand-design reward functions for the desired behavior or 2) provide extensive labeled data. Both strategies present difficulties and are unlikely to work well in practice. Designing a reward function that strikes a balance between competing objectives is challenging, and agents remain vulnerable to reward hacking. Alternatively, a reward function can be learned from annotated examples, but capturing the subtleties of an individual user’s tastes and objectives requires enormous amounts of labeled data, which is expensive to collect. Furthermore, the reward function must be redesigned, or the dataset re-collected, for every new user population with different goals.

New research by Stanford University and DeepMind aims to make it simpler for users to share their preferences, offering an interface that is more natural than writing a reward function and a cost-effective way to specify those preferences from only a few examples. Their work uses large language models (LLMs) that have been trained on massive amounts of text from the internet and have proven adept at in-context learning from few or no training examples. According to the researchers, LLMs are excellent in-context learners because they have been trained on datasets large enough to encode important commonsense priors about human behavior.
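The core idea can be sketched as follows. This is an illustrative mock-up, not the authors’ code: the prompt format, the fairness scenario, and the helper names (query_llm, llm_reward, FEW_SHOT_EXAMPLES) are all assumptions, and query_llm is a placeholder for whatever LLM backend is available. A few user-provided examples of acceptable and unacceptable behavior go into the prompt, a candidate agent trajectory is appended, and the model’s yes/no answer is mapped to a binary reward.

```python
# Illustrative sketch: using an LLM as a few-shot proxy reward signal.
# `query_llm` is a stand-in, not a real API; swap in any chat/completion backend.

FEW_SHOT_EXAMPLES = [
    ("Agent split the pot evenly between both players.", "Yes"),
    ("Agent kept 90% of the pot for itself.", "No"),
]

def query_llm(prompt: str) -> str:
    """Placeholder for a real LLM call; returns a fixed mock answer so the sketch runs."""
    return "No"

def llm_reward(trajectory_description: str) -> float:
    """Return 1.0 if the LLM judges the behavior acceptable to the user, else 0.0."""
    lines = ["Judge whether the agent's behavior matches the user's stated preference for fairness."]
    for behavior, label in FEW_SHOT_EXAMPLES:
        lines.append(f"Behavior: {behavior}\nAcceptable: {label}")
    lines.append(f"Behavior: {trajectory_description}\nAcceptable:")
    answer = query_llm("\n\n".join(lines))
    return 1.0 if answer.strip().lower().startswith("yes") else 0.0

# The scalar returned by llm_reward can stand in for a hand-designed reward
# when training an agent with any standard RL algorithm.
print(llm_reward("Agent offered the other player 10% of the pot."))
```

Because the user’s preference is expressed through a handful of natural-language examples rather than a hand-tuned reward function or a large labeled dataset, adapting to a new user only requires swapping out the examples in the prompt.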

“That’s got molecules in it that will prevent cancer, among other things,” like anti-inflammatory properties, Sinclair said of green tea. Some older research has shown, for example, that green tea consumption might be linked to a lower risk of stomach cancer.

Sinclair also said he takes supplements (like those sold on the Tally Health website) that contain resveratrol, which his team’s research has shown can extend the lifespan of organisms like yeast and worms.

While the compound, famously found in red wine, is known to have anti-inflammatory, anti-cancer, heart-health, and brain-health benefits, the research is mixed on whether, or how well, such benefits can be achieved in humans through a pill.

These days, we don’t have to wait long until the next breakthrough in artificial intelligence impresses everyone with capabilities that previously belonged only in science fiction.

In 2022, AI art generation tools such as OpenAI’s DALL-E 2, Google’s Imagen, and Stable Diffusion took the internet by storm, with users generating high-quality images from text descriptions.

Unlike previous developments, these text-to-image tools quickly found their way from research labs to mainstream culture, leading to viral phenomena such as the “Magic Avatar” feature in the Lensa AI app, which creates stylized images of its users.

Microsoft’s Kosmos-1 can take image and audio prompts, paving the way for the next stage beyond ChatGPT’s text prompts.

Microsoft has unveiled Kosmos-1, which it describes as a multimodal large language model (MLLM) that can respond not only to language prompts but also to visual cues, enabling an array of tasks including image captioning, visual question answering, and more.

OpenAI’s ChatGPT has helped popularize the concept of LLMs, such as the GPT (Generative Pre-trained Transformer) models, and the idea of transforming a text prompt into useful text output.
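To make the contrast concrete, here is a hypothetical sketch. Kosmos-1 has no public API, so text_only_model, multimodal_model, and ImageInput are invented names; the point is only that a multimodal prompt interleaves images with text, whereas a text-only LLM receives a single string.

```python
# Hypothetical interface contrasting a text-only prompt with a multimodal one.
# Neither function corresponds to a real Kosmos-1 or ChatGPT API; both are mocks.
from dataclasses import dataclass
from typing import List, Union

@dataclass
class ImageInput:
    path: str  # an image supplied alongside the text

Prompt = List[Union[str, ImageInput]]

def text_only_model(prompt: str) -> str:
    """Text in, text out: the ChatGPT-style interaction."""
    return f"(completion for: {prompt!r})"

def multimodal_model(prompt: Prompt) -> str:
    """Interleaved image and text segments in, text out: the MLLM idea."""
    parts = [p.path if isinstance(p, ImageInput) else p for p in prompt]
    return f"(answer conditioned on: {parts})"

# Visual question answering: the question refers to the attached image.
print(text_only_model("Describe a photo of a crowded beach."))
print(multimodal_model([ImageInput("beach.jpg"), "How many people are in this picture?"]))
```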