
“That’s got molecules in it that will prevent cancer, among other things,” Sinclair said, pointing to benefits such as its anti-inflammatory properties. Some older research has shown, for example, that green tea consumption might be linked to a lower risk of stomach cancer.

Sinclair also said he takes supplements (like those sold on the Tally Health website) that contain resveratrol, which his team’s research has shown can extend the lifespan of organisms like yeast and worms.

While the compound, famously found in red wine, is known to have anti-inflammatory, anti-cancer, heart-health, and brain-health benefits, the research is mixed on whether, or how well, such benefits can be achieved in humans through a pill.

These days, we don’t have to wait long before the next breakthrough in artificial intelligence impresses everyone with capabilities that previously belonged only in science fiction.

In 2022, AI art generation tools such as OpenAI’s DALL-E 2, Google’s Imagen, and Stable Diffusion took the internet by storm, with users generating high-quality images from text descriptions.

Unlike previous developments, these text-to-image tools quickly found their way from research labs to mainstream culture, leading to viral phenomena such as the “Magic Avatar” feature in the Lensa AI app, which creates stylized images of its users.
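To make the “text description in, image out” workflow concrete, here is a minimal sketch that drives the open-source Stable Diffusion model through Hugging Face’s diffusers library; the checkpoint name, sampler settings, and the assumption of a CUDA GPU are illustrative choices rather than what any particular product uses.

```python
# Minimal text-to-image sketch using Stable Diffusion via the diffusers library.
# The checkpoint name and CUDA device are assumptions; adjust for your setup.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,  # half precision so the model fits on consumer GPUs
)
pipe = pipe.to("cuda")

# A plain-text description ("prompt") is all the model needs to generate an image.
prompt = "a stylized watercolor portrait avatar of a person, soft lighting"
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("avatar.png")
```

Features such as Lensa’s “Magic Avatar” reportedly layer fine-tuning on user-supplied photos and a polished mobile interface on top of a pipeline much like this one.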

Microsoft’s Kosmos-1 can take image and audio prompts, paving the way for the next stage beyond ChatGPT’s text prompts.

Microsoft has unveiled Kosmos-1, which it describes as a multimodal large language model (MLLM) that can respond not only to language prompts but also to visual cues, enabling an array of tasks, including image captioning, visual question answering, and more.

OpenAI’s ChatGPT has helped popularize the concept of LLMs, such as the GPT (Generative Pre-trained Transformer) model, and the idea of transforming a text prompt into a generated output.
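The snippet below is a purely hypothetical sketch, not Kosmos-1’s actual interface, of how an interleaved image-and-text prompt for captioning or visual question answering might be structured; `ImageInput`, `Prompt`, and the helper functions are illustrative stand-ins.

```python
# Hypothetical sketch of multimodal prompting (image captioning and visual
# question answering). These classes and helpers are illustrative stand-ins,
# not a real Kosmos-1 API.
from dataclasses import dataclass
from typing import List, Union

@dataclass
class ImageInput:
    path: str  # local path or URL of the image

# An interleaved prompt: plain strings for text, ImageInput objects for images.
Prompt = List[Union[str, ImageInput]]

def caption_prompt(image_path: str) -> Prompt:
    # Image captioning: the image followed by an open-ended cue.
    return [ImageInput(image_path), "Describe this image:"]

def vqa_prompt(image_path: str, question: str) -> Prompt:
    # Visual question answering: the image followed by a natural-language question.
    return [ImageInput(image_path), f"Question: {question} Answer:"]

# A multimodal LLM encodes the image segments with a vision encoder, splices the
# resulting embeddings into the token sequence alongside the text tokens, and then
# decodes an answer autoregressively, just as a text-only LLM would.
print(vqa_prompt("receipt.jpg", "What is the total amount?"))
```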

Morgan Stanley has been testing an artificial intelligence tool with 300 advisors and plans to roll it out widely in the coming months, according to Jeff McMillan, head of analytics, data and innovation at the firm’s wealth management division.

Morgan Stanley’s move is one of the first announcements by a financial incumbent after the success of OpenAI’s ChatGPT, which went viral late last year by generating human-sounding responses to questions. The bank is a juggernaut in wealth management with more than $4.2 trillion in client assets. The promise and perils of artificial intelligence have been written about for years, but seemingly only after ChatGPT did mainstream users understand the ramifications of the technology.

The idea behind the tool, which has been in development for the past year, is to help the bank’s 16,000 or so advisors tap its enormous repository of research and data, said McMillan.
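Morgan Stanley has not published implementation details, but assistants that let staff query a large internal document repository are commonly built as retrieval-augmented generation: documents are embedded as vectors, the passages most similar to a question are retrieved, and an LLM answers using only that retrieved context. Below is a minimal sketch of that general pattern using the OpenAI Python client; the sample documents, model names, and prompt are illustrative assumptions, not the bank’s actual system.

```python
# Minimal retrieval-augmented generation sketch over a small in-memory "repository".
# Sample documents, model names, and prompts are illustrative assumptions only.
import math
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

documents = [
    "Research note: outlook for municipal bonds in a rising-rate environment...",
    "Wealth planning memo: tax considerations for concentrated stock positions...",
]

def embed(texts):
    # Convert each text into an embedding vector for similarity ranking.
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return [d.embedding for d in resp.data]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

def retrieve(question, k=1):
    # Rank documents by cosine similarity to the question and keep the top k.
    doc_vecs = embed(documents)
    q_vec = embed([question])[0]
    ranked = sorted(zip(documents, doc_vecs), key=lambda p: cosine(q_vec, p[1]), reverse=True)
    return [doc for doc, _ in ranked[:k]]

def answer(question):
    # Feed only the retrieved context to the chat model and ask it to stay grounded.
    context = "\n\n".join(retrieve(question))
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": "Answer using only the provided context."},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return resp.choices[0].message.content

print(answer("What should I consider for a client with a concentrated stock position?"))
```

In a production system the embeddings would typically live in a vector database, and the retrieved passages would usually carry citations back to the source documents.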

Deep Learning (DL) advances have cleared the way for intriguing new applications and are shaping the future of Artificial Intelligence (AI) technology. However, a common concern with DL models is their lack of explainability: experts widely agree that Neural Networks (NNs) function as black boxes. We do not know precisely what happens inside; we only know that the given input is somehow processed and an output is produced. As a result, DL models can be difficult to interpret, and understanding why a model makes certain predictions, or how to improve it, can be challenging.

This article will introduce and emphasize the importance of NN explainability, provide insights into how to achieve it, and suggest tools that could improve your DL model’s performance.
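As a concrete example of peeking inside the black box, one widely used family of techniques asks which parts of the input the model’s prediction is most sensitive to. The sketch below computes a simple gradient-based saliency map for an image classifier in PyTorch; the untrained ResNet-18 and random input are placeholders chosen only to keep the example self-contained.

```python
# Gradient-based saliency: rank input pixels by how strongly they influence the
# model's top predicted class. The untrained ResNet-18 and random image are
# placeholders; in practice you would load a trained model and a real image.
import torch
import torchvision.models as models

model = models.resnet18(weights=None)
model.eval()

image = torch.rand(1, 3, 224, 224, requires_grad=True)  # stand-in input image

scores = model(image)                    # class scores, shape (1, 1000)
top_class = scores.argmax(dim=1).item()  # index of the predicted class

# Backpropagate the top class score down to the input pixels.
scores[0, top_class].backward()

# Saliency = gradient magnitude per pixel, taking the max over color channels.
saliency = image.grad.abs().max(dim=1).values.squeeze(0)  # shape (224, 224)
print(saliency.shape, float(saliency.max()))
```

Libraries such as Captum, SHAP, and LIME package this idea, along with more robust variants like integrated gradients, behind ready-made APIs.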