
Ex-OpenAI employees launch new AI chatbot Claude to compete with ChatGPT

By Ankita Chakravarti: ChatGPT, the fastest-growing app in the world, now has competition. After Microsoft’s Bing and Google’s Bard AI, Anthropic, a company founded by former OpenAI employees, has launched a new AI chatbot to rival ChatGPT. The company claims that Claude is “easier to converse with,” “more steerable,” and “much less likely to produce harmful outputs.”

Claude performs well and offers much the same functionality as ChatGPT. “Claude can help with use cases including summarization, search, creative and collaborative writing, Q&A, coding, and more. Early customers report that Claude is much less likely to produce harmful outputs, easier to converse with, and more steerable — so you can get your desired output with less effort. Claude can also take direction on personality, tone, and behavior,” the company said in a blog post.

Anthropic is offering Claude in two variants: Claude and Claude Instant. The company explains that Claude is a “state-of-the-art high-performance model,” while Claude Instant is a “lighter, less expensive, and much faster option.” “We plan to introduce even more updates in the coming weeks. As we develop these systems, we’ll continually work to make them more helpful, honest, and harmless as we learn more from our safety research and our deployments,” the blog read.
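Since the blog post highlights that Claude can “take direction on personality, tone, and behavior,” here is a minimal sketch of what that steering looks like through Anthropic’s Python client. The model name is a placeholder, and the client interface shown postdates the launch-era API, so treat this as an assumption rather than the exact integration.

```python
# Minimal sketch of steering Claude's tone via a system prompt.
# Assumes the `anthropic` Python package is installed and an
# ANTHROPIC_API_KEY is set; the model name below is a placeholder.
import anthropic

client = anthropic.Anthropic()

response = client.messages.create(
    model="claude-instant-1",  # placeholder model name
    max_tokens=300,
    system="You are a concise, friendly assistant. Answer in plain language.",
    messages=[
        {"role": "user", "content": "Summarize why the sky appears blue in two sentences."}
    ],
)
print(response.content[0].text)
```

Changing only the `system` string is enough to shift the assistant’s personality and tone, which is the “steerable” behavior the company describes.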

Karl Friston — World-Renowned Researcher — Joins Verses Technologies as Chief Scientist

Friston was ranked the No. 1 most influential neuroscientist in the world by Semantic Scholar in 2016 and has received numerous awards and accolades for his work. His appointment as chief scientist of Verses not only validates the company’s platform framework for advancing AI implementations but also highlights its commitment to expanding the frontier of AI research and development.

Friston has been shortlisted for a Nobel Prize, is one of the most cited scientists in history with over 260,000 academic citations, and developed much of the mathematics behind fMRI analysis. As one pundit put it, “what Einstein was to physics, Friston is to intelligence.”

Indeed, Friston’s expertise will be invaluable in helping the company execute its vision of deploying a range of technologies working toward a smarter world through AI.

Researchers From Stanford and DeepMind Propose Using Large Language Models (LLMs) as a Proxy Reward Function

With the growth of computing power and data, autonomous agents are gaining capability. This makes it all the more important that humans have some say over the policies agents learn and can check that those policies align with their goals.

Currently, users either 1) hand-design reward functions for desired behaviors or 2) provide extensive labeled data. Both strategies present difficulties and are unlikely to be practical at scale. Designing reward functions that strike a balance between competing goals is challenging, and agents are vulnerable to reward hacking. Alternatively, a reward function can be learned from annotated examples, but enormous amounts of labeled data are needed to capture the subtleties of individual users’ tastes and objectives, which is expensive. Furthermore, the reward function must be redesigned, or the dataset re-collected, for a new user population with different goals.

New research by Stanford University and DeepMind aims to design a system that makes it simpler for users to share their preferences, with an interface more natural than writing a reward function and a cost-effective way to specify those preferences using only a few examples. Their work uses large language models (LLMs) that have been trained on massive amounts of text data from the internet and have proven adept at in-context learning with few or no training examples. According to the researchers, LLMs are excellent contextual learners because they have been trained on a large enough dataset to incorporate important commonsense priors about human behavior.
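To make the idea concrete, here is a minimal sketch of how an LLM might serve as a proxy reward function inside an RL training loop. The prompt format, the `query_llm` hook, and the episode representation are illustrative assumptions, not the paper’s actual implementation.

```python
# Sketch: an LLM as a proxy reward function (illustrative assumptions).
# `query_llm` is a placeholder hook for any LLM client; the few-shot
# examples encode the user's preferences in natural language.

FEW_SHOT_EXAMPLES = """\
Goal: the agent should negotiate politely.
Episode: Agent said "Give me the deal or else." -> Reward: 0
Episode: Agent said "Would you consider a lower price?" -> Reward: 1
"""

def query_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your LLM client here")

def proxy_reward(goal: str, episode_summary: str) -> float:
    """Ask the LLM whether the episode satisfies the user's goal."""
    prompt = (
        f"{FEW_SHOT_EXAMPLES}\n"
        f"Goal: {goal}\n"
        f"Episode: {episode_summary} -> Reward:"
    )
    answer = query_llm(prompt).strip()
    # Map the LLM's binary judgment onto a scalar reward signal.
    return 1.0 if answer.startswith("1") else 0.0

# During training, the RL algorithm would call proxy_reward() wherever a
# hand-designed reward function would normally be evaluated, e.g.:
# reward = proxy_reward("negotiate politely", summarize(trajectory))
```

Because the user’s preferences live in a handful of natural-language examples, adapting to a new user means editing the prompt rather than redesigning a reward function or re-collecting a labeled dataset.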

AI Might Be Seemingly Everywhere, but There Are Still Plenty of Things It Can’t Do—For Now

These days, we don’t have to wait long until the next breakthrough in artificial intelligence impresses everyone with capabilities that previously belonged only in science fiction.

In 2022, AI art generation tools such as OpenAI’s DALL-E 2, Google’s Imagen, and Stable Diffusion took the internet by storm, with users generating high-quality images from text descriptions.

Unlike previous developments, these text-to-image tools quickly found their way from research labs to mainstream culture, leading to viral phenomena such as the “Magic Avatar” feature in the Lensa AI app, which creates stylized images of its users.

Now Microsoft has a new AI model

Microsoft’s Kosmos-1 can take image and audio prompts, paving the way for the next stage beyond ChatGPT’s text prompts.

Microsoft has unveiled Kosmos-1, which it describes as a multimodal large language model (MLLM) that can respond not only to language prompts but also to visual cues, enabling an array of tasks including image captioning, visual question answering, and more.

OpenAI’s ChatGPT has helped popularize the concept of LLMs, such as the GPT (Generative Pre-trained Transformer) family, and the idea of transforming a text prompt into a generated output.
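Kosmos-1 itself has not been released publicly, so as a stand-in illustration of the image-plus-text prompting the article describes, here is a short captioning example using BLIP, a different, openly available vision-language model on Hugging Face. It shows the shape of the task, not Microsoft’s model or API.

```python
# Image captioning with an openly released vision-language model (BLIP),
# standing in for the image+text -> text tasks Kosmos-1 performs.
# Requires: pip install transformers pillow torch
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")

image = Image.open("photo.jpg").convert("RGB")  # any local image

# The text argument acts as a prompt prefix that conditions the caption.
inputs = processor(image, "a photo of", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=30)
print(processor.decode(out[0], skip_special_tokens=True))
```

Visual question answering follows the same pattern: the text portion of the prompt becomes a question about the image instead of a caption prefix.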

Morgan Stanley is testing an OpenAI-powered chatbot for its 16,000 financial advisors

The bank has been testing the artificial intelligence tool with 300 advisors and plans to roll it out widely in the coming months, according to Jeff McMillan, head of analytics, data and innovation at the firm’s wealth management division.

Morgan Stanley’s move is one of the first announcements by a financial incumbent after the success of OpenAI’s ChatGPT, which went viral late last year by generating human-sounding responses to questions. The bank is a juggernaut in wealth management with more than $4.2 trillion in client assets. The promise and perils of artificial intelligence have been written about for years, but seemingly only after ChatGPT did mainstream users understand the ramifications of the technology.

The idea behind the tool, which has been in development for the past year, is to help the firm’s 16,000 or so advisors tap the bank’s enormous repository of research and data, said McMillan.
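Morgan Stanley has not published how the tool works, but a common pattern for letting a chatbot tap a large document repository is retrieval-augmented generation: find the most relevant documents first, then ask the model to answer from those alone. The sketch below is a generic illustration of that pattern with placeholder `embed` and `ask_llm` hooks, not the bank’s implementation.

```python
# Generic retrieval-augmented QA sketch (not Morgan Stanley's system).
# embed() and ask_llm() are placeholder hooks for a real embedding
# model and a real LLM client.
import numpy as np

def embed(text: str) -> np.ndarray:
    raise NotImplementedError("plug in an embedding model")

def ask_llm(prompt: str) -> str:
    raise NotImplementedError("plug in an LLM client")

def answer_from_repository(question: str, documents: list[str], k: int = 3) -> str:
    """Retrieve the k most relevant documents, then answer from them only."""
    q = embed(question)
    # Dot product approximates cosine similarity if embeddings are normalized.
    scores = [float(np.dot(q, embed(doc))) for doc in documents]
    top_docs = [documents[i] for i in np.argsort(scores)[-k:]]
    context = "\n---\n".join(top_docs)
    prompt = (
        "Answer the question using only the research excerpts below.\n"
        f"Excerpts:\n{context}\n\nQuestion: {question}\nAnswer:"
    )
    return ask_llm(prompt)
```

Grounding the model’s answers in retrieved excerpts is what lets an advisor-facing assistant draw on proprietary research rather than whatever the base model happens to remember.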

Unlocking the Secrets of Deep Learning with Tensorleap’s Explainability Platform

Deep Learning (DL) advances have cleared the way for intriguing new applications and are influencing the future of Artificial Intelligence (AI) technology. However, a common concern with DL models is their explainability: experts generally agree that neural networks (NNs) function as black boxes. We do not know precisely what happens inside; we only know that a given input is somehow processed and produces some output. For this reason, DL models can be difficult to understand or interpret, and it can be challenging to see why a model makes certain predictions or how to improve it.

This article will introduce and emphasize the importance of NN explainability, provide insights into how to achieve it, and suggest tools that could improve your DL model’s performance.
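To ground the discussion, here is a minimal sketch of one widely used explainability technique, gradient-based saliency, in PyTorch. Tensorleap’s own methods are not detailed here, so this is a generic illustration of the kind of question such platforms answer: which inputs most influenced a prediction?

```python
# Gradient-based saliency: which input pixels most affect the top
# prediction? A generic explainability example, not Tensorleap's method.
import torch
import torchvision.models as models

model = models.resnet18(weights="IMAGENET1K_V1")
model.eval()

# A random stand-in input; in practice, a preprocessed real image.
image = torch.randn(1, 3, 224, 224, requires_grad=True)

logits = model(image)
top_class = logits.argmax(dim=1).item()

# Backpropagate the top class score to the input pixels.
logits[0, top_class].backward()

# Saliency = gradient magnitude per pixel, taking the max over channels.
saliency = image.grad.abs().max(dim=1).values  # shape: (1, 224, 224)
print(saliency.shape)
```

High-saliency regions are the pixels the model’s prediction is most sensitive to; visualizing them alongside the input is one of the simplest ways to peek inside the black box.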
