
Morgan Stanley is testing an OpenAI-powered chatbot for its 16,000 financial advisors

The bank has been testing the artificial intelligence tool with 300 advisors and plans to roll it out widely in the coming months, according to Jeff McMillan, head of analytics, data and innovation at the firm’s wealth management division.

Morgan Stanley’s move is one of the first announcements by a financial incumbent after the success of OpenAI’s ChatGPT, which went viral late last year by generating human-sounding responses to questions. The bank is a juggernaut in wealth management with more than $4.2 trillion in client assets. The promise and perils of artificial intelligence have been written about for years, but seemingly only after ChatGPT did mainstream users understand the ramifications of the technology.

The idea behind the tool, which has been in development for the past year, is to help the bank’s 16,000 or so advisors tap the bank’s enormous repository of research and data, said McMillan.

Unlocking the Secrets of Deep Learning with Tensorleap’s Explainability Platform

Deep Learning (DL) advances have cleared the way for intriguing new applications and are influencing the future of Artificial Intelligence (AI) technology. However, a typical concern for DL models is their explainability: experts commonly agree that Neural Networks (NNs) function as black boxes. We do not know precisely what happens inside; we only know that an input goes in, is somehow processed, and an output comes out. For this reason, DL models can be difficult to understand or interpret, and working out why a model makes certain predictions, or how to improve it, can be challenging.

This article will introduce and emphasize the importance of NN explainability, provide insights into how to achieve it, and suggest tools that could improve your DL model’s performance.
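As a taste of what explainability can mean in practice, here is a minimal sketch of one common technique, input saliency, approximated with finite differences on a toy hand-built "model." This is an illustrative assumption on our part, not Tensorleap's method: the idea is simply that features whose small perturbations move the output the most are the ones the model is most sensitive to.

```python
import math

WEIGHTS = [0.5, -1.2, 2.0]  # a tiny stand-in "network": one weighted sum

def toy_model(x):
    """Weighted sum squashed by tanh; stands in for any scalar-output model."""
    return math.tanh(sum(w * xi for w, xi in zip(WEIGHTS, x)))

def saliency(model, x, eps=1e-5):
    """Approximate |d model / d x_i| via central finite differences.
    A larger score means input feature i influences the output more."""
    scores = []
    for i in range(len(x)):
        up, down = list(x), list(x)
        up[i] += eps
        down[i] -= eps
        scores.append(abs((model(up) - model(down)) / (2 * eps)))
    return scores

scores = saliency(toy_model, [1.0, 0.5, -0.3])
# The feature attached to the largest-magnitude weight (2.0) dominates.
```

Real explainability tooling replaces the finite-difference loop with exact gradients (backpropagation) and richer attribution methods, but the question it answers is the same: which inputs drove this output?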

GPT-4

It's Up!


We’ve created GPT-4, the latest milestone in OpenAI’s effort in scaling up deep learning. GPT-4 is a large multimodal model (accepting image and text inputs, emitting text outputs) that, while less capable than humans in many real-world scenarios, exhibits human-level performance on various professional and academic benchmarks. For example, it passes a simulated bar exam with a score around the top 10% of test takers; in contrast, GPT-3.5’s score was around the bottom 10%. We’ve spent 6 months iteratively aligning GPT-4 using lessons from our adversarial testing program as well as ChatGPT, resulting in our best-ever results (though far from perfect) on factuality, steerability, and refusing to go outside of guardrails.

Over the past two years, we rebuilt our entire deep learning stack and, together with Azure, co-designed a supercomputer from the ground up for our workload. A year ago, we trained GPT-3.5 as a first “test run” of the system. We found and fixed some bugs and improved our theoretical foundations. As a result, our GPT-4 training run was (for us at least!) unprecedentedly stable, becoming our first large model whose training performance we were able to accurately predict ahead of time. As we continue to focus on reliable scaling, we aim to hone our methodology to help us predict and prepare for future capabilities increasingly far in advance—something we view as critical for safety.

We are releasing GPT-4’s text input capability via ChatGPT and the API (with a waitlist). To prepare the image input capability for wider availability, we’re collaborating closely with a single partner to start. We’re also open-sourcing OpenAI Evals, our framework for automated evaluation of AI model performance, to allow anyone to report shortcomings in our models to help guide further improvements.

Google announces AI features in Gmail, Docs, and more to rival Microsoft

Google will soon offer ways to generate text and images using machine learning in its Workspace products as part of a scramble to catch up with rivals in the new AI race.

Google has announced a suite of upcoming generative AI features for its various Workspace apps, including Google Docs, Gmail, Sheets, and Slides.


Google is pumping its productivity apps full of AI.

OpenAI releases GPT-4, a multimodal AI that it claims is state-of-the-art

OpenAI has released a powerful new image- and text-understanding AI model, GPT-4, that the company calls “the latest milestone in its effort in scaling up deep learning.”

GPT-4 is available today to OpenAI’s paying users via ChatGPT Plus (with a usage cap), and developers can sign up on a waitlist to access the API.

Pricing is $0.03 per 1,000 “prompt” tokens (about 750 words) and $0.06 per 1,000 “completion” tokens (again, about 750 words). Tokens represent raw text; for example, the word “fantastic” would be split into the tokens “fan,” “tas” and “tic.” Prompt tokens are the parts of words fed into GPT-4 while completion tokens are the content generated by GPT-4.
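At the rates quoted above, estimating the cost of a single call is simple arithmetic (the ~750-words-per-1,000-tokens figure is the article's rough approximation, and the example token counts below are made up for illustration):

```python
PROMPT_PRICE = 0.03 / 1000      # dollars per prompt token
COMPLETION_PRICE = 0.06 / 1000  # dollars per completion token

def estimate_cost(prompt_tokens, completion_tokens):
    """Back-of-the-envelope dollar cost of one GPT-4 call at the listed rates."""
    return prompt_tokens * PROMPT_PRICE + completion_tokens * COMPLETION_PRICE

# e.g. a 1,500-token prompt (~1,125 words) with a 500-token completion:
print(f"${estimate_cost(1500, 500):.3f}")  # $0.075
```

Note that completion tokens cost twice as much as prompt tokens, so verbose outputs dominate the bill faster than long prompts do.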

Microsoft lays off an ethical AI team as it doubles down on OpenAI

Microsoft laid off an entire team dedicated to guiding AI innovation that leads to ethical, responsible and sustainable outcomes. The cutting of the ethics and society team, as reported by Platformer, is part of a recent spate of layoffs that affected 10,000 employees across the company.

The elimination of the team comes as Microsoft invests billions more dollars into its partnership with OpenAI, the startup behind art- and text-generating AI systems like ChatGPT and DALL-E 2, and revamps its Bing search engine and Edge web browser to be powered by a new, next-generation large language model that is “more powerful than ChatGPT and customized specifically for search.”

The move calls into question Microsoft’s commitment to ensuring its product design and AI principles are closely intertwined at a time when the company is making its controversial AI tools available to the mainstream.

Anthropic launches Claude, a chatbot to rival OpenAI’s ChatGPT

Anthropic, a startup co-founded by ex-OpenAI employees, today launched something of a rival to the viral sensation ChatGPT.

Called Claude, Anthropic’s AI — a chatbot — can be instructed to perform a range of tasks, including searching across documents, summarizing, writing and coding, and answering questions about particular topics. In these ways, it’s similar to OpenAI’s ChatGPT. But Anthropic makes the case that Claude is “much less likely to produce harmful outputs,” “easier to converse with” and “more steerable.”

Organizations can request access. Pricing has yet to be detailed.