OpenAI releases GPT-4, a multimodal AI that it claims is state-of-the-art

OpenAI has released a powerful new image- and text-understanding AI model, GPT-4, which the company calls “the latest milestone in its effort in scaling up deep learning.”

GPT-4 is available today to OpenAI’s paying users via ChatGPT Plus (with a usage cap), and developers can sign up on a waitlist to access the API.

Pricing is $0.03 per 1,000 “prompt” tokens (about 750 words) and $0.06 per 1,000 “completion” tokens (again, about 750 words). Tokens represent raw text; for example, the word “fantastic” would be split into the tokens “fan,” “tas” and “tic.” Prompt tokens are the parts of words fed into GPT-4, while completion tokens are the content generated by GPT-4.
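
To make the per-token pricing concrete, here is a rough back-of-the-envelope sketch in Python. It assumes the openly available tiktoken library and its cl100k_base encoding, which may not exactly match OpenAI’s production tokenizer, and the sample strings are invented.

    import tiktoken

    PROMPT_RATE = 0.03 / 1000      # dollars per prompt token
    COMPLETION_RATE = 0.06 / 1000  # dollars per completion token

    # cl100k_base is an assumption about which encoding GPT-4 uses.
    enc = tiktoken.get_encoding("cl100k_base")

    prompt = "Summarize the history of large language models in one paragraph."
    completion = "Large language models grew out of earlier neural translation work..."

    prompt_tokens = len(enc.encode(prompt))
    completion_tokens = len(enc.encode(completion))

    cost = prompt_tokens * PROMPT_RATE + completion_tokens * COMPLETION_RATE
    print(f"{prompt_tokens} prompt + {completion_tokens} completion tokens = ${cost:.5f}")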

Microsoft lays off an ethical AI team as it doubles down on OpenAI

Microsoft laid off an entire team dedicated to guiding AI innovation that leads to ethical, responsible and sustainable outcomes. The cutting of the ethics and society team, as reported by Platformer, is part of a recent spate of layoffs that affected 10,000 employees across the company.

The elimination of the team comes as Microsoft invests billions more dollars into its partnership with OpenAI, the startup behind art- and text-generating AI systems like ChatGPT and DALL-E 2, and revamps its Bing search engine and Edge web browser to be powered by a new, next-generation large language model that is “more powerful than ChatGPT and customized specifically for search.”

The move calls into question Microsoft’s commitment to ensuring its product design and AI principles are closely intertwined at a time when the company is making its controversial AI tools available to the mainstream.

Anthropic launches Claude, a chatbot to rival OpenAI’s ChatGPT

Anthropic, a startup co-founded by ex-OpenAI employees, today launched something of a rival to the viral sensation ChatGPT.

Called Claude, Anthropic’s AI — a chatbot — can be instructed to perform a range of tasks, including searching across documents, summarizing, writing and coding, and answering questions about particular topics. In these ways, it’s similar to OpenAI’s ChatGPT. But Anthropic makes the case that Claude is “much less likely to produce harmful outputs,” “easier to converse with” and “more steerable.”

Organizations can request access. Pricing has yet to be detailed.

Exploring The Ins And Outs Of The Generative AI Boom

AI or bust. Right now, AI is what everyone is talking about, and for good reason. After years of seeing AI doled out to help automate the processes that make businesses run smarter, we’re finally seeing AI that can help the average business employee working in the real world. Generative AI, the practice of using algorithms to produce data, often in the form of images or text, has exploded in the last few months. What started with OpenAI’s ChatGPT has bloomed into a rapidly evolving subcategory of technology, and companies from Microsoft to Google to Salesforce and Adobe are hopping on board.


What started with ChatGPT has bloomed into an entire subcategory of technology, with Meta, AWS, Salesforce, Google and Microsoft all racing to out-innovate one another and deliver generative AI capabilities to consumers, enterprises, developers and more. The piece explores this rapid progress in the AI space.

Microsoft spent millions to put together a supercomputer for OpenAI

Now it’s building one that’s even bigger and even more sophisticated.

Nearly five years ago, a little-known company approached Microsoft with a special request: to put together computing horsepower at a scale Microsoft had never attempted before. Microsoft then spent millions of dollars assembling tens of thousands of powerful chips into a supercomputer. OpenAI used it to train its large language model, GPT, and the rest, as they say, is history.

Microsoft is no stranger to building artificial intelligence (AI) models that help users work more efficiently. The automatic spell checker that has helped millions of users is an example of an AI model trained on language.


Image (Microsoft): How Microsoft put together a supercomputer for OpenAI.

Superhuman artificial intelligence can improve human decision-making

How will superhuman artificial intelligence (AI) affect human decision-making? And what will be the mechanisms behind this effect? We address these questions in a domain where AI already exceeds human performance, analyzing more than 5.8 million move decisions made by professional Go players over the past 71 years (1950 to 2021). To address the first question, we use a superhuman AI program to estimate the quality of human decisions across time, generating 58 billion counterfactual game patterns and comparing the win rates of actual human decisions with those of counterfactual AI decisions. We find that humans began to make significantly better decisions following the advent of superhuman AI. We then examine human players’ strategies across time and find that novel decisions (i.e., previously unobserved moves) occurred more frequently and became associated with higher decision quality after the advent of superhuman AI. Our findings suggest that the development of superhuman AI programs may have prompted human players to break away from traditional strategies and induced them to explore novel moves, which in turn may have improved their decision-making.
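
The core measurement, comparing the win rate of the move a human actually played against the win rate of the move the engine would have played, can be sketched roughly as follows. This is a hypothetical illustration rather than the authors’ pipeline; the Evaluation structure and the numbers in it are invented stand-ins for a superhuman engine’s output.

    from dataclasses import dataclass

    @dataclass
    class Evaluation:
        best_move: str
        win_rates: dict  # move -> estimated win probability for the player to move

    def decision_quality(evaluation: Evaluation, human_move: str) -> float:
        """Win-rate gap between the human's actual move and the engine's choice.
        Zero means the human matched the engine; negative values measure the loss."""
        best = evaluation.win_rates[evaluation.best_move]
        return evaluation.win_rates.get(human_move, 0.0) - best

    # Toy position: the engine prefers "D4" (54% win rate); the human played "C3".
    ev = Evaluation(best_move="D4", win_rates={"D4": 0.54, "C3": 0.51, "Q16": 0.50})
    print(round(decision_quality(ev, "C3"), 3))  # -0.03, a three-point win-rate loss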

Synaptic Wiring Map for Whole Insect Brain Completed

The fruit fly larva connectome showed circuit features that were strikingly reminiscent of prominent and powerful machine learning architectures. “Some of the architectural features observed in the Drosophila larval brain, including multilayer shortcuts and prominent nested recurrent loops, are found in state-of-the-art artificial neural networks, where they can compensate for a lack of network depth and support arbitrary, task-dependent computations,” they wrote. The team expects continued study will reveal even more computational principles and potentially inspire new artificial intelligence systems. “What we learned about code for fruit flies will have implications for the code for humans,” Vogelstein said. “That’s what we want to understand—how to write a program that leads to a human brain network.”
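
For readers unfamiliar with the terms, the toy network below shows what a multilayer shortcut (a skip connection) and a recurrent loop look like in an artificial neural network. It illustrates those two architectural features only, assumes NumPy, and does not model the fly brain or the study’s data.

    import numpy as np

    rng = np.random.default_rng(0)
    W_in, W_rec, W_skip = (rng.normal(scale=0.1, size=(8, 8)) for _ in range(3))

    def step(x, h):
        """One update: the hidden layer feeds back onto itself (recurrent loop),
        while a shortcut lets the input bypass the hidden layer entirely."""
        h_new = np.tanh(W_in @ x + W_rec @ h)   # recurrent loop
        out = h_new + W_skip @ x                # multilayer shortcut
        return out, h_new

    x = rng.normal(size=8)
    h = np.zeros(8)
    for _ in range(3):      # unroll a few time steps
        out, h = step(x, h)
    print(out.shape)        # (8,)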
