
Big AI needs to overcome the high scaling costs of generative AI

Tech companies are investing billions in developing and deploying generative AI. That money needs to be recouped. Recent reports and analysis show that it’s not easy.

According to the Wall Street Journal, citing a person familiar with the figures, Microsoft lost more than $20 per user per month on its generative AI coding assistant GitHub Copilot in the first few months of the year. Some users reportedly cost the company as much as $80 per month. Microsoft charges $10 per user per month.

The service loses money because the AI model that generates the code is expensive to run. GitHub Copilot is popular with developers and currently has about 1.5 million users, who constantly trigger the model to write more code.
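
The arithmetic behind those figures is worth spelling out: a flat $10 subscription set against variable inference costs implies Microsoft pays roughly $30 or more to serve an average user. A back-of-the-envelope sketch in Python, using only the numbers reported above (extrapolating the average loss across all 1.5 million users is an assumption for illustration, not a reported total):

# Rough per-user unit economics for GitHub Copilot, using the figures
# reported above. All amounts are US dollars per user per month.
price = 10              # what Microsoft charges
avg_loss = 20           # reported average loss per user
heavy_user_cost = 80    # reported cost of the most expensive users
users = 1_500_000       # reported user count

implied_avg_cost = price + avg_loss        # about $30 to serve an average user
heavy_user_loss = heavy_user_cost - price  # about $70 lost on a heavy user
naive_total_loss = users * avg_loss        # ~$30M/month if the average held for everyone

print(f"Implied average cost to serve: ${implied_avg_cost}/user/month")
print(f"Loss on a heavy user: ${heavy_user_loss}/user/month")
print(f"Naive total monthly shortfall: ${naive_total_loss:,}")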

AI was told to design a robot that could walk. Within seconds, it generated a ‘small, squishy, and misshapen’ thing that spasms

When a group of researchers asked an AI to design a robot that could walk, it created a “small, squishy and misshapen” thing that walks by spasming when filled with air.

The researchers — affiliated with Northwestern University, MIT, and the University of Vermont — published their findings in an article for the Proceedings of the National Academy of Sciences on October 3.

“We told the AI that we wanted a robot that could walk across land. Then we simply pressed a button and presto!” Sam Kriegman, an assistant professor at Northwestern University and the lead researcher behind the study, wrote in a separate blog post.

Exponential AI Growth — Are we close? — Transhuman Podcast #3

We are arguably at “the knee” of the curve. More breakthroughs have happened in the first nine months of 2023 than in all the previous years since the turn of the century (2001–2022).

Will AGI kill us all? Will we join with it? Is it even close? Is it just “cool stuff”? Will we have bootstrapping self-improving AI?

The podcast crew today:
On the panel (left to right): Stefan Van Der Wel, Oliver Engelmann, Brendan Clarke.
Host camera: Roy Sherfan, Simon Carter.
Off camera: Peter Xing.

How ChatGPT and other AI tools could disrupt scientific publishing

“It’s never really the goal of anybody to write papers — it’s to do science,” says Michael Eisen, a computational biologist at the University of California, Berkeley, who is also editor-in-chief of the journal eLife. He predicts that generative AI tools could even fundamentally transform the nature of the scientific paper.

But the spectre of inaccuracies and falsehoods threatens this vision. LLMs are merely engines for generating stylistically plausible output that fits the patterns of their inputs, rather than for producing accurate information. Publishers worry that a rise in their use might lead to greater numbers of poor-quality or error-strewn manuscripts — and possibly a flood of AI-assisted fakes.

“Anything disruptive like this can be quite worrying,” says Laura Feetham, who oversees peer review for IOP Publishing in Bristol, UK, which publishes physical-sciences journals.

Regulate AI Now

In the six months since FLI published its open letter calling for a pause on giant AI experiments, we have seen overwhelming expert and public concern about the out-of-control AI arms race — but no slowdown. In this video, we call for U.S. lawmakers to step in, and explore the policy solutions necessary to steer this powerful technology to benefit humanity.

BBC Will Block ChatGPT AI From Scraping Its Content

The BBC has blocked the artificial intelligence software behind ChatGPT from accessing or using its content.

The move aligns the BBC with Reuters, Getty Images and other content providers that have taken similar steps over copyright and privacy concerns. Generative AI systems can repurpose such content, producing new text, images and more from the data they are trained on.
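
In practice, publishers implement such blocks through their robots.txt files, disallowing OpenAI’s documented crawler token, GPTBot. A minimal sketch using Python’s standard library to check whether a given site’s robots.txt permits GPTBot (the BBC URL is illustrative; the live file may change):

# Check whether a site's robots.txt disallows OpenAI's GPTBot crawler.
# Uses only the Python standard library.
from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
rp.set_url("https://www.bbc.co.uk/robots.txt")
rp.read()  # fetch and parse the live robots.txt

# A robots.txt block for OpenAI's crawler looks like:
#   User-agent: GPTBot
#   Disallow: /
allowed = rp.can_fetch("GPTBot", "https://www.bbc.co.uk/news")
print("GPTBot allowed:", allowed)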

Rhodri Talfan Davies, the BBC’s director of nations, said the corporation was taking steps to safeguard the interests of licence fee payers as this new technology evolves.