
Sam Altman, OpenAI Board Open Talks to Negotiate His Possible Return

Sam Altman and the OpenAI board are now in talks over his possible return; specifically, he is speaking with board member Adam D’Angelo.


Sam Altman and members of the OpenAI board have opened negotiations aimed at a possible return of the ousted co-founder and chief executive officer to the artificial intelligence company, according to people with knowledge of the matter.

Discussions are happening between Altman and at least one board member, Adam D’Angelo, said the people, who asked not to be identified because the deliberations are private and they may not come to fruition. The talks also involve some of OpenAI’s investors, many of whom are pushing for his reinstatement, one of the people said.

In one scenario being discussed, Altman would return as a director on a transitional board, one of the people said. Former Salesforce Inc. co-CEO Bret Taylor could also serve as a director on a new board, multiple people said.

Researchers seek consensus on what constitutes Artificial General Intelligence

A team of researchers at DeepMind focusing on the next frontier of artificial intelligence—Artificial General Intelligence (AGI)—realized they needed to resolve one key issue first. What exactly, they asked, is AGI?

AGI is generally viewed as a type of artificial intelligence that can understand, learn, and apply knowledge across a broad range of tasks, operating much like the human mind. Wikipedia broadens the scope by suggesting AGI is “a hypothetical type of intelligent agent [that] could learn to accomplish any intellectual task that human beings or animals can perform.”

OpenAI’s charter describes AGI as a set of “highly autonomous systems that outperform humans at most economically valuable work.”

New research maps 14 potential evolutionary dead ends for humanity and ways to avoid them

Humankind is on the verge of evolutionary traps, a new study warns.


For the first time, scientists have used the concept of evolutionary traps on human societies at large. They find that humankind risks getting stuck in 14 evolutionary dead ends, ranging from global climate tipping points to misaligned artificial intelligence, chemical pollution, and accelerating infectious diseases.

The evolution of humankind has been an extraordinary success story. But the Anthropocene—the proposed geological epoch shaped by us humans—is showing more and more cracks. Multiple global crises, such as the COVID-19 pandemic, financial crises, and conflicts, have started to occur simultaneously, in what scientists refer to as a polycrisis.

“Humans are incredibly creative as a species. We are able to innovate and adapt to many circumstances and can cooperate on surprisingly large scales. But these capabilities turn out to have unintended consequences. Simply speaking, you could say that the human species has been too successful and, in some ways, too smart for its own future good,” says Peter Søgaard Jørgensen, researcher at the Stockholm Resilience Centre at Stockholm University and at the Royal Swedish Academy of Sciences’ Global Economic Dynamics and the Biosphere program and Anthropocene Laboratory.

Generative AI startup AI21 Labs raises cash in the midst of OpenAI chaos

One AI startup’s undoing is another’s opportunity.

Case in point: Today, AI21 Labs, a company developing generative AI products along the lines of OpenAI’s GPT-4 and ChatGPT, closed a $53 million extension to its previously announced Series C funding round. The new tranche, which had participation from new investors Intel Capital and Comcast Ventures, brings AI21’s total raised to $336 million.

The startup’s valuation remains unchanged at $1.4 billion.

2024: The Year Microsoft’s AI-Driven Zero Trust Vision Delivers


Microsoft’s vision for zero trust security centers on generative AI and reflects how identity and network access must continually improve to counter increasingly complex cyberattacks.

The company’s many security announcements at Ignite 2023 reflect how it is architecting the future of zero trust with greater adaptability and contextual intelligence designed in. The Microsoft Ignite 2023 Book of News provides an overview of the products announced at this week’s event.
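In essence, zero trust means evaluating every access request against identity, device, and context signals rather than trusting the network perimeter. The sketch below is a minimal illustration of that decision logic, not Microsoft’s implementation; the signal names and the risk threshold are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user_verified: bool      # e.g., identity confirmed via MFA
    device_compliant: bool   # device meets security policy
    risk_score: float        # 0.0 (safe) to 1.0 (risky), from context signals

def evaluate_access(req: AccessRequest, risk_threshold: float = 0.5) -> str:
    """Grant, challenge, or deny based on identity, device, and context."""
    if not req.user_verified:
        return "deny"        # never trust an unverified identity
    if not req.device_compliant:
        return "challenge"   # require remediation or step-up authentication
    if req.risk_score > risk_threshold:
        return "challenge"   # anomalous context: demand re-verification
    return "grant"           # verified, compliant, low risk

# Example: a verified user on a compliant device, but in an unusual context
print(evaluate_access(AccessRequest(True, True, risk_score=0.7)))  # "challenge"
```

The point of the sketch is that the decision is re-evaluated on every request, and contextual risk can downgrade even a fully authenticated session to a challenge.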

Humans Make Better Cancer Treatment Decisions Than AI, Study Finds

Treating cancer is becoming increasingly complex, but also offers more and more possibilities. After all, the better a tumor’s biology and genetic features are understood, the more treatment approaches there are. To be able to offer patients personalized therapies tailored to their disease, laborious and time-consuming analysis and interpretation of various data is required. Researchers at Charité – Universitätsmedizin Berlin and Humboldt-Universität zu Berlin have now studied whether generative artificial intelligence (AI) tools such as ChatGPT can help with this step. This is one of many projects at Charité analyzing the opportunities unlocked by AI in patient care.

If the body can no longer repair certain genetic mutations itself, cells begin to grow unchecked, producing a tumor. The crucial factor in this phenomenon is an imbalance of growth-inducing and growth-inhibiting factors, which can result from changes in oncogenes – genes with the potential to cause cancer – for example. Precision oncology, a specialized field of personalized medicine, leverages this knowledge by using specific treatments such as low-molecular-weight inhibitors and antibodies to target and disable hyperactive oncogenes.
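To make the study’s subject concrete, a workflow of the kind Charité evaluated might hand a tumor’s molecular profile to a generative AI model and ask for treatment hypotheses for a molecular tumor board to review. The sketch below uses the OpenAI Python client; the prompt wording, mutation list, and model choice are illustrative assumptions, and any output would still require expert verification.

```python
# Hypothetical sketch of LLM-assisted variant interpretation. The prompt,
# model choice, and mutation list are illustrative only; any output would
# need review by a molecular tumor board before clinical use.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

tumor_profile = {
    "diagnosis": "non-small cell lung cancer",
    "mutations": ["EGFR L858R", "TP53 R273H"],
}

prompt = (
    f"A patient has {tumor_profile['diagnosis']} with the following "
    f"mutations: {', '.join(tumor_profile['mutations'])}. "
    "List targeted therapy options relevant to these alterations, "
    "with a one-line rationale for each."
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

The study’s finding that humans still make better treatment decisions suggests such output is, at best, a starting point for the laborious interpretation step, not a replacement for it.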

OpenAI Employees Say Firm’s Chief Scientist Has Been Making Strange Spiritual Claims

Why does it feel like everybody at OpenAI has lost their mind?

In what’s arguably turning into the hottest AI story of the year, former OpenAI CEO Sam Altman was ousted by the rest of the company’s nonprofit board on Friday, leading to a seemingly endless drama cycle that’s included hundreds of staffers threatening to quit en masse if the board doesn’t reinstate him.

A key character in the spectacle has been OpenAI chief scientist and board member Ilya Sutskever, who, according to The Atlantic, likes to burn effigies and lead ritualistic chants at the company, and who appears to have been one of the main drivers behind Altman’s ousting.

What OpenAI shares with Scientology

Ms. McCauley and Ms. Toner [HF — two board members] have ties to the Rationalist and Effective Altruist movements, a community that is deeply concerned that A.I. could one day destroy humanity. Today’s A.I. technology cannot destroy humanity. But this community believes that as the technology grows increasingly powerful, these dangers will arise.

McCauley and Toner reportedly worried that Altman was pushing too hard, too quickly for new and potentially dangerous forms of AI (similar fears led some OpenAI people to bail out and found a competitor, Anthropic, a couple of years ago). The FT’s reporting confirms that the fight was over how quickly to commercialize AI.

The backstory to all of this is actually much weirder than the average sex scandal. The field of AI (in particular, its debates around Large Language Models (LLMs) like OpenAI’s GPT-4) is profoundly shaped by cultish debates among people with some very strange beliefs.