
Emmett Shear Becomes Interim OpenAI CEO as Altman Talks Break Down

Breaking: Sam Altman will not return as CEO of OpenAI.

Sam Altman won’t return as CEO of OpenAI, despite efforts by the company’s executives to bring him back, according to co-founder and board director Ilya Sutskever. After a weekend of negotiations with the board of directors that fired him Friday, as well as with its remaining leaders and top investors, Altman will not return to the startup he co-founded in 2015, Sutskever told staff. Emmett Shear, co-founder of Amazon-owned video streaming site Twitch, will take over as interim CEO, Sutskever said.

The decision—which flew in the face of comments OpenAI executives shared with staff on Saturday and early Sunday—could deepen a crisis precipitated by the board’s sudden ouster of Altman and its removal of President Greg Brockman from the board Friday. Brockman, a key engineer behind the company’s successes, resigned later that day, followed by three senior researchers, threatening to set off a broader wave of departures to OpenAI’s rivals, including Google, and to a new venture Altman has been plotting in the wake of his firing.

Distraught employees streamed out of OpenAI headquarters in San Francisco a little after 9 p.m., shortly after the decision was announced internally. One of the people who left the office appeared to be research chief Bob McGrew, who was one of the many executives working to bring Altman and Brockman back.

Building An Expert GPT in Physics-Informed Neural Networks, with GPTs

One of the most interesting releases from OpenAI's recent DevDay is GPTs. Essentially, GPTs are custom versions of ChatGPT that anyone can create for specific purposes. Configuring a workable GPT requires no coding; it is done purely through chatting. As a result, since the release, the community has created a diverse range of GPTs to help users be more productive and have more fun.

As a practitioner in the domain of physics-informed neural networks (PINN), I use ChatGPT (GPT-4) a lot to help me understand complex technical concepts, debug issues encountered when implementing the model, and suggest novel research ideas or engineering solutions. Although it is quite useful, I often find ChatGPT struggles to give me tailored answers beyond its general knowledge of PINN. I can tweak my prompts to incorporate more contextual information, but that is a rather time-consuming practice and can quickly deplete my patience.
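
For readers less familiar with the technique, a PINN trains a neural network to satisfy a differential equation by penalizing the equation's residual at sampled collocation points. Below is a minimal, illustrative sketch in PyTorch (not from the original article; the toy ODE du/dt = -u with u(0) = 1, the network size, and the training settings are my own assumptions) of the kind of implementation I typically ask ChatGPT to help me debug.

# Minimal PINN sketch (illustrative only): fit u(t) to the toy ODE du/dt = -u, u(0) = 1,
# whose exact solution is exp(-t). Architecture and hyperparameters are arbitrary choices.
import torch
import torch.nn as nn

torch.manual_seed(0)
net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(),
                    nn.Linear(32, 32), nn.Tanh(),
                    nn.Linear(32, 1))
optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)

t = torch.linspace(0.0, 2.0, 100).reshape(-1, 1).requires_grad_(True)  # collocation points
t0 = torch.zeros(1, 1)                                                 # initial-condition point

for step in range(2000):
    optimizer.zero_grad()
    u = net(t)
    # Automatic differentiation supplies du/dt at the collocation points.
    du_dt = torch.autograd.grad(u, t, grad_outputs=torch.ones_like(u), create_graph=True)[0]
    physics_loss = torch.mean((du_dt + u) ** 2)   # residual of du/dt = -u
    ic_loss = torch.mean((net(t0) - 1.0) ** 2)    # enforce u(0) = 1
    (physics_loss + ic_loss).backward()
    optimizer.step()

# After training, net(1.0) should be close to exp(-1), about 0.368.
print(net(torch.tensor([[1.0]])).item())

In practice, my debugging questions revolve around details such as the create_graph=True flag or how to weight the physics and initial-condition losses, which is exactly where tailored, domain-specific context would help.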

Now with the possibility of easily customizing ChatGPT, a thought occurred to me: why not develop a customized GPT that acts as a PINN expert 🦸‍♀️, draws knowledge from my curated sources, and strives to answer my queries about PINN in a tailored way?

Novel Modes of Neural Computation: From Nanowires to Mind

The human mind is one of the most amazing natural phenomena known to man. It embodies our perception of reality and is, in that respect, the ultimate observer. The past century produced monumental discoveries regarding the nature of nerve cells, the anatomical connections between nerve cells, the electrophysiological properties of nerve cells, and the molecular biology of nervous tissue. What remains to be uncovered is that essential something: the fundamental dynamic mechanism by which all these well-understood biophysical elements combine to form a mental state. In this chapter, we further develop the concept of an intraneuronal matrix as the basis for autonomous, self-organized neural computing, bearing in mind that at this stage such models are speculative. The intraneuronal matrix, composed of microtubules, actin filaments, and cross-linking, adaptor, and scaffolding proteins, is envisioned as an intraneuronal computational network that operates in conjunction with traditional neural membrane computational mechanisms to provide vastly enhanced computational power to individual neurons as well as to larger neural networks. Both classical and quantum mechanical physical principles may contribute to the ability of these matrices of cytoskeletal proteins to perform computations that regulate synaptic efficacy and neural response. A scientifically plausible route for controlling synaptic efficacy is through the regulation of neural transport of synaptic proteins and of mRNA. Operations within the matrix of cytoskeletal proteins that have applications to learning, memory, perception, and consciousness are discussed, along with conceptual models implementing classical and quantum mechanical physics. Nanoneuroscience methods are emerging that are capable of testing aspects of these conceptual models, both theoretically and experimentally. Incorporating intraneuronal biophysical operations into existing theoretical frameworks of single-neuron and neural-network function stands to enhance existing models of neurocognition.

Creating optical logic gates from graphene nanoribbons

Research into artificial intelligence (AI) network computing has made significant progress in recent years but has so far been held back by the limitations of logic gates in conventional computer chips. Through new research published in The European Physical Journal D, a team led by Aijin Zhu at Guilin University of Electronic Technology, China, introduced a graphene-based optical logic gate, which addresses many of these challenges.

The design could lead to a new generation of computer chips that consume less energy while reaching higher computing speeds and efficiencies. This could, in turn, pave the way for the use of AI in computer networks to automate tasks and improve decision-making—leading to enhanced performance, security, and functionality.

There are many advantages to microchips whose component logic gates exchange signals using light instead of electrical current. However, current designs are often bulky, somewhat unstable, and vulnerable to information loss.

The Real Reason Sam Altman was FIRED From OpenAI

OpenAI is in discussions with Sam Altman to return to the company as its CEO, according to multiple people familiar with the matter. One of them said Altman, who was suddenly fired by the board on Friday with no notice, is “ambivalent” about coming back and would want significant governance changes. There have been some leaks about the real reason Sam Altman was fired. OpenAI’s largest investor, Microsoft, said in a statement shortly after Altman’s firing that the company “remains committed” to its partnership with the AI firm. However, OpenAI’s investors weren’t given advance warning or an opportunity to weigh in on the board’s decision to remove Altman. Altman was the face of the company and the most prominent voice in AI, and his removal throws the future of OpenAI into uncertainty at a time when rivals are racing to catch up with the unprecedented rise of ChatGPT.

TIMESTAMPS:
00:00 Vibes were off at OpenAI — Jimmy Apples.
00:55 Why Sam Altman was fired.
03:15 The Future of OpenAI and Sam Altman.

#openai #samaltman #microsoft

Hypotheses devised by AI could find ‘blind spots’ in research

In early October, as the Nobel Foundation announced the recipients of this year’s Nobel prizes, a group of researchers, including a previous laureate, met in Stockholm to discuss how artificial intelligence (AI) might have an increasingly creative role in the scientific process. The workshop, led in part by Hiroaki Kitano, a biologist and chief executive of Sony AI in Tokyo, considered creating prizes for AIs and AI–human collaborations that produce world-class science. Two years earlier, Kitano proposed the Nobel Turing Challenge: the creation of highly autonomous systems (‘AI scientists’) with the potential to make Nobel-worthy discoveries by 2050.

It’s easy to imagine that AI could perform some of the necessary steps in scientific discovery. Researchers already use it to search the literature, automate data collection, run statistical analyses and even draft parts of papers. Generating hypotheses — a task that typically requires a creative spark to ask interesting and important questions — poses a more complex challenge. For Sendhil Mullainathan, an economist at the University of Chicago Booth School of Business in Illinois, “it’s probably been the single most exhilarating kind of research I’ve ever done in my life.”

Elon Musk says the risk of advanced AI is so high that the public needs to know why OpenAI fired Sam Altman

Elon Musk said that the potential danger of AI is so great that OpenAI, the most powerful artificial intelligence company in the world right now, should disclose the reason it fired CEO Sam Altman. OpenAI announced Altman’s firing on Friday, saying only that the company, which makes ChatGPT, “no longer has confidence in his ability to continue leading.”

Musk, responding to a post on X from former Yammer CEO David Sacks, said that “given the risk and power of advanced AI, the public should be informed of why the board felt they had to take such a drastic decision.”