Building An Expert GPT in Physics-Informed Neural Networks, with GPTs

One of the most interesting releases at OpenAI’s recent DevDay is GPTs. Essentially, GPTs are custom versions of ChatGPT that anyone can create for specific purposes. Configuring a workable GPT involves no coding; it is done purely through chatting. As a result, since the release, the community has created a diverse range of GPTs to help users be more productive and have more fun.

As a practitioner in the domain of physics-informed neural networks (PINN), I use ChatGPT (GPT-4) a lot to help me understand complex technical concepts, debug issues encountered when implementing the model, and suggest novel research ideas or engineering solutions. Although ChatGPT is quite useful, I often find it struggles to give me tailored answers beyond its general knowledge of PINN. I can tweak my prompts to incorporate more contextual information, but that is a time-consuming practice, and it can quickly deplete my patience.

Now with the possibility of easily customizing ChatGPT, a thought occurred to me: why not develop a customized GPT that acts as a PINN expert 🦸‍♀️, draws knowledge from my curated sources, and strives to answer my queries about PINN in a tailored way?
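For readers unfamiliar with the domain, a PINN trains a neural network to satisfy a differential equation by penalizing the equation's residual alongside boundary conditions. The following minimal sketch (not from the article; it uses finite differences in place of the automatic differentiation a real PINN would use, and a toy ODE chosen for illustration) shows the kind of loss such a model minimizes:

```python
import numpy as np

def pinn_style_loss(u, xs, h=1e-5):
    """Loss a PINN would minimize for the toy ODE u'(x) = u(x), u(0) = 1:
    the mean squared PDE residual (approximated here with central
    differences; real PINNs use autodiff) plus the boundary mismatch."""
    du = (u(xs + h) - u(xs - h)) / (2 * h)    # approximate u'(x)
    residual = du - u(xs)                     # vanishes for a true solution
    bc = u(np.array([0.0])) - 1.0             # boundary condition u(0) = 1
    return np.mean(residual**2) + np.mean(bc**2)

xs = np.linspace(0.0, 1.0, 50)
loss_true = pinn_style_loss(np.exp, xs)          # exact solution: near-zero loss
loss_bad = pinn_style_loss(lambda x: 1 + x, xs)  # poor candidate: large loss
print(loss_true, loss_bad)
```

In an actual PINN, `u` would be a trainable network and this loss would be minimized by gradient descent; the sketch only illustrates what the "physics-informed" objective measures.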

Novel Modes of Neural Computation: From Nanowires to Mind

The human mind is by far one of the most amazing natural phenomena known to man. It embodies our perception of reality, and is in that respect the ultimate observer. The past century produced monumental discoveries regarding the nature of nerve cells, the anatomical connections between nerve cells, the electrophysiological properties of nerve cells, and the molecular biology of nervous tissue. What remains to be uncovered is that essential something – the fundamental dynamic mechanism by which all these well understood biophysical elements combine to form a mental state. In this chapter, we further develop the concept of an intraneuronal matrix as the basis for autonomous, self-organized neural computing, bearing in mind that at this stage such models are speculative. The intraneuronal matrix – composed of microtubules, actin filaments, and cross-linking, adaptor, and scaffolding proteins – is envisioned to be an intraneuronal computational network, which operates in conjunction with traditional neural membrane computational mechanisms to provide vastly enhanced computational power to individual neurons as well as to larger neural networks. Both classical and quantum mechanical physical principles may contribute to the ability of these matrices of cytoskeletal proteins to perform computations that regulate synaptic efficacy and neural response. A scientifically plausible route for controlling synaptic efficacy is through the regulation of neural transport of synaptic proteins and of mRNA. Operations within the matrix of cytoskeletal proteins that have applications to learning, memory, perception, and consciousness, and conceptual models implementing classical and quantum mechanical physics are discussed. Nanoneuroscience methods are emerging that are capable of testing aspects of these conceptual models, both theoretically and experimentally. Incorporating intraneuronal biophysical operations into existing theoretical frameworks of single neuron and neural network function stands to enhance existing models of neurocognition.

Creating optical logic gates from graphene nanoribbons

Research into artificial intelligence (AI) network computing has made significant progress in recent years but has so far been held back by the limitations of logic gates in conventional computer chips. Through new research published in The European Physical Journal D, a team led by Aijin Zhu at Guilin University of Electronic Technology, China, introduced a graphene-based optical logic gate, which addresses many of these challenges.

The design could lead to a new generation of computer chips that consume less energy while reaching higher computing speeds and efficiencies. This could, in turn, pave the way for the use of AI in computer networks to automate tasks and improve decision-making—leading to enhanced performance, security, and functionality.

There are many advantages to microchips whose component logic gates exchange signals using light instead of electrical current. However, current designs are often bulky, somewhat unstable, and vulnerable to information loss.

The Real Reason Sam Altman was FIRED From OpenAI

OpenAI is in discussions with Sam Altman to return to the company as its CEO, according to multiple people familiar with the matter. One of them said Altman, who was suddenly fired by the board on Friday with no notice, is “ambivalent” about coming back and would want significant governance changes. There have been some leaks about the real reason Sam Altman was fired. OpenAI’s largest investor, Microsoft, said in a statement shortly after Altman’s firing that the company “remains committed” to its partnership with the AI firm. However, OpenAI’s investors weren’t given advance warning or the opportunity to weigh in on the board’s decision to remove Altman. As the face of the company and the most prominent voice in AI, his removal throws the future of OpenAI into uncertainty at a time when rivals are racing to catch up with the unprecedented rise of ChatGPT.

TIMESTAMPS:
00:00 Vibes were off at OpenAI — Jimmy Apples.
00:55 Why Sam Altman was fired.
03:15 The Future of OpenAI and Sam Altman.

#openai #samaltman #microsoft

Hypotheses devised by AI could find ‘blind spots’ in research

In early October, as the Nobel Foundation announced the recipients of this year’s Nobel prizes, a group of researchers, including a previous laureate, met in Stockholm to discuss how artificial intelligence (AI) might have an increasingly creative role in the scientific process. The workshop, led in part by Hiroaki Kitano, a biologist and chief executive of Sony AI in Tokyo, considered creating prizes for AIs and AI–human collaborations that produce world-class science. Two years earlier, Kitano proposed the Nobel Turing Challenge: the creation of highly autonomous systems (‘AI scientists’) with the potential to make Nobel-worthy discoveries by 2050.

It’s easy to imagine that AI could perform some of the necessary steps in scientific discovery. Researchers already use it to search the literature, automate data collection, run statistical analyses and even draft parts of papers. Generating hypotheses — a task that typically requires a creative spark to ask interesting and important questions — poses a more complex challenge. For Sendhil Mullainathan, an economist at the University of Chicago Booth School of Business in Illinois, “it’s probably been the single most exhilarating kind of research I’ve ever done in my life”.

Elon Musk says the risk of advanced AI is so high that the public needs to know why OpenAI fired Sam Altman

Elon Musk said that the potential danger of AI is so great that OpenAI, the most powerful artificial intelligence company in the world right now, should disclose the reason it fired CEO Sam Altman. OpenAI announced Altman’s firing on Friday, saying only that the company, which makes ChatGPT, “no longer has confidence in his ability to continue leading.”

Musk, responding to a post on X from former Yammer CEO David Sacks, said that “given the risk and power of advanced AI, the public should be informed of why the board felt they had to take such a drastic decision.”

Sutro introduces AI-powered app creation with no coding required

AI is already transforming the way we search, gather information, create, code, decipher data and more, and now it may democratize the process of building an app, too. A new AI-powered startup called Sutro promises the ability to build entire production-ready apps — including those for web, iOS and Android — in a matter of minutes, with no coding experience required.

The idea is to allow founders to focus on their unique ideas by leaning on Sutro to automate other aspects of app building, including the necessary AI expertise, product management and design, hosting, use of domain-specific languages, compiling and scaling.

The company was founded in late 2021 by Tomas Halgas, who sold his previous startup, the group chat app Sphere, to Twitter, alongside former Google and Facebook Product Manager Owen Campbell-Moore. The two have taken turns running the company, with Campbell-Moore at the head while Halgas worked at Twitter in its chaotic days leading up to Elon Musk’s takeover. Now, with Halgas having departed Twitter, he’s acting as CEO as Campbell-Moore has shifted to a day job at OpenAI.

Meet Grok: Elon Musk’s new AI chatbot that can access X

Grok is based on xAI’s proprietary “Grok” model, which Musk claims is one of the best in the market.

So it was eventually coming, and now here it is. After initially announcing that it would release a new AI product, Elon Musk’s artificial intelligence company xAI has surprised the AI community by launching its first AI model, named “Grok.” Grok is a chatbot that can converse with users on various topics using xAI’s proprietary “Grok” model. What makes Grok stand out from other language models, such as OpenAI’s GPT and Google’s PaLM, is its ability to access information from X, Musk’s popular social media platform (formerly known as Twitter), in real time.


It’s also based & loves sarcasm. I have no idea who could have guided it this way 🤷‍♂️ 🤣 pic.twitter.com/e5OwuGvZ3Z — Elon Musk (@elonmusk) November 4, 2023