Toggle light / dark theme

The Unexpected Winners Of The ChatGPT Generative AI Revolution

OpenAI’s ChatGPT has taken the world like wildfire and continues to make headlines. However, the Generative Artificial Intelligence (GAI) has been around for a very long time. The technology was first pioneered in academia with Ian Goodfellow and Yoshua Bengio publishing their first seminal work on Generative Adversarial Networks in 2014 and then Google picked up the torch and published seminal papers and patents in both GANs and generative pre-trained transformers (GPT). In fact, my first paper on generative chemistry, was published in 2016, first granted patent in 2018, and the first AI-generated drug went through the first phase of clinical trials.


Forbes is one of the most reputable content providers on the planet and probably the most reputable when it comes to anything dealing with money. If Forbes does not classify you as a billionaire, you are not a billionaire. It has decades of high-quality expert-generated longitudinal text, and multimedia content in multiple languages. In addition to elite human reporters and editors, it also has a small army of content creators specializing in specific areas contributing to Forbes.com. For example, it is my 5th year as a contributor and I contribute regularly to keep the pencil sharp. This massive human intelligence may be partly repurposed to help develop internal generative resources within the Forbes empire, help curate the datasets and help train or benchmark third-party generative resources. I would gladly volunteer a small amount of time to such a task.

Nature and several other journals in the Nature Publishing Group portfolio are considered to be the Olympus in academic publishing. To publish in one of the elite Nature journals academics spend months and sometimes years going through the rounds of editorial and then peer-review. The quality of the data is questioned, all experimental data is disclosed, and the thousands or millions of dollars that went into the experiments are presented in the form of a paper and supplementary materials.

Having vast amounts of highest-quality data that is not available to the public gives Nature and other publishers the ability to either develop their own versions of ChatGPT, sell or license the data, and restructure the editorial and review processes to create more value for the future generative systems.

AI Startups Boom In San Francisco Amid $100 Billion Google Mistake

On stage in front of packed rooms, at lavish private venture capital dinners and over casual games of ping pong, San Francisco’s AI startup scene is blowing up. With tens of thousands of laid off software engineers with time to tinker, glistening empty buildings beckoning them to start something new, and billions of dollars in idle cash in need of investing, it’s no surprise that just weeks after viral AI companion ChatGPT made its jaw-dropping debut on Nov. 30, one of the smartest cities in the world would pick generative AI as the driver of its next economic boom.


San Francisco’s AI startup scene is blowing up thanks to the ChatGPT craze and backing from investors like Y Combinator, Bessemer, Coatue, Andreessen Horowitz, Tiger Global, Mark Benioff’s Time Ventures and Ashton Kutcher’s Sound Ventures. Meet the players who are shaking up the tech mecca.

Can AI really be protected from text-based attacks?

When Microsoft released Bing Chat, an AI-powered chatbot co-developed with OpenAI, it didn’t take long before users found creative ways to break it. Using carefully tailored inputs, users were able to get it to profess love, threaten harm, defend the Holocaust and invent conspiracy theories. Can AI ever be protected from these malicious prompts?

What set it off is malicious prompt engineering, or when an AI, like Bing Chat, that uses text-based instructions — prompts — to accomplish tasks is tricked by malicious, adversarial prompts (e.g. to perform tasks that weren’t a part of its objective. Bing Chat wasn’t designed with the intention of writing neo-Nazi propaganda. But because it was trained on vast amounts of text from the internet — some of it toxic — it’s susceptible to falling into unfortunate patterns.

Adam Hyland, a Ph.D. student at the University of Washington’s Human Centered Design and Engineering program, compared prompt engineering to an escalation of privilege attack. With escalation of privilege, a hacker is able to access resources — memory, for example — normally restricted to them because an audit didn’t capture all possible exploits.

How An Early Warning Radar Could Prevent Future Pandemics

On December 18, 2019, Wuhan Central Hospital admitted a patient with symptoms common for the winter flu season: a 65-year-old man with fever and pneumonia. AI Fen, director of the emergency department, oversaw a typical treatment plan, including antibiotics and anti-influenza drugs.

Six days later, the patient was still sick, and AI was puzzled, according to news reports and a detailed reconstruction of this period by evolutionary biologist Michael Worobey. The respiratory department decided to try to identify the guilty pathogen by reading its genetic code, a process called sequencing. They rinsed part of the patient’s lungs with saline, collected the liquid, and sent the sample to a biotech company. On December 27, the hospital got the results: The man had contracted a new coronavirus closely related to the one that caused the SARS outbreak that began 17 years before.

Lawrence Krauss: ChatGPT riddled with wokism, as it is programmed to avoid giving offence

As chatbot responses begin to proliferate throughout the Internet, they will, in turn, impact future machine learning algorithms that mine the Internet for information, thus perpetuating and amplifying the impact of the current programming biases evident in ChatGPT.

ChatGPT is admittedly a work in progress, but how the issues of censorship and offense ultimately play out will be important. The last thing anyone should want in the future is a medical diagnostic chatbot that refrains from providing a true diagnosis that may cause pain or anxiety to the receiver. Providing information guaranteed not to disturb is a sure way to squash knowledge and progress. It is also a clear example of the fallacy of attempting to input “universal human values” into AI systems, because one can bet that the choice of which values to input will be subjective.

If the future of AI follows the current trend apparent in ChatGPT, a more dangerous, dystopic machine-based future might not be the one portrayed in the Terminator films but, rather, a future populated by AI versions of Fahrenheit 451 firemen.

We’re All Gonna Die with Eliezer Yudkowsky

Eliezer Yudkowsky is an author, founder, and leading thinker in the AI space.


✨ DEBRIEF | Unpacking the episode:
https://shows.banklesshq.com/p/debrief-eliezer.


✨ COLLECTIBLES | Collect this episode:
https://collectibles.bankless.com/mint.


We wanted to do an episode on AI… and we went deep down the rabbit hole. As we went down, we discussed ChatGPT and the new generation of AI, digital superintelligence, the end of humanity, and if there’s anything we can do to survive.

This conversation with Eliezer Yudkowsky sent us into an existential crisis, with the primary claim that we are on the cusp of developing AI that will destroy humanity.

Be warned before diving into this episode, dear listener. Once you dive in, there’s no going back.

ChatGPT fails PSLE after acing Wharton Business School exam

Was given test for Singapore students and failed.


The publication compared ChatGPT with students who have taken the PSLE in the past three years using questions from the latest collection of past year papers that are available in bookstores.

ChatGPT received a mere 16 out of 100 points for three math papers, 21 points for the science papers, and 11 out of 20 for the English papers.

The bot was limited in that it was unable to answer questions that involved graphics or charts, receiving zero points for those sections. It, however, performed relatively better in answering questions that it could attempt to solve and managed to answer more than half of the questions in the maths papers and a quarter of the science papers’ questions, which predominantly included graphs as part of the questions.

Robots now have the sense of touch after electronic skin is created

A team of scientists has developed electronic skin that could pave the way for soft, flexible robotic devices to assist with surgical procedures or aid people’s mobility.

The creation of stretchable e-skin by Edinburgh researchers gives soft robots for the first time a level of physical self-awareness similar to that of people and animals.

The technology could aid breakthroughs in soft robotics by enabling devices to detect precisely their movement in the most sensitive of surroundings, experts say.

/* */