
The iPhone Moment of A.I. Has Started

The “iPhone moment for A.I.” hype takes many hues, but for Nvidia it is about the future of computing itself. NVIDIA DGX supercomputers, originally built as AI research instruments, are now running 24/7 at businesses across the world to refine data and process AI.

While OpenAI gets a lot of the glory, I believe the credit should go to Nvidia. Launched late last year, ChatGPT went mainstream almost instantly, attracting over 100 million users and becoming the fastest-growing application in history. “We are at the iPhone moment of AI,” Huang said. Nvidia brings in roughly $6 to $7 billion in revenue per fiscal quarter.

Nvidia said it’s offering a new set of cloud services that will allow businesses to create and use their own AI models based on their proprietary data and specific needs. The new services, called Nvidia AI Foundations, include three major components and are meant to accelerate enterprise adoption of generative AI. Enterprises can use the Nvidia NeMo language service or the Nvidia Picasso image, video and 3D service to gain access to foundation models that generate text or images based on user inputs; the third component, BioNeMo, targets biology and drug discovery.
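In practice, “using a foundation model service” boils down to sending a prompt and receiving generated text. Below is a minimal sketch of what such a call could look like; the endpoint, model name, and request fields are hypothetical placeholders, not Nvidia’s actual NeMo or Picasso API:

```python
# Hypothetical sketch of calling a hosted text-generation foundation model.
# The endpoint, model name, and field names are illustrative only --
# they are NOT Nvidia's actual NeMo service API.
import json
import urllib.request

API_URL = "https://api.example.com/v1/generate"  # placeholder endpoint
API_KEY = "YOUR_API_KEY"

def generate(prompt: str, max_tokens: int = 128) -> str:
    payload = json.dumps({
        "model": "custom-enterprise-llm",  # a model tuned on proprietary data
        "prompt": prompt,
        "max_tokens": max_tokens,
    }).encode("utf-8")
    req = urllib.request.Request(
        API_URL,
        data=payload,
        headers={"Authorization": f"Bearer {API_KEY}",
                 "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["text"]

print(generate("Summarize last quarter's support tickets."))
```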

Microsoft Researchers Claim GPT-4 Is Showing “Sparks” of AGI

Fresh on the heels of GPT-4’s public release, a team of Microsoft AI scientists published a research paper claiming the OpenAI language model — which powers Microsoft’s now somewhat lobotomized Bing AI — shows “sparks” of human-level intelligence, or artificial general intelligence (AGI).

Emphasis on the “sparks.” The researchers are careful in the paper to characterize GPT-4’s prowess as “only a first step towards a series of increasingly generally intelligent systems” rather than full-fledged, human-level AI. They also repeatedly highlight that the paper is based on an “early version” of GPT-4, which they studied while it was “still in active development by OpenAI,” and not necessarily the version that has since been wrangled into product-ready form.

Disclaimers aside, though, these are serious claims to make. A lot of folks out there, even some within the AI industry, think of AGI as a pipe dream, while others believe its development will usher in the next era of humanity’s future. GPT-4 is the most powerful iteration of OpenAI’s large language model (LLM) to date, and on any theoretical list of potential AGI contenders, it sits near the top, if not at number one.

AI Inception 🤯 New Groundbreaking AI from Stanford

This sure didn’t take long — a ChatGPT clone for your desktop.


In this video I discuss a new AI model developed by researchers at Stanford, and how AI models can train each other to get better. Exciting times!
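The core trick behind Alpaca and Self-Instruct is simple to sketch: a strong “teacher” model is prompted to invent new instruction-following examples from a handful of seed tasks, and the resulting dataset is used to fine-tune a smaller “student” model. The sketch below is an illustrative simplification with hypothetical stand-ins (query_teacher, finetune), not the Stanford code; see the linked papers for the real pipeline:

```python
# Minimal sketch of the Self-Instruct / Alpaca idea: a teacher model generates
# instruction-following examples that supervise a smaller student model.
import json
import random

seed_tasks = [
    {"instruction": "Give three tips for staying healthy.", "output": "..."},
    {"instruction": "Translate 'good morning' into French.", "output": "..."},
]

def query_teacher(prompt: str) -> str:
    """Placeholder for a call to a large teacher model (e.g. an API LLM)."""
    raise NotImplementedError

def generate_examples(n: int) -> list[dict]:
    examples = list(seed_tasks)
    while len(examples) < n:
        # Show the teacher a few existing tasks and ask for a new one.
        demos = random.sample(examples, k=min(3, len(examples)))
        prompt = ("Here are some tasks:\n" + json.dumps(demos) +
                  "\nWrite one new task in the same JSON format.")
        examples.append(json.loads(query_teacher(prompt)))
    return examples

# The generated dataset then drives ordinary supervised fine-tuning:
# finetune(student_model, generate_examples(52_000))   # hypothetical helper
```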

Stanford Paper: https://crfm.stanford.edu/2023/03/13/alpaca.html
Self-Instruct Paper: https://arxiv.org/abs/2212.10560
Twitter Thread: https://twitter.com/moahitis/status/1636019657898434561?s=20

Support me on Patreon: https://www.patreon.com/AnastasiInTech

Biohybrid robot made with mouse muscles successfully walks, might think and boink later

Robots in their current form contribute far more to our modern-day life than you may realise. They may not be the sci-fi androids many imagine, but they’re hard at work doing tasks like building cars, or learning how to control nuclear fusion. Only in recent years are we starting to see robots like you might have imagined as a kid, with Boston Dynamics’ creations doing all sorts of crazy stunts like dancing or guarding Pompeii.

Robotics isn’t all about metal machines, it turns out, and biohybrid robots may be part of our cyberpunk future too. It’s only been a few days since I was introduced to OSCAR, an artist’s rendition of a disgustingly meaty, pulsating flesh robot. As wonderful and vivid as those videos are, it’s a good time to take a palate cleanser with a look at a real-world biohybrid robot.

Scientists at DeepMind and Meta Press Fusion of AI, Biology

“AlphaFold was a huge advance in protein structure prediction. We were inspired by the advances they made, which led to a whole new wave of using deep learning,” said Professor David Baker, a biochemist and computational biologist at the University of Washington.

“The advantage of ESMFold is that it is very fast, and so can be used to predict the structures of a larger set of proteins than AlphaFold, albeit with slightly lower accuracy, similar to that of RoseTTAFold,” Dr. Baker said, referring to a tool that emerged from his lab in 2021.

DeepMind open-sourced the code for AlphaFold2, making it freely available to the community. Nearly all proteins known to science—about 214 million—can be looked up in the public AlphaFold Protein Structure Database. Meta’s ESM Metagenomic Atlas includes 617 million proteins.
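For a sense of what “very fast” looks like in practice, here is a minimal single-sequence prediction sketch following the usage pattern documented in Meta’s facebookresearch/esm repository (the API may have changed since; the sequence is an arbitrary example):

```python
# Sketch of single-sequence structure prediction with ESMFold, per the
# facebookresearch/esm repository's documented usage at the time.
import torch
import esm  # pip install "fair-esm[esmfold]"

model = esm.pretrained.esmfold_v1()
model = model.eval().cuda()  # ESMFold is large; a GPU is effectively required

# An arbitrary example amino-acid sequence (one-letter codes).
sequence = "MKTVRQERLKSIVRILERSKEPVSGAQLAEELSVSRQVIVQDIAYLRSLGYNIVATPRGYVLAGG"

with torch.no_grad():
    pdb_string = model.infer_pdb(sequence)  # predicted structure as PDB text

with open("prediction.pdb", "w") as f:
    f.write(pdb_string)
```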

Lab-Grown Brain Learns Pong — Is This Biological Neural Network “Sentient”?

A leading neuroscientist claims that a Pong-playing clump of about a million neurons is “sentient.” What does that mean? Why did they teach a lab-grown brain to play Pong? To study biological self-organization at the root of life, intelligence, and consciousness. And, according to their website, “to see what happens.”

CORRECTIONS/Clarifications:
- The cells aren’t directly frozen in liquid nitrogen — they are put in vials and stored in liquid nitrogen: https://www.atcc.org/products/pcs-201-010
- The sentience of some invertebrates, like octopuses, is generally agreed upon. Prominent scientists affirmed non-human consciousness in the Cambridge Declaration on Consciousness: https://philiplow.foundation/consciousness/

DISCLAIMER: The explanations in this video are those proposed by the researchers, or my opinion. We are far from understanding how brains, or even neurons, work. The free energy principle is one of many potential explanations.
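For reference, the variational free energy the researchers appeal to has a standard textbook form. This formulation comes from the broader literature on Friston’s free energy principle, not from the video itself:

```latex
% Variational free energy: o = observations, s = hidden causes,
% q(s) = the system's internal beliefs about those causes.
F = \mathbb{E}_{q(s)}\big[\ln q(s) - \ln p(o, s)\big]
  = D_{\mathrm{KL}}\big[q(s)\,\|\,p(s \mid o)\big] - \ln p(o)
```

A system that minimizes F is simultaneously making its internal beliefs more accurate (the KL term) and its sensory inputs less surprising (the log-evidence term), which is the proposed account of why the neurons would learn to keep the rally going.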

Support the channel: https://www.patreon.com/IhmCurious

Footage from Cortical Labs: https://www.youtube.com/watch?v=neV3aZtTgVM
NASJAQ’s interview with founder Hon Weng Chong: https://www.youtube.com/watch?v=Y1R5k5QWPsY
Cortical Labs website: https://corticallabs.com

Full paper on DishBrain: https://www.cell.com/neuron/fulltext/S0896-6273(22)00806-6

New LiGO technique accelerates training of large machine-learning models

It’s no secret that OpenAI’s ChatGPT has some incredible capabilities—for instance, the chatbot can write poetry that resembles Shakespearean sonnets or debug code for a computer program. These abilities are made possible by the massive machine-learning model that ChatGPT is built upon. Researchers have found that when these types of models become large enough, extraordinary capabilities emerge.

But bigger models also require more time and money to train. The training process involves showing hundreds of billions of examples to a model. Gathering so much data is an involved process in itself. Then come the monetary and environmental costs of running many powerful computers for days or weeks to train a model that may have billions of parameters.

“It’s been estimated that training models at the scale of what ChatGPT is hypothesized to run on could take millions of dollars, just for a single training run. Can we improve the efficiency of these training methods, so we can still get good models in less time and for less money? We propose to do this by leveraging smaller language models that have previously been trained,” says Yoon Kim, an assistant professor in MIT’s Department of Electrical Engineering and Computer Science and a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL).
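The idea behind LiGO (a linear growth operator) is to reuse a smaller trained model’s weights as the starting point for a bigger one, expanding them through learned linear maps rather than initializing at random. Below is a toy sketch of the width-growth step, simplified from the paper’s description and not its actual code:

```python
# Toy sketch of the idea behind LiGO: initialize a wider layer from a smaller
# pretrained one via a learned linear expansion, instead of training from
# scratch. This is an illustrative simplification, not the paper's code.
import torch
import torch.nn as nn

d_small, d_large = 256, 512

# A weight matrix from the small, already-trained model.
W_small = torch.randn(d_small, d_small)

# Learnable expansion maps that "grow" the weights to the larger width.
# In LiGO these operators are themselves trained briefly on data so the
# grown network matches the small model's behavior as closely as possible.
grow_out = nn.Parameter(torch.randn(d_large, d_small) * 0.02)
grow_in = nn.Parameter(torch.randn(d_small, d_large) * 0.02)

W_large = grow_out @ W_small @ grow_in  # (d_large, d_large) initialization

# W_large then initializes the corresponding layer of the bigger model,
# which continues training normally -- reportedly cutting training cost
# relative to random initialization.
print(W_large.shape)  # torch.Size([512, 512])
```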
