
Like Humans, This Breakthrough AI Makes Concepts Out of the Words It Learns

Even as toddlers, we have an uncanny ability to turn what we learn about the world into concepts. With just a few examples, we form an idea of what makes a “dog” or what it means to “jump” or “skip.” These concepts are effortlessly mixed and matched inside our heads, resulting in a toddler pointing at a prairie dog and screaming, “But that’s not a dog!”

Last week, a team from New York University created an AI model that mimics a toddler’s ability to generalize language learning. In a nutshell, generalization is a sort of flexible thinking that lets us use newly learned words in new contexts—like an older millennial struggling to catch up with Gen Z lingo.

When pitted against adult humans in a language task for generalization, the model matched their performance. It also beat GPT-4, the AI algorithm behind ChatGPT.

Research claims novel algorithm can exactly compute information rate for any system

Seventy-five years ago, Claude Shannon, the “father of information theory,” showed how information transmission can be quantified mathematically via the so-called information transmission rate.

Yet until now, this quantity could only be computed approximately. AMOLF researchers Manuel Reinhardt and Pieter Rein ten Wolde, together with a collaborator from Vienna, have now developed a simulation technique that, for the first time, makes it possible to compute the information rate exactly for any system. The researchers have published their results in the journal Physical Review X.

To calculate the information rate exactly, the AMOLF researchers developed a novel simulation algorithm. It works by representing a complex physical system as an interconnected network that transmits the information via connections between its nodes. The researchers hypothesized that by looking at all the different paths the information can take through this network, it should be possible to obtain the information rate exactly.
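Shannon's information rate itself is easy to illustrate on a system simple enough to admit a closed form. The toy sketch below (my own illustration, not the AMOLF path-based algorithm) computes the exact rate of a binary symmetric channel, the kind of analytically solvable baseline an exact simulation method can be validated against:

```python
import math

def h2(p):
    """Binary entropy H(p) in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bsc_information_rate(flip_prob):
    """Information rate (bits per symbol) of a binary symmetric
    channel with the given flip probability, assuming uniform
    inputs: I = 1 - H(flip_prob)."""
    return 1.0 - h2(flip_prob)

print(bsc_information_rate(0.0))  # noiseless channel: 1 bit per symbol
print(bsc_information_rate(0.5))  # pure noise: 0 bits per symbol
```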

Memes, Genes, and Brain Viruses

Go to https://brilliant.org/EmergentGarden to get a 30-day free trial + the first 200 people will get 20% off their annual subscription.

Welcome to my meme complication video lolz rolf!!1! if you laugh you lose and you have to start the video over!! or if you blink or utilize any facial muscles! failing to restart or share the video will result in infinite suffering, success will result in infinite bliss.
☻/
/ ▌ copy me.
/ \

My Links.
Patreon: https://www.patreon.com/emergentgarden
Discord: https://discord.gg/ZsrAAByEnr
Music: https://youtube.com/@acolyte-compositions

Sources:
Richard Dawkins TED talk: https://www.youtube.com/watch?v=mvJVseKdlJA
Orange Justice: https://www.youtube.com/watch?v=i0E10FASz2c
Starling imitating camera: https://www.youtube.com/watch?v=S4IDaGo6a40
Baby imitating clapping: https://www.youtube.com/watch?v=uq1eZ8VZpSc
Cinnamon challenge guy: https://www.youtube.com/watch?v=IqmhuYtldd0
Dawkins “It works…bitches”: https://www.youtube.com/watch?v=0OtFSDKrq88
Virus reproduction animation: https://www.youtube.com/watch?v=QHHrph7zDLw
DNA replication animation: https://www.youtube.com/watch?v=7Hk9jct2ozY
Old bicycle: https://www.youtube.com/watch?v=p--RKyDFIAk
Charlie bit me: https://www.youtube.com/watch?v=0EqSXDwTq6U
Vsauce fingers: https://www.youtube.com/watch?v=UMwEbsbLw3k
Unknown Video (do not watch): https://www.youtube.com/watch?v=dQw4w9WgXcQ
The Internette: https://www.youtube.com/watch?v=Y5BZkaWZAAA
Beirut 2020 explosion: https://www.youtube.com/watch?v=LNDhIGR-83w
Rotating Duck: https://www.youtube.com/watch?v=ggDbDRUauzo
Free Real Estate: https://www.youtube.com/watch?v=cd4-UnU8lWY
Do I look like I know what a jpeg is: https://www.youtube.com/watch?v=QEzhxP-pdos
Free Real Estate asmr meme: https://www.youtube.com/watch?v=CCSE6x8T0jc
Silly red bird: https://www.youtube.com/watch?v=kUJw2eVYznw
Silly siamangs: https://www.youtube.com/watch?v=-JUhUI_KvUI
Strong vsauce: https://www.youtube.com/shorts/crVjtJyc-r4
Jazz Frog TikTok: https://www.youtube.com/watch?v=0BWiMF8_v6M
Barry Algorithm: https://www.youtube.com/watch?v=ktAbh39aoU8
AI Pizza Ad: https://www.youtube.com/watch?v=2tIwYNioDL8
I could not realistically credit everything.


New AI Model Counters Bias In Data With A DEI Lens

AI has exploded onto the scene in recent years, bringing both promise and peril. Systems like ChatGPT and Stable Diffusion showcase the tremendous potential of AI to enhance productivity and creativity. Yet they also reveal a dark reality: the algorithms often reflect the same systemic prejudices and societal biases present in their training data.

While the corporate world has quickly capitalized on integrating generative AI systems, many experts urge caution, considering the critical flaws in how AI represents diversity. Whether it’s text generators reinforcing stereotypes or facial recognition exhibiting racial bias, the ethical challenges cannot be ignored.



AI-ready architecture doubles power with FeFETs

Hussam Amrouch has developed an AI-ready architecture that is twice as powerful as comparable in-memory computing approaches. As reported in the journal Nature Communications (“First demonstration of in-memory computing crossbar using multi-level Cell FeFET”), the professor at the Technical University of Munich (TUM) applies a new computational paradigm using special circuits known as ferroelectric field effect transistors (FeFETs). Within a few years, this could prove useful for generative AI, deep learning algorithms and robotic applications.

  • The new architecture enables both data storage and calculations to be carried out on the same transistors, boosting efficiency and reducing heat.
  • The chip achieves 885 TOPS/W, far outperforming current CMOS chips, which operate in the range of 10–20 TOPS/W. That makes it well suited to applications like real-time drone calculations, generative AI, and deep learning algorithms.
Atom Computing is the first to announce a 1,000+ qubit quantum computer

How many qubits would we need in a quantum computer, accessible to a wide market, to truly have something sci-fi worthy?


Today, a startup called Atom Computing announced that it has been doing internal testing of a 1,180 qubit quantum computer and will be making it available to customers next year. The system represents a major step forward for the company, which had only built one prior system based on neutral atom qubits—a system that operated using only 100 qubits.

The error rate for individual qubit operations is high enough that it won’t be possible to run an algorithm that relies on the full qubit count without it failing due to an error. But it does back up the company’s claims that its technology can scale rapidly and provides a testbed for work on quantum error correction. And, for smaller algorithms, the company says it’ll simply run multiple instances in parallel to boost the chance of returning the right answer.
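The parallel-runs strategy is ordinary probability amplification. A back-of-the-envelope sketch (my own numbers, assuming a correct answer can be recognized once produced, e.g. by checking it classically):

```python
def success_prob(p_single, n_runs):
    """Probability that at least one of n independent runs returns
    the correct answer, given per-run success probability p_single.
    Assumes a correct answer can be verified once produced."""
    return 1.0 - (1.0 - p_single) ** n_runs

# A 10%-reliable instance, run 32 times in parallel:
print(round(success_prob(0.10, 32), 3))  # 0.966
```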

Computing with atoms

Atom Computing, as its name implies, has chosen neutral atoms as its qubit of choice (there are other companies that are working with ions). These systems rely on a set of lasers that create a series of locations that are energetically favorable for atoms. Left on their own, atoms will tend to fall into these locations and stay there until a stray gas atom bumps into them and knocks them out.

Eureka: With GPT-4 overseeing training, robots can learn much faster

On Friday, researchers from Nvidia, UPenn, Caltech, and the University of Texas at Austin announced Eureka, an algorithm that uses OpenAI’s GPT-4 language model to design training goals (called “reward functions”) that enhance robot dexterity. The work aims to bridge the gap between high-level reasoning and low-level motor control, allowing robots to learn complex tasks rapidly using massively parallel simulations that run many trials simultaneously. According to the team, Eureka outperforms human-written reward functions by a substantial margin.

“Leveraging state-of-the-art GPU-accelerated simulation in Nvidia Isaac Gym,” writes Nvidia on its demonstration page, “Eureka is able to quickly evaluate the quality of a large batch of reward candidates, enabling scalable search in the reward function space.”
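A reward function in this setting is just a scalar scoring of a simulator state. The sketch below is a hand-written, hypothetical example of the kind of function Eureka searches over (the function name, arguments, and coefficients are my own illustration, not output from Eureka or the Isaac Gym API):

```python
import math

def reach_reward(end_effector_pos, goal_pos, joint_velocities):
    """Hypothetical dense reward for a simulated reaching task:
    reward closeness to the goal, lightly penalize jerky motion."""
    distance = math.dist(end_effector_pos, goal_pos)  # Euclidean distance
    proximity = math.exp(-5.0 * distance)             # 1.0 at the goal, decays with distance
    effort = 0.01 * sum(v * v for v in joint_velocities)
    return proximity - effort

# Resting exactly at the goal scores the maximum of 1.0:
print(reach_reward((0.3, 0.2, 0.1), (0.3, 0.2, 0.1), (0.0, 0.0, 0.0)))  # 1.0
```

Eureka's contribution is having GPT-4 propose and iteratively refine many such candidate functions, scored against simulator rollouts.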

Finding flows of a Navier–Stokes fluid through quantum computing

This looks awesome :3

There is great interest in using quantum computers to efficiently simulate a quantum system’s dynamics, as existing classical computers cannot do this. Little attention, however, has been given to quantum simulation of a classical nonlinear continuum system such as a viscous fluid, even though this too is hard for classical computers. Such fluids obey the Navier–Stokes nonlinear partial differential equations, whose solution is essential to the aerospace industry, weather forecasting, plasma magneto-hydrodynamics, and astrophysics. Here we present a quantum algorithm for solving the Navier–Stokes equations. We test the algorithm by using it to find the steady-state inviscid, compressible flow through a convergent-divergent nozzle when a shockwave is (is not) present.
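For context, the shock-free quasi-one-dimensional nozzle case has a classical closed form that any solver, quantum or not, can be checked against: the isentropic area–Mach relation. A sketch of that standard gas-dynamics baseline (my own illustration, not the paper's quantum algorithm):

```python
def area_ratio(mach, gamma=1.4):
    """Isentropic area-Mach relation A/A* for quasi-1D nozzle flow,
    with ratio of specific heats gamma (1.4 for air)."""
    term = (2.0 / (gamma + 1.0)) * (1.0 + 0.5 * (gamma - 1.0) * mach ** 2)
    return (1.0 / mach) * term ** ((gamma + 1.0) / (2.0 * (gamma - 1.0)))

print(round(area_ratio(1.0), 6))  # sonic throat: A/A* = 1.0
print(round(area_ratio(2.0), 6))  # supersonic exit at Mach 2: A/A* = 1.6875
```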

Artificial intelligence predicts the future of artificial intelligence research

It has become nearly impossible for human researchers to keep track of the overwhelming abundance of scientific publications in the field of artificial intelligence and to stay up-to-date with advances.

Scientists in an international team led by Mario Krenn from the Max Planck Institute for the Science of Light have now developed an AI algorithm that not only helps researchers orient themselves systematically but also predictively guides them in the direction their own research field is likely to evolve. The work was published in Nature Machine Intelligence.

In the field of artificial intelligence (AI) and machine learning (ML), the number of publications is growing exponentially, approximately doubling every 23 months. For human researchers, it is nearly impossible to keep up with progress and maintain a comprehensive overview.
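A 23-month doubling time compounds quickly; a two-line sketch of what that figure implies:

```python
def growth_factor(years, doubling_months=23.0):
    """How many times the AI/ML publication count multiplies over
    `years`, given the reported ~23-month doubling time."""
    return 2.0 ** (years * 12.0 / doubling_months)

print(round(growth_factor(10), 1))  # a decade of 23-month doublings: ~37.2x
```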

Redefining the Fabric of Reality: The Growing Evidence for a Simulated Universe

New research on information entropy may offer evidence for the theory that our universe is a sophisticated simulation, with deep implications for various fields, from biology to cosmology.

The simulated universe theory implies that our universe, with all its galaxies, planets and life forms, is a meticulously programmed computer simulation. In this scenario, the physical laws governing our reality are simply algorithms. The experiences we have are generated by the computational processes of an immensely advanced system.

While inherently speculative, the simulated universe theory has gained attention from scientists and philosophers due to its intriguing implications. The idea has made its mark in popular culture, across movies, TV shows, and books – including the 1999 film The Matrix.
