
Effective compression is about finding patterns to make data smaller without losing information. When an algorithm or model can accurately guess the next piece of data in a sequence, it demonstrates that it is good at spotting these patterns. This links the idea of making good guesses, which is what large language models like GPT-4 do very well, to achieving good compression.

In an arXiv research paper titled “Language Modeling Is Compression,” researchers detail their discovery that the DeepMind large language model (LLM) called Chinchilla 70B can perform lossless compression on image patches from the ImageNet image database to 43.4 percent of their original size, beating the PNG algorithm, which compressed the same data to 58.5 percent. For audio, Chinchilla compressed samples from the LibriSpeech audio data set to just 16.4 percent of their raw size, outdoing FLAC compression at 30.3 percent.

In this case, lower numbers in the results mean more compression is taking place. And lossless compression means that no data is lost during the compression process. It stands in contrast to a lossy technique like JPEG, which discards some data and reconstructs it with approximations during decoding in order to significantly reduce file sizes.
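To make the prediction-compression link concrete: an arithmetic coder driven by a probabilistic model spends roughly -log2(p) bits to encode a symbol the model assigned probability p, so the better the model predicts, the shorter the output. The Python sketch below is a toy illustration of that principle only; it assumes a simple adaptive byte-counting predictor standing in for an LLM, and the function name and order-1 context model are illustrative choices, not anything taken from the paper.

```python
import math
from collections import Counter, defaultdict

def ideal_code_length_bits(data: bytes, order: int = 1) -> float:
    """Bits an arithmetic coder would need when driven by a toy adaptive
    order-`order` byte predictor (hypothetical helper, for illustration)."""
    context_counts = defaultdict(Counter)
    total_bits = 0.0
    for i in range(len(data)):
        context = data[max(0, i - order):i]   # the preceding byte(s)
        counts = context_counts[context]
        symbol = data[i]
        # Laplace-smoothed probability the model assigns to the observed byte
        p = (counts[symbol] + 1) / (sum(counts.values()) + 256)
        total_bits += -math.log2(p)           # Shannon code length for this byte
        counts[symbol] += 1                   # adapt only after coding, so a
                                              # decoder can stay in sync
    return total_bits

text = b"the quick brown fox jumps over the lazy dog " * 50
bits = ideal_code_length_bits(text)
print(f"{len(text)} bytes in, ~{bits / 8:.0f} bytes out "
      f"({bits / len(text) / 8:.1%} of original size)")
```

Because the counts are updated only after each byte is coded, a decoder can rebuild exactly the same probabilities as it goes, which is what keeps the scheme lossless. An LLM-based compressor replaces the counter with the model's next-token probabilities: a stronger predictor assigns higher probability to what actually comes next, and the encoded size shrinks accordingly.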

Ribonucleic acid (RNA) is a molecule found in cells that carries genetic information used to build the proteins necessary for cell function. RNA provides a template for the construction of proteins and is essential for cell and organism life. Immune cells rely on these proteins, including CD8+ (cytotoxic) T cells, which are responsible for killing invading pathogens. Importantly, cytotoxic T cells are a major component of the memory immune response: a pool of T cells specifically equipped to recognize an invader is kept in reserve against future invasion by that particular pathogen. Once these cells are exposed to an invading antigen, or protein, the immune system expands the T cells specific to that antigen and remembers it the next time it enters the body. Vaccines work in a similar way, introducing a foreign antigen to the body so the immune system is ready if the pathogen ever appears in the future. Only a small subset of the T cells that expand survives, however, and it is unclear how this selection occurs.

Recently, a team of researchers at the University of Massachusetts Amherst (UMass) demonstrated that a single strand of RNA governs a T cell's ability to recognize and kill tumors. The strand, known as let-7, is a microRNA, a class of RNA responsible for regulating gene expression. The discovery may improve vaccine development and cellular memory to enhance immunotherapy against cancers. Immunotherapy is a general term for cancer therapies that try to activate the immune system to kill the tumor, as opposed to drugs such as chemotherapy that try to kill the tumor directly with chemicals.

The report, published in Nature Communications, identified that the microRNA let-7 may enhance the memory of T cells. Researchers led by Dr. Leonid Pobezinsky, Associate Professor of Veterinary and Animal Sciences at UMass, further built on our understanding of how T cells form immune memory. Pobezinsky and colleagues found that this small piece of microRNA, conserved throughout evolution, is expressed in memory cells. Additionally, they found that the more let-7 a cell has, the more likely it is to recognize and kill a cancer cell, and the more likely it is to turn into a memory cell after being exposed to an antigen. The regulation of enhanced memory T cells by let-7 is a process key to fighting infections. This is a critical finding, especially because memory cells retain stem-like characteristics and can survive for decades.

Given the staggering pace of generative AI development, it’s no wonder that so many executives are tempted by the possibilities of AI, concerned about finding and retaining qualified workers, and humbled by recent market corrections or missed analyst expectations. They envision a future of work without nearly as many people as today. But this is a miscalculation. Leaders, understandably concerned about missing out on the next wave of technology, are unwittingly making risky bets on their companies’ futures. Here are steps every leader should take to prepare for an uncertain world where generative AI and human workforces coexist but will evolve in ways that are unknowable.


A framework for making plans in the midst of great uncertainty.

That’s one way to talk about other human beings.

As writer Elizabeth Weil notes in a new profile of OpenAI CEO Sam Altman in New York Magazine, the powerful AI executive has a disconcerting penchant for using the term “median human,” a phrase that seemingly equates to a robotic tech bro version of “Average Joe.”

Altman’s hope is that artificial general intelligence (AGI) will have roughly the same intelligence as a “median human that you could hire as a co-worker.”

More than 400 years ago, Galileo showed that many everyday phenomena—such as a ball rolling down an incline or a chandelier gently swinging from a church ceiling—obey precise mathematical laws. For this insight, he is often hailed as the founder of modern science. But Galileo recognized that not everything was amenable to a quantitative approach. Such things as colors, tastes and smells “are no more than mere names,” Galileo declared, for “they reside only in consciousness.” These qualities aren’t really out there in the world, he asserted, but exist only in the minds of creatures that perceive them. “Hence if the living creature were removed,” he wrote, “all these qualities would be wiped away and annihilated.”

Since Galileo’s time the physical sciences have leaped forward, explaining the workings of everything from the tiniest quarks to the largest galaxy clusters. But explaining things that reside “only in consciousness”—the red of a sunset, say, or the bitter taste of a lemon—has proven far more difficult. Neuroscientists have identified a number of neural correlates of consciousness—brain states associated with specific mental states—but have not explained how matter forms minds in the first place. As philosopher David Chalmers asked: “How does the water of the brain turn into the wine of consciousness?” He famously dubbed this quandary the “hard problem” of consciousness.

Scholars recently gathered to debate the problem at Marist College in Poughkeepsie, N.Y., during a two-day workshop focused on an idea known as panpsychism. The concept proposes that consciousness is a fundamental aspect of reality, like mass or electrical charge. The idea goes back to antiquity—Plato took it seriously—and has had some prominent supporters over the years, including psychologist William James and philosopher and mathematician Bertrand Russell. Lately it is seeing renewed interest, especially following the 2019 publication of philosopher Philip Goff’s book Galileo’s Error, which argues forcefully for the idea.

Hey, and we are back … this is Max Flow, and we will get to know more about the information limitations of the psyche.

Neurons are living cells with a metabolism; they need oxygen and glucose to survive, and when they’ve been working hard, we experience fatigue. Every status update we read on social media, every tweet or text message we get from a friend, is competing for resources in our brains.

With such attentional restrictions, it’s clear why many of us feel overwhelmed by managing some of the most basic aspects of life. Our focus is short and erratic, our decision-making abilities go out the window, and unfinished projects begin to pile up.

Attention is the most essential mental resource for any organism. It determines which aspects of the environment we deal with, and most of the time, various automatic, subconscious processes make the correct choice about what gets passed through to our conscious awareness. For this to happen, millions of neurons are constantly monitoring the environment to select the most important things for us to focus on.