
We need to keep up with China in human enhancement and biotechnology.


Tristan Harris and Aza Raskin discuss how existing A.I. capabilities already pose catastrophic risks to a functional society, how A.I. companies are caught in a race to deploy as quickly as possible without adequate safety measures, and what it would mean to upgrade our institutions to a post-A.I. world.

This presentation is from a private gathering in San Francisco on March 9th, 2023, with leading technologists and decision-makers who can influence the future of large-language-model A.I.s. It was given before the launch of GPT-4.

We encourage viewers to consider calling their political representatives to advocate for holding hearings on AI risk and creating adequate guardrails.

For the podcast version, please visit: https://www.humanetech.com/podcast/the-ai-dilemma.

Although many people could benefit from cell therapy, many can’t get it. “The entire industry is focused on ensuring safety while increasing accessibility,” says Evan Zynda, PhD, senior staff scientist at Thermo Fisher Scientific. “When it comes to developing and manufacturing cell therapies, the main challenges to these goals are manufacturing inefficiencies, complex and manual processes that require human intervention and introduce failure modes, and a lack of standardized workflows for manufacturing, especially as emerging modalities are still being defined.”

In fact, making a cell therapy is extremely complicated. “We estimate the cell therapy–manufacturing process may have upwards of 40 process steps, which is not only labor intensive but creates opportunities for errors and contamination that lead to failures,” Zynda says. “By aseptically closing and automating the manufacturing process, we’re reducing the need for the highly specialized labor required to produce these therapies, thereby eliminating touchpoints, reducing expenses, and ultimately increasing the reproducibility and predictability of the process.”

Human sensory systems are very good at recognizing objects that we see or words that we hear, even if the object is upside down or the word is spoken by a voice we’ve never heard.

Computational models known as deep neural networks can be trained to do the same thing, correctly identifying an image of a dog regardless of what color its fur is, or a word regardless of the pitch of the speaker’s voice. However, a new study from MIT neuroscientists has found that these models often also respond the same way to images or words that have no resemblance to the target.

When these neural networks were used to generate an image or a word that they responded to in the same way as a specific natural input, such as a picture of a bear, most of them generated images or sounds that were unrecognizable to human observers. This suggests that these models build up their own idiosyncratic “invariances”—meaning that they respond the same way to stimuli with very different features.
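
To make this concrete, here is a minimal sketch of how such a model-matched stimulus (a "metamer") can be generated: start from noise and optimize the input until a pretrained network's activations match those produced by a reference image. The model (a torchvision ResNet-18), the layer used, and the optimization settings are illustrative assumptions, not those used in the MIT study.

```python
import torch
import torchvision.models as models

# Pretrained classifier used only as an example; the study's models differ.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

# Treat the activations just before the final classification layer as the
# model's "response" that the generated image must reproduce.
feature_extractor = torch.nn.Sequential(*list(model.children())[:-1])

reference = torch.rand(1, 3, 224, 224)  # stand-in for a natural image, e.g. a bear photo
with torch.no_grad():
    target = feature_extractor(reference)

metamer = torch.rand(1, 3, 224, 224, requires_grad=True)  # start from noise
optimizer = torch.optim.Adam([metamer], lr=0.05)

for _ in range(500):
    optimizer.zero_grad()
    loss = torch.nn.functional.mse_loss(feature_extractor(metamer), target)
    loss.backward()
    optimizer.step()
    metamer.data.clamp_(0, 1)  # keep pixel values in a valid range

# The model now responds to `metamer` almost exactly as it does to `reference`,
# even though the optimized image typically looks like noise to a person.
```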

Microsoft has unveiled a new dataset to help build interactive AI assistants for everyday tasks.

Extensive dataset of egocentric videos

According to Microsoft researchers Xin Wang and Neel Joshi, the dataset, called “HoloAssist,” is the first of its kind to include egocentric videos of humans performing physical tasks, as well as associated instructions from a human tutor.

The new book Minding the Brain from Discovery Institute Press is an anthology of 25 renowned philosophers, scientists, and mathematicians who seek to address that question. Materialism shouldn't be the only option for how we think about ourselves or the universe at large. Contributor Angus Menuge, a philosopher from Concordia University Wisconsin, writes:

Neuroscience in particular has implicitly dualist commitments, because the correlation of brain states with mental states would be a waste of time if we did not have independent evidence that these mental states existed. It would make no sense, for example, to investigate the neural correlates of pain if we did not have independent evidence of the existence of pain from the subjective experience of what it is like to be in pain. This evidence, though, is not scientific evidence: it depends on introspection (the self becomes aware of its own thoughts and experiences), which again assumes the existence of mental subjects. Further, Richard Swinburne has argued that scientific attempts to show that mental states are epiphenomenal are self-refuting, since they require that mental states reliably cause our reports of being in those states. The idea, therefore, that science has somehow shown the irrelevance of the mind to explaining behavior is seriously confused.

The AI optimists can’t get away from the problem of consciousness. Nor can they ignore the unique capacity of human beings to reflect back on themselves and ask questions that are peripheral to their survival needs. Functions like that can’t be defined algorithmically or by a materialistic conception of the human person. To counter the idea that computers can be conscious, we must cultivate an understanding of what it means to be human. Then maybe all the technology humans create will find a more modest, realistic place in our lives.

At the same time, Mudrik has been trying to figure out what this diversity of theories means for AI. She’s working with an interdisciplinary team of philosophers, computer scientists, and neuroscientists who recently put out a white paper that makes some practical recommendations on detecting AI consciousness. In the paper, the team draws on a variety of theories to build a sort of consciousness “report card”—a list of markers that would indicate an AI is conscious, under the assumption that one of those theories is true. These markers include having certain feedback connections, using a global workspace, flexibly pursuing goals, and interacting with an external environment (whether real or virtual).

In effect, this strategy recognizes that the major theories of consciousness have some chance of turning out to be true—and so if more theories agree that an AI is conscious, it is more likely to actually be conscious. By the same token, a system that lacks all those markers can only be conscious if our current theories are very wrong. That’s where LLMs like LaMDA currently are: they don’t possess the right type of feedback connections, use global workspaces, or appear to have any other markers of consciousness.
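
As a toy illustration of that aggregation logic, the sketch below scores a system by the fraction of markers it exhibits; the marker names and the equal weighting are assumptions made for illustration, not the white paper's actual rubric.

```python
# Toy "report card": the more theory-derived markers a system exhibits,
# the higher its score. Marker names and equal weighting are assumptions.
CONSCIOUSNESS_MARKERS = [
    "recurrent_feedback_connections",
    "global_workspace",
    "flexible_goal_pursuit",
    "interaction_with_environment",
]

def consciousness_report_card(system_properties: set) -> float:
    """Return the fraction of markers the system exhibits (0.0 to 1.0)."""
    hits = sum(marker in system_properties for marker in CONSCIOUSNESS_MARKERS)
    return hits / len(CONSCIOUSNESS_MARKERS)

# A present-day LLM, as described above, exhibits none of these markers.
print(consciousness_report_card({"next_token_prediction"}))  # 0.0
```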

The trouble with consciousness-by-committee, though, is that this state of affairs won’t last. According to the authors of the white paper, there are no major technological hurdles in the way of building AI systems that score highly on their consciousness report card. Soon enough, we’ll be dealing with a question straight out of science fiction: What should one do with a potentially conscious machine?

Adobe will premiere the first-ever TV commercial powered by its Firefly generative AI during high-profile sports broadcasts on Monday night. The commercial for Adobe Photoshop highlights creative capabilities enabled by the company’s AI technology.

Set to air during MLB playoffs and Monday Night Football, two of the most-watched live events on television, the new Adobe spot will showcase Photoshop’s Firefly-powered Generative Fill feature. Generative Fill uses AI to transform images based on text prompts.
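
For a sense of the underlying technique, the sketch below performs prompt-driven inpainting with an open-source pipeline (Hugging Face diffusers). Firefly itself is not exposed through this interface; the model name, file names, and prompt are assumptions used only to show how generative fill works in general.

```python
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

# Open-source inpainting model used for illustration; not Adobe's Firefly.
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",
    torch_dtype=torch.float16,
).to("cuda")

image = Image.open("photo.png").convert("RGB").resize((512, 512))  # original picture
mask = Image.open("mask.png").convert("RGB").resize((512, 512))    # white = area to regenerate

# The masked region is re-synthesized to match the text prompt.
result = pipe(
    prompt="a hot air balloon drifting over snowy mountains",
    image=image,
    mask_image=mask,
).images[0]

result.save("filled.png")
```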

With Adobe’s new commercial, generative AI will enter the mainstream spotlight, reaching audiences beyond just tech circles. While early adopters have embraced AI tools, a recent study found 44% of U.S. workers have yet to use generative AI, indicating its capabilities remain unknown to many.

A battery-less RFID tag could do the job just as well as a GPS landing module. The researchers have further refined how the tag works.

A collaboration between researchers at The University of Tokyo and telecommunications company NTT in Japan has led to the development of a radio-frequency identification (RFID)-based guidance system for autonomous drones, a press release said.

The use of drones for civil applications has been on the rise and is expected to increase further as countries open up more airspace to autonomous flying vehicles. Conventionally, drones have relied on imaging to determine their location, but as piloting control shifts from humans to the machines themselves,…

