
Study: Deep neural networks don’t see the world the way we do

Human sensory systems are very good at recognizing objects that we see or words that we hear, even if the object is upside down or the word is spoken by a voice we’ve never heard.

Computational models known as deep neural networks can be trained to do the same thing, correctly identifying an image of a dog regardless of what color its fur is, or a word regardless of the pitch of the speaker’s voice. However, a new study from MIT neuroscientists has found that these models often also respond the same way to images or words that have no resemblance to the target.

When these neural networks were used to generate an image or a word that they responded to in the same way as a specific natural input, such as a picture of a bear, most of them generated images or sounds that were unrecognizable to human observers. This suggests that these models build up their own idiosyncratic “invariances”—meaning that they respond the same way to stimuli with very different features.
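The method behind this finding is essentially representation inversion: start from noise and nudge the pixels until a chosen layer of the network responds to the synthetic image the way it responds to a natural one. Below is a minimal PyTorch sketch of that general idea; the model, layer, and optimization settings are illustrative assumptions, not the study's actual setup.

```python
# Minimal sketch of "model metamer" generation: optimize a random image so a
# fixed network layer responds to it the way it responds to a reference photo.
# Model choice, layer, and step counts are illustrative, not the study's setup.
import torch
import torchvision.models as models

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()
for p in model.parameters():
    p.requires_grad_(False)

# Capture activations from one intermediate layer via a forward hook.
acts = {}
def hook(_module, _inp, out):
    acts["feat"] = out
model.layer3.register_forward_hook(hook)

reference = torch.rand(1, 3, 224, 224)   # stand-in for a natural image, e.g. a bear photo
model(reference)
target = acts["feat"].detach()

metamer = torch.rand(1, 3, 224, 224, requires_grad=True)
opt = torch.optim.Adam([metamer], lr=0.05)

for step in range(500):
    opt.zero_grad()
    model(metamer.clamp(0, 1))
    loss = torch.nn.functional.mse_loss(acts["feat"], target)
    loss.backward()
    opt.step()

# If the optimized image still looks like noise to a person even though the
# network's responses match, the network's invariances differ from human ones.
```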

Microsoft’s HoloAssist dataset brings AI assistants closer to our daily lives

Microsoft has unveiled a new dataset to help build interactive AI assistants for everyday tasks.

Extensive dataset of egocentric videos

According to Microsoft researchers Xin Wang and Neel Joshi, the dataset, called “HoloAssist,” is the first of its kind to include egocentric videos of humans performing physical tasks, as well as associated instructions from a human tutor.

So AI is “Slightly Conscious” Now?

The new book Minding the Brain from Discovery Institute Press is an anthology in which 25 renowned philosophers, scientists, and mathematicians seek to address that question. Materialism shouldn’t be the only option for how we think about ourselves or the universe at large. Contributor Angus Menuge, a philosopher from Concordia University Wisconsin, writes:

Neuroscience in particular has implicitly dualist commitments, because the correlation of brain states with mental states would be a waste of time if we did not have independent evidence that these mental states existed. It would make no sense, for example, to investigate the neural correlates of pain if we did not have independent evidence of the existence of pain from the subjective experience of what it is like to be in pain. This evidence, though, is not scientific evidence: it depends on introspection (the self becomes aware of its own thoughts and experiences), which again assumes the existence of mental subjects. Further, Richard Swinburne has argued that scientific attempts to show that mental states are epiphenomenal are self-refuting, since they require that mental states reliably cause our reports of being in those states. The idea, therefore, that science has somehow shown the irrelevance of the mind to explaining behavior is seriously confused.

The AI optimists can’t get away from the problem of consciousness. Nor can they ignore the unique capacity of human beings to reflect back on themselves and ask questions that are peripheral to their survival needs. Functions like that can’t be defined algorithmically or by a materialistic conception of the human person. To counter the idea that computers can be conscious, we must cultivate an understanding of what it means to be human. Then maybe all the technology humans create will find a more modest, realistic place in our lives.

Minds of machines: The great AI consciousness conundrum

At the same time, Mudrik has been trying to figure out what this diversity of theories means for AI. She’s working with an interdisciplinary team of philosophers, computer scientists, and neuroscientists who recently put out a white paper that makes some practical recommendations on detecting AI consciousness. In the paper, the team draws on a variety of theories to build a sort of consciousness “report card”—a list of markers that would indicate an AI is conscious, under the assumption that one of those theories is true. These markers include having certain feedback connections, using a global workspace, flexibly pursuing goals, and interacting with an external environment (whether real or virtual).

In effect, this strategy recognizes that the major theories of consciousness have some chance of turning out to be true—and so if more theories agree that an AI is conscious, it is more likely to actually be conscious. By the same token, a system that lacks all those markers can only be conscious if our current theories are very wrong. That’s where LLMs like LaMDA currently are: they don’t possess the right type of feedback connections, use global workspaces, or appear to have any other markers of consciousness.
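As a toy illustration of how such a report card might be applied, the sketch below simply tallies the markers named in the article for a given system. The scoring scheme is an invented simplification for illustration, not the white paper's actual procedure.

```python
# Toy sketch of the "report card" idea: tally how many theory-derived markers
# a system exhibits. Marker names follow the article; the scoring is invented.
MARKERS = [
    "recurrent_feedback_connections",
    "global_workspace",
    "flexible_goal_pursuit",
    "interaction_with_environment",
]

def consciousness_report_card(system_properties):
    """Return the fraction of markers (0.0 to 1.0) present in the system."""
    present = [m for m in MARKERS if m in system_properties]
    return len(present) / len(MARKERS)

# A present-day LLM, as described in the article, shows none of the markers.
print(consciousness_report_card(set()))  # 0.0
```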

The trouble with consciousness-by-committee, though, is that this state of affairs won’t last. According to the authors of the white paper, there are no major technological hurdles in the way of building AI systems that score highly on their consciousness report card. Soon enough, we’ll be dealing with a question straight out of science fiction: What should one do with a potentially conscious machine?

Adobe To Debut First TV Ad Powered By Firefly AI During Sports Primetime

Adobe will premiere the first-ever TV commercial powered by its Firefly generative AI during high-profile sports broadcasts on Monday night. The commercial for Adobe Photoshop highlights creative capabilities enabled by the company’s AI technology.

Set to air during MLB playoffs and Monday Night Football, two of the most-watched live events on television, the new Adobe spot will showcase Photoshop’s Firefly-powered Generative Fill feature. Generative Fill uses AI to transform images based on text prompts.
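Adobe has not published how Firefly's Generative Fill works internally, but the same class of technique, text-prompted inpainting, is available in open source. A rough sketch using the Hugging Face diffusers library follows; the model name, file names, and prompt are illustrative assumptions, not Adobe's pipeline.

```python
# Text-prompted inpainting ("generative fill") with open-source diffusion
# models, as an analogy for what Firefly's feature does. Requires a GPU.
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-inpainting",
    torch_dtype=torch.float16,
).to("cuda")

image = Image.open("scene.png").convert("RGB")   # original photo (hypothetical file)
mask = Image.open("mask.png").convert("RGB")     # white pixels mark the region to fill

result = pipe(
    prompt="a hot-air balloon drifting over the hills",
    image=image,
    mask_image=mask,
).images[0]
result.save("filled.png")
```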

With Adobe’s new commercial, generative AI will enter the mainstream spotlight, reaching audiences beyond just tech circles. While early adopters have embraced AI tools, a recent study found 44% of U.S. workers have yet to use generative AI, indicating its capabilities remain unknown to many.

New tech can guide drones without relying on cameras, GPS

A battery-less RFID tag could do the job just as well as a GPS landing module. The researchers have further refined how the tag works.

A collaboration between researchers at The University of Tokyo and telecommunications company NTT in Japan has led to the development of a radio-frequency identification (RFID)-based guidance system for autonomous drones, a press release said.

The use of drones for civil applications has been on the rise and is expected to increase further as countries become more liberal with the airspace available to autonomous flying vehicles. Conventionally, drones have relied on imaging to determine their location, but as piloting control shifts from humans to the machines themselves, …
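The release does not detail how the tag actually guides a drone. One common way a reader can range a battery-less UHF RFID tag, offered here purely as a generic illustration rather than NTT's published method, is phase-difference-of-arrival across two carrier frequencies:

```python
# Generic illustration of phase-based RFID ranging, not NTT's published method.
import math

C = 299_792_458.0  # speed of light, m/s

def rfid_range_m(phase1_rad, phase2_rad, freq1_hz, freq2_hz):
    """Estimate reader-to-tag distance from backscatter phase at two frequencies.

    The backscatter signal travels to the tag and back, so the measured phase is
    phi = (4*pi*d*f)/c (mod 2*pi). The phase difference across a small frequency
    step df gives d = c*dphi/(4*pi*df), up to an ambiguity every c/(2*df) metres.
    """
    dphi = (phase2_rad - phase1_rad) % (2 * math.pi)
    df = freq2_hz - freq1_hz
    return C * dphi / (4 * math.pi * df)

# Example: phases measured at 902 MHz and 903 MHz give roughly a 6 m range.
print(rfid_range_m(1.20, 1.45, 902e6, 903e6))
```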



New AI tool successfully detects and classifies supernova

A new feat has been achieved in the realm of astronomy: for the first time, a supernova has been observed, identified, and classified through a wholly automated approach, without human participation.

Led by Northwestern University, an international team of scientists has created a cutting-edge artificial intelligence (AI) tool known as the Bright Transient Survey Bot (BTSbot).



AI Deciphers Ancient Scroll Buried in The Ashes of Mount Vesuvius

As you might imagine for a scroll that has been buried under mounds of volcanic ash from Mount Vesuvius for close to 2,000 years, the rolled-up papyrus excavated from the ancient Roman city of Herculaneum is rather difficult to open, let alone read – but AI has found a way.

Scholars from the University of Kentucky launched the Vesuvius Challenge in March, releasing thousands of X-ray images of charred, carbonized Herculaneum scrolls together with untrained artificial intelligence software that could be used to interpret the scans.

Now two students have claimed the first prizes to be awarded: Luke Farritor, a computer science student at the University of Nebraska-Lincoln, and Youssef Nader, a biorobotics grad student at the Free University of Berlin in Germany.
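Publicly, the challenge frames reading as an "ink detection" problem: small 3D patches of the X-ray CT volume, sampled along the virtually unwrapped papyrus surface, are classified as ink or blank papyrus. The sketch below is a minimal, hypothetical version of that framing, not the prize winners' actual models.

```python
# Hypothetical, minimal ink-detection sketch: a tiny 3D CNN that maps a CT
# subvolume to an ink-probability logit. Shapes and architecture are invented.
import torch
import torch.nn as nn

class InkPatchClassifier(nn.Module):
    """One CT subvolume in, one ink-probability logit out."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.head = nn.Linear(32, 1)

    def forward(self, x):                  # x: (batch, 1, depth, height, width)
        return self.head(self.features(x).flatten(1))

model = InkPatchClassifier()
patches = torch.randn(8, 1, 16, 64, 64)    # 8 subvolumes along the papyrus surface
ink_prob = torch.sigmoid(model(patches))   # per-patch probability of carbon ink
print(ink_prob.shape)                       # torch.Size([8, 1])
```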

Automated Production for Cell and Gene Therapy Developers

Automation and sector-wide collaboration will be critical as developers try to move beyond the production challenges that slow growth of the cell and gene therapy sector. So says Julie G. Allickson, PhD, director of Mayo Clinic’s Center for Regenerative Biotherapeutics, who argues that, despite considerable investment in infrastructure, production is still the biggest challenge.

“Both industry and academia are challenged by the lack of manufacturing capacity for cell and gene therapies,” she says, citing plasmid production and viral vector production as examples. “Besides these issues, the scalability of production processes can be difficult, especially when coupled to individually expanded cells. When looking at patient cell variability, the quantity and quality of cells are critical to ensure consistency in the product delivered to the patient,” she says.
