Our deepfake problem is about to get worse: Samsung engineers have now developed realistic talking heads that can be generated from a single image, so AI can even put words in the mouth of the Mona Lisa.
The new algorithms, developed by a team from the Samsung AI Center and the Skolkovo Institute of Science and Technology, both in Moscow, work best with a variety of sample images taken at different angles – but they can be quite effective with just one picture to work from, even a painting.
Scientists from Maastricht University have developed a method to look into a listener's brain and read out who was speaking and what was said. With the help of neuroimaging and data-mining techniques, the researchers mapped the brain activity associated with the recognition of speech sounds and voices.
In their Science article “‘Who’ is Saying ‘What’? Brain-Based Decoding of Human Voice and Speech,” the four authors demonstrate that speech sounds and voices can be identified by means of a unique ‘neural fingerprint’ in the listener’s brain. In the future this new knowledge could be used to improve computer systems for automatic speech and speaker recognition.
Seven study subjects listened to three different speech sounds (the vowels /a/, /i/ and /u/), spoken by three different people, while their brain activity was mapped with functional MRI (fMRI). With the help of data-mining methods, the researchers developed an algorithm to translate this brain activity into unique patterns that determine the identity of a speech sound or a voice. The acoustic characteristics of the vocal cord vibrations were found to shape these unique neural patterns.
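The article doesn't disclose the researchers' actual algorithm, but the general idea of decoding a "neural fingerprint" can be sketched with a simple nearest-centroid classifier over simulated activation vectors. Everything below (the 6-dimensional synthetic activity, the noise level, the function names) is an illustrative assumption, not the authors' method; note how the same vectors can be decoded either for the vowel or for the speaker.

```python
# Illustrative sketch (not the Maastricht method): nearest-centroid
# decoding of simulated fMRI activation vectors ("neural fingerprints").
import random

random.seed(0)
VOWELS = ["a", "i", "u"]
SPEAKERS = ["s1", "s2", "s3"]

def simulate_activity(vowel, speaker, noise=0.3):
    """Synthetic 6-dim activation: 3 dims driven by vowel, 3 by speaker."""
    v = [1.0 if vowel == x else 0.0 for x in VOWELS]
    s = [1.0 if speaker == x else 0.0 for x in SPEAKERS]
    return [x + random.gauss(0, noise) for x in v + s]

def centroids(trials, label_of):
    """Average the activation vectors sharing a label -> 'fingerprint'."""
    sums, counts = {}, {}
    for (vowel, speaker), vec in trials:
        key = label_of(vowel, speaker)
        acc = sums.setdefault(key, [0.0] * len(vec))
        for i, x in enumerate(vec):
            acc[i] += x
        counts[key] = counts.get(key, 0) + 1
    return {k: [x / counts[k] for x in v] for k, v in sums.items()}

def decode(vec, fingerprints):
    """Assign the label whose fingerprint is nearest (squared Euclidean)."""
    return min(fingerprints,
               key=lambda k: sum((a - b) ** 2
                                 for a, b in zip(vec, fingerprints[k])))

# Training trials: every vowel/speaker combination, several repetitions.
trials = [((v, s), simulate_activity(v, s))
          for v in VOWELS for s in SPEAKERS for _ in range(20)]
vowel_fp = centroids(trials, lambda v, s: v)      # decode "what"
speaker_fp = centroids(trials, lambda v, s: s)    # decode "who"

test = simulate_activity("i", "s2")
print(decode(test, vowel_fp), decode(test, speaker_fp))
```

The same trick of building one centroid set per label type is what lets a single recording answer both the "who" and the "what" question.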
Scientists found a way to make sense of particularly chaotic events in nature.
Thanks to a new set of equations for modeling turbulence, scientists can now better predict things like how galaxies form in distant space, complex weather patterns here on Earth, and nuclear fusion. According to the research, published this spring in the journal Physical Review Letters, turbulence may start out chaotic but then falls into a more uniform pattern that scientists can readily model and understand.
Rutgers computer scientists used artificial intelligence to control a robotic arm that provides a more efficient way to pack boxes, saving businesses time and money.
Bekris, Abdeslam Boularias and Jingjin Yu, all assistant professors of computer science, formed a team to deal with multiple aspects of the robot packing problem in an integrated way through hardware, 3D perception and robust motion planning.
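The article gives no algorithmic details, but the combinatorial core of packing can be illustrated with the classic first-fit-decreasing heuristic for one-dimensional bin packing: sort items by size, then drop each into the first box with room. This is a generic textbook sketch, not the Rutgers system, which adds 3D geometry, perception and motion planning on top.

```python
# Generic illustration (not the Rutgers algorithm): first-fit-decreasing
# heuristic for packing item volumes into fixed-capacity boxes.
def first_fit_decreasing(volumes, box_capacity):
    """Pack volumes into few boxes; returns the contents of each box."""
    free = []        # remaining capacity of each opened box
    contents = []    # parallel list of packed volumes per box
    for v in sorted(volumes, reverse=True):
        if v > box_capacity:
            raise ValueError(f"item {v} exceeds box capacity")
        for i, space in enumerate(free):
            if v <= space:               # first box it fits in
                free[i] -= v
                contents[i].append(v)
                break
        else:                            # no existing box fits: open a new one
            free.append(box_capacity - v)
            contents.append([v])
    return contents

print(first_fit_decreasing([4, 8, 1, 4, 2, 1], box_capacity=10))
# → [[8, 2], [4, 4, 1, 1]]
```

Sorting largest-first is what makes the greedy pass effective: big items claim boxes early, and small items fill the leftover gaps.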
The field has narrowed in the race to protect sensitive electronic information from the threat of quantum computers, which one day could render many of our current encryption methods obsolete.
As the latest step in its program to develop effective defenses, the National Institute of Standards and Technology (NIST) has winnowed the group of potential encryption tools—known as cryptographic algorithms—down to a bracket of 26. These algorithms are the ones NIST mathematicians and computer scientists consider to be the strongest candidates submitted to its Post-Quantum Cryptography Standardization project, whose goal is to create a set of standards for protecting electronic information from attack by the computers of both tomorrow and today.
“These 26 algorithms are the ones we are considering for potential standardization, and for the next 12 months we are requesting that the cryptography community focus on analyzing their performance,” said NIST mathematician Dustin Moody. “We want to get better data on how they will perform in the real world.”
China isn’t the only country with a draconian “social credit score” system — there’s one quite a bit like it operating in the U.S. Except that it’s being run by American businesses, not the government.
There’s plenty of evidence that retailers have been using a technique called “surveillance scoring” for decades, in which an algorithm assigns each consumer a secret score that determines the price they are offered for the same goods and services.
But the practice might be illegal after all: a California nonprofit called Consumer Education Foundation (CEF) filed a petition yesterday asking the Federal Trade Commission (FTC) to look into the shady practice.
Artificial Intelligence (AI) is an emerging field of computer programming that is already changing the way we interact online and in real life, but the term ‘intelligence’ has been poorly defined. Rather than focusing on smarts, researchers should be looking at the implications and viability of artificial consciousness as that’s the real driver behind intelligent decisions.
Consciousness rather than intelligence should be the true measure of AI. At the moment, despite all our efforts, there’s none.
Significant advances have been made in the field of AI over the past decade, in particular with machine learning, but artificial intelligence itself remains elusive. Instead, what we have is artificial serfs: computers with the ability to trawl through billions of interactions and arrive at conclusions, exposing trends and providing recommendations, but devoid of any real intelligence. What’s needed is artificial awareness.
Elon Musk has called AI the “biggest existential threat” facing humanity and likened it to “summoning a demon,”[1] while Stephen Hawking thought it would be the “worst event” in the history of civilization and could “end with humans being replaced.”[2] Although this sounds alarmist, like something from a science fiction movie, both concerns are founded on a well-established scientific premise found in biology—the principle of competitive exclusion.[3]
It is now possible to take a talking-head style video, and add, delete or edit the speaker’s words as simply as you’d edit text in a word processor. A new deepfake algorithm can process the audio and video into a new file in which the speaker says more or less whatever you want them to.
For as smart as artificial intelligence systems seem to get, they’re still easily confused by hackers who launch so-called adversarial attacks: cyberattacks that trick algorithms into misinterpreting their input data, sometimes to disastrous ends.
In order to bolster AI’s defenses from these dangerous hacks, scientists at the Australian research agency CSIRO say in a press release they’ve created a sort of AI “vaccine” that trains algorithms on weak adversaries so they’re better prepared for the real thing — not entirely unlike how vaccines expose our immune systems to inert viruses so they can fight off infections in the future.