Newswise — Most of modern medicine has physical tests or objective techniques to define much of what ails us. Yet there is currently no blood test, genetic test, or other objective procedure that can definitively diagnose a mental illness, and certainly none that can distinguish between psychiatric disorders with similar symptoms. Experts at the University of Tokyo are combining machine learning with brain imaging tools to redefine the standard for diagnosing mental illnesses.
“Psychiatrists, including me, often talk about symptoms and behaviors with patients and their teachers, friends and parents. We only meet patients in the hospital or clinic, not out in their daily lives. We have to make medical conclusions using subjective, secondhand information,” explained Dr. Shinsuke Koike, M.D., Ph.D., an associate professor at the University of Tokyo and a senior author of the study recently published in Translational Psychiatry.
“Frankly, we need objective measures,” said Koike.
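The article does not describe the study's actual analysis pipeline, but the general approach it alludes to, training a classifier on brain-imaging-derived features to separate diagnostic groups, can be sketched roughly as follows. Everything here (the synthetic feature matrix, the 68-region assumption, the linear SVM) is an illustrative placeholder rather than the researchers' method.

```python
# Minimal sketch: classify two diagnostic groups from imaging-derived features.
# The data below are synthetic stand-ins for measures such as regional
# cortical thickness extracted from MRI scans.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_subjects, n_regions = 120, 68                 # hypothetical: 68 cortical regions
X = rng.normal(size=(n_subjects, n_regions))    # imaging-derived features
y = rng.integers(0, 2, size=n_subjects)         # 0 = disorder A, 1 = disorder B

clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
scores = cross_val_score(clf, X, y, cv=5)       # cross-validated accuracy
print(f"Mean accuracy: {scores.mean():.2f}")
```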
For decades, Hollywood has made millions off of our fears that artificial intelligences such as HAL in 2001: A Space Odyssey and Skynet in The Terminator could one day control us or even wipe out humanity.
Two Chinese air force J-20 stealth fighters have appeared at an air base in China’s far west as the mountain stand-off between India and China enters its fourth month.
The twin-engine J-20s are visible in commercial satellite imagery of Hotan air base, in the Uighur autonomous region of Xinjiang. Chinese social-media users first spotted the planes.
The J-20 deployment, however temporary, signals Beijing’s resolve as China wrestles with India for influence over a disputed region of the Himalayas. But a pair of warplanes, no matter how sophisticated, doesn’t represent much actual combat power.
A program that can automate website development. A bot that writes letters on behalf of nature. An AI-written blog that trended on Hacker News. Those are just some of the recent stories written about GPT-3, the latest contraption of artificial intelligence research lab OpenAI. GPT-3 is the largest language model ever made, and it has triggered many discussions over how AI will soon transform many industries.
But what has been less discussed is how GPT-3 has transformed OpenAI itself. In the process of building the most successful natural language processing system yet created, OpenAI has gradually morphed from a nonprofit AI lab into a company that sells AI services.
The lab is in a precarious position, torn between conflicting goals: developing profitable AI services and pursuing human-level AI for the benefit of all. And hanging in the balance is the very mission for which OpenAI was founded.
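The AI services in question are sold through a hosted API. As a rough illustration, not drawn from the article, a completion request in the Python SDK of that era looked something like this; the prompt and key below are placeholders.

```python
import openai

openai.api_key = "sk-..."  # placeholder account key

# Ask the hosted GPT-3 "davinci" engine to complete a prompt.
response = openai.Completion.create(
    engine="davinci",
    prompt="Write a one-sentence summary of why large language models matter:",
    max_tokens=60,
    temperature=0.7,
)

print(response["choices"][0]["text"].strip())
```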
Long before coronavirus appeared and shattered our pre-existing “normal,” the future of work was a widely discussed and debated topic. We’ve watched automation slowly but surely expand its capabilities and take over more jobs, and we’ve wondered what artificial intelligence will eventually be capable of.
The pandemic swiftly turned the working world on its head, putting millions of people out of a job and forcing millions more to work remotely. But essential questions remain largely unchanged: we still want to make sure we’re not replaced, we want to add value, and we want an equitable society where different types of work are valued fairly.
To address these issues—as well as how the pandemic has impacted them—this week Singularity University held a digital summit on the future of work. Forty-three speakers from multiple backgrounds, countries, and sectors of the economy shared their expertise on everything from work in developing markets to why we shouldn’t want to go back to the old normal.
Aside from staying alive and healthy, the biggest concern most people have during the pandemic is the future of their jobs. Unemployment in the U.S. has skyrocketed, from 5.8 million unemployed in February 2020 to 16.3 million in July 2020, according to the U.S. Bureau of Labor Statistics. But it’s not only the lost jobs that are reshaping work in the wake of COVID-19; the nature of many of the remaining jobs has changed, as remote work becomes the norm. And in the midst of it all, automation has emerged as a potential threat to some workers and a salvation to others. In this issue, we examine this tension and explore the good, bad, and unknown of how automation could affect jobs in the immediate and near future.
Prevailing wisdom says that the wave of new AI-powered automation will follow the same pattern as other technological leaps: it will kill off some jobs but create new (and potentially better) ones. But it’s unclear whether that will hold true this time around. Complicating matters is that at a time when workplace safety has to do with limiting the spread of a deadly virus, automation can play a role in reducing the number of people who are working shoulder to shoulder, keeping workers safe but also eliminating jobs.
Even as automation creates exciting new opportunities, it’s important to bear in mind that those opportunities will not be distributed equally. Some jobs are more vulnerable to automation than others, and uneven access to reskilling and other crucial factors will mean that some workers will be left behind.
Virtual assistants and robots are becoming increasingly sophisticated, interactive and human-like. To fully replicate human communication, however, artificial intelligence (AI) agents should not only be able to determine what users are saying and produce adequate responses, they should also mimic humans in the way they speak.
Researchers at Carnegie Mellon University (CMU) have recently carried out a study aimed at improving how virtual assistants and robots communicate with humans by generating natural gestures to accompany their speech. Their paper, pre-published on arXiv and set to be presented at the European Conference on Computer Vision (ECCV) 2020, introduces Mix-StAGE, a new model that can produce different styles of co-speech gestures that best match the voice of a speaker and what he/she is saying.
“Imagine a situation where you are communicating with a friend in a virtual space through a virtual reality headset,” Chaitanya Ahuja, one of the researchers who carried out the study, told TechXplore. “The headset is only able to hear your voice, but not able to see your hand gestures. The goal of our model is to predict the hand gestures accompanying the speech.”
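Mix-StAGE itself is detailed in the paper; as a loose sketch of the general speech-to-gesture idea rather than the authors' architecture, a model maps a sequence of audio features plus a learned per-speaker style embedding to a sequence of pose keypoints. All dimensions and layer choices below are invented for illustration.

```python
# Toy speech-to-gesture model: audio features + speaker style -> pose sequence.
import torch
import torch.nn as nn

class SpeechToGesture(nn.Module):
    def __init__(self, audio_dim=128, style_dim=16, pose_dim=2 * 52, n_speakers=10):
        super().__init__()
        self.style = nn.Embedding(n_speakers, style_dim)   # one style vector per speaker
        self.encoder = nn.GRU(audio_dim + style_dim, 256, batch_first=True)
        self.decoder = nn.Linear(256, pose_dim)             # (x, y) for 52 joints per frame

    def forward(self, audio_feats, speaker_id):
        # audio_feats: (batch, time, audio_dim); speaker_id: (batch,)
        style = self.style(speaker_id).unsqueeze(1).expand(-1, audio_feats.size(1), -1)
        hidden, _ = self.encoder(torch.cat([audio_feats, style], dim=-1))
        return self.decoder(hidden)                          # (batch, time, pose_dim)

model = SpeechToGesture()
audio = torch.randn(2, 100, 128)                 # 2 clips, 100 frames of audio features
poses = model(audio, torch.tensor([0, 3]))       # gestures in the style of speakers 0 and 3
print(poses.shape)                               # torch.Size([2, 100, 104])
```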
A team of researchers at Stanford University has created an artificial intelligence-based player called the Vid2Player that is capable of generating startlingly realistic tennis matches—featuring real professional players. They have written a paper describing their work and have uploaded it to the arXiv preprint server. They have also uploaded a YouTube video demonstrating their player.
Video game companies have put a lot of time and effort into making their games look realistic, but thus far have found it tough going when depicting human beings. In this new effort, the researchers have taken a different approach to the task—instead of trying to create human-looking characters from scratch, they use sprites, which are characters based on video of real people. The sprites are then pushed into action by a computer using artificial intelligence to mimic the ways a human being moves while playing tennis. The researchers trained their AI system using video of real tennis professionals performing; the footage also provided imagery for the creation of sprites. The result is an interactive player that depicts real professional tennis players such as Roger Federer, Serena Williams, Novak Djokovic and Rafael Nadal in action. Perhaps most importantly, the simulated gameplay is virtually indistinguishable from a televised tennis match.
The Vid2Player is capable of replaying actual matches, but because it is interactive, a user can change the course of the match as it unfolds. Users can change how a player reacts when a ball comes over the net, for example, or how a player plays in general. They can decide which part of the opposite side of the court to aim for, or whether to hit backhand or forehand. They can also slightly alter the course of a real match by allowing a shot that in reality was out of bounds to land magically inside the line. The system also allows for players from different eras to compete. The AI software adjusts for lighting and clothing (if video is used from multiple matches). Because AI software is used to teach the sprites how to play, the actions of the sprites actually mimic the most likely actions of the real player.
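As a toy illustration of that interactivity, and not the Vid2Player system itself, imagine each player's behavior summarized as shot probabilities estimated from annotated match footage, with the user able to override the sampled stroke or target on any shot. All names and numbers below are invented.

```python
# Toy interactive point simulation: sample a player's likely shot from learned
# probabilities, or let the user override stroke and placement.
import random

# Hypothetical per-player shot statistics "learned" from real match video.
PLAYER_MODEL = {
    "Federer":  {"forehand": 0.60, "targets": {"cross-court": 0.50, "down-the-line": 0.30, "body": 0.20}},
    "Williams": {"forehand": 0.55, "targets": {"cross-court": 0.45, "down-the-line": 0.40, "body": 0.15}},
}

def sample(dist):
    """Pick a key from a {option: probability} dict."""
    r, total = random.random(), 0.0
    for option, p in dist.items():
        total += p
        if r <= total:
            return option
    return option

def play_shot(player, stroke_override=None, target_override=None):
    model = PLAYER_MODEL[player]
    stroke = stroke_override or ("forehand" if random.random() < model["forehand"] else "backhand")
    target = target_override or sample(model["targets"])
    return stroke, target

# Replay a rally, letting the user redirect one shot down the line.
print(play_shot("Federer"))                                     # model's most likely behavior
print(play_shot("Williams", target_override="down-the-line"))   # user-controlled placement
```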