Have you ever looked at something and been creeped out by its almost, but not quite, human appearance? Be honest: does the Sophia robot creep you out? People tend to find things that are humanlike, but not quite human, creepy. The feeling can be triggered by robots, CGI animation, theme-park animatronics, dolls, and even digital assistants. The concept is called the “uncanny valley,” and, believe it or not, it is a significant reason why many AI projects fail.
The uncanny valley describes the relationship between the degree to which an object resembles a human being and the emotional response humans have to that object.
Midjourney is one of the leading drivers of the emerging technology of using artificial intelligence (AI) to create visual imagery from text prompts. The San Francisco-based startup recently made news as the engine behind the artwork that won an award in a Colorado state fair competition, and that’s unlikely to be the last complicated issue that AI art will face in the coming years.
Midjourney differentiates itself from others in the space by emphasizing painterly aesthetics in the images it produces.
Serial entrepreneur David Holz explains the goals and methods of the revolutionary text-to-image platform and his vision for the future of human imagination.
Earlier this summer, a piece generated by an AI text-to-image application won a prize in a state fair art competition, prying open a Pandora’s Box of issues about the encroachment of technology into the domain of human creativity and the nature of art itself.
Art professionals are increasingly concerned that text-to-image platforms will render hundreds of thousands of well-paid creative jobs obsolete.
From assembly lines to warehouses, robots have been used to automate work for decades. Today, demand for automation is growing rapidly, but there’s one big problem: today’s robots are expensive to build and complicated to set up. They rely on complex hardware and need skilled engineers to program them for specific tasks. To meet demand in an increasingly automated world, robotic arms have to become more affordable, lighter, and easier to use.
Ally Robotics, a startup specializing in AI-powered robotic arms, is working on doing just that. Ally is ushering in a new era of robotics, giving early investors a unique opportunity to join this golden age.
Conversational AI is a subset of artificial intelligence (AI) that allows consumers to interact with computer applications as if they were interacting with another human. According to Deloitte, the global conversational AI market is set to grow by 22% between 2022 and 2025 and is estimated to reach $14 billion by 2025.
By providing enhanced language customization for a vast and highly diverse set of hyper-local audiences, conversational AI has practical applications in areas such as financial services, hospital wards, and conferences, often taking the form of a translation app or a chatbot. According to Gartner, 70% of white-collar workers already interact regularly with conversational platforms, but this is just a drop in the ocean of what could unfold this decade.
Despite the exciting potential within the AI space, there is one significant hurdle: the data used to train conversational AI models does not adequately account for the subtleties of dialect, language, speech patterns, and inflection.
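To make that hurdle concrete, here is a minimal sketch of how an intent classifier trained on only one register of English can stumble on dialectal phrasings it never saw. The training phrases, intents, and test queries are toy, hypothetical examples for illustration, not any vendor’s real data; the sketch uses scikit-learn.

```python
# Toy sketch: an intent classifier trained only on "standard" phrasings,
# illustrating how dialect variation can trip up conversational AI models
# whose training data lacks it. All data here is hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Training data covers a single register of English (assumption for illustration).
train_texts = [
    "I want to check my account balance",
    "Show me my balance",
    "Transfer money to savings",
    "Move funds to my savings account",
]
train_intents = ["balance", "balance", "transfer", "transfer"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_texts, train_intents)

# Colloquial or dialectal phrasings share little vocabulary with the training
# set, so the classifier's confidence collapses toward a coin flip.
for query in ["How much dough I got?", "Wanna shift some cash over"]:
    probs = model.predict_proba([query])[0]
    print(query, "->", dict(zip(model.classes_, probs.round(2))))
```

A production system would use far richer models, but the failure mode is the same: if dialects and speech patterns are absent from the training data, the model has nothing to match them against.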
“The reality is, this is an alien technology that allows for superpowers,” Emad Mostaque, CEO of Stability AI, the company that has funded the development of Stable Diffusion, tells The Verge. “We’ve seen three-year-olds to 90-year-olds able to create for the first time. But we’ve also seen people create amazingly hateful things.”
Although momentum behind AI-generated art has been building for a while, the release of Stable Diffusion might be the moment the technology really takes off. It’s free to use, easy to build on, and puts fewer barriers in the way of what users can generate. That makes what happens next difficult to predict.
The key difference between Stable Diffusion and other AI art generators is the focus on open source. Even Midjourney — another text-to-image model that’s being built outside of the Big Tech compound — doesn’t offer such comprehensive access to its software.
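As a concrete illustration of that openness, the few lines below sketch how anyone with a suitable GPU can run Stable Diffusion through Hugging Face’s diffusers library. The model ID and prompt here are illustrative choices, not the only way in.

```python
# A minimal sketch of building on Stable Diffusion's openly released weights
# via the diffusers library. Assumes a CUDA-capable GPU; prompt is illustrative.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",   # publicly available weights
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

# Generate an image from a text prompt and save it to disk.
image = pipe("a painterly landscape at dusk, oil on canvas").images[0]
image.save("landscape.png")
```

That the whole loop fits in a dozen lines, with no API key or waitlist, is what “fewer barriers” means in practice.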
The National Institutes of Health will invest $130 million over four years, pending the availability of funds, to accelerate the widespread use of artificial intelligence (AI) by the biomedical and behavioral research communities. The NIH Common Fund’s Bridge to Artificial Intelligence (Bridge2AI) program is assembling team members from diverse disciplines and backgrounds to generate tools, resources, and richly detailed data that are responsive to AI approaches. At the same time, the program will ensure its tools and data do not perpetuate inequities or ethical problems that may occur during data collection and analysis. Through extensive collaboration across projects, Bridge2AI researchers will create guidance and standards for the development of ethically sourced, state-of-the-art, AI-ready data sets that have the potential to help solve some of the most pressing challenges in human health — such as uncovering how genetic, behavioral, and environmental factors influence a person’s physical condition throughout their life.
“Generating high-quality ethically sourced data sets is crucial for enabling the use of next-generation AI technologies that transform how we do research,” said Lawrence A. Tabak, D.D.S., Ph.D., Performing the Duties of the Director of NIH. “The solutions to long-standing challenges in human health are at our fingertips, and now is the time to connect researchers and AI technologies to tackle our most difficult research questions and ultimately help improve human health.”
Social media users may trust artificial intelligence (AI) as much as human editors to flag hate speech and harmful content, according to researchers at Penn State.
The researchers said that when users think about positive attributes of machines, like their accuracy and objectivity, they show more faith in AI. However, if users are reminded about the inability of machines to make subjective decisions, their trust is lower.
The findings may help developers design better AI-powered content curation systems that can handle the large amounts of information currently being generated while avoiding the perception that the material has been censored, or inaccurately classified, said S. Shyam Sundar, James P. Jimirro Professor of Media Effects in the Donald P. Bellisario College of Communications and co-director of the Media Effects Research Laboratory.
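One way developers might act on these findings is to make explicit which decisions the machine handles and which go to a person. The sketch below is a design assumption inspired by the research, not the researchers’ system: it auto-flags only high-confidence cases and routes ambiguous, subjective ones to human review. The model name and threshold are illustrative.

```python
# A minimal human-in-the-loop triage sketch: let an AI model flag clear-cut
# cases and defer subjective, low-confidence ones to human moderators.
# Model choice and threshold are illustrative assumptions.
from transformers import pipeline

# Any text-classification model for harmful content could slot in here;
# "unitary/toxic-bert" is one publicly available example.
classifier = pipeline("text-classification", model="unitary/toxic-bert")

REVIEW_THRESHOLD = 0.90  # below this confidence, defer to a human

def triage(post: str) -> str:
    result = classifier(post)[0]  # e.g. {"label": "toxic", "score": 0.97}
    if result["score"] >= REVIEW_THRESHOLD:
        return f"auto-flagged as {result['label']}"
    return "sent to human review"  # subjective call: keep a person in the loop

print(triage("example user post goes here"))
```

Surfacing this division of labor to users, so they see the machine’s objectivity applied only where it is strong, is one way to earn the trust the Penn State study describes.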