“The reality is, this is an alien technology that allows for superpowers,” Emad Mostaque, CEO of Stability AI, the company that has funded the development of Stable Diffusion, tells The Verge. “We’ve seen three-year-olds to 90-year-olds able to create for the first time. But we’ve also seen people create amazingly hateful things.”

Although momentum behind AI-generated art has been building for a while, the release of Stable Diffusion might be the moment the technology really takes off. It’s free to use, easy to build on, and puts fewer barriers in the way of what users can generate. That makes what happens next difficult to predict.

The key difference between Stable Diffusion and other AI art generators is the focus on open source. Even Midjourney — another text-to-image model that’s being built outside of the Big Tech compound — doesn’t offer such comprehensive access to its software.

Back in February, SpaceX introduced a high-performance satellite internet solution specifically designed for businesses. The higher-tier kit is a step above the standard Starlink system, with a larger dish that offers double the antenna capability and faster internet speeds.

As attractive as the service was, however, there was one issue: the high-performance satellite internet system costs a whopping $2,500 up front, plus a very premium $500 per month for the service itself. According to recent updates to the official Starlink website, the high-performance kit is now available to residential customers as well, and it doesn’t even require the $500 monthly fee.

While residential customers who wish to acquire Starlink’s high-performance satellite internet kit would still pay $2,500 for the hardware, they only need to pay the standard $110 per month for the service itself. Residential customers who opt for a high-performance dish could then get more out of Starlink’s capabilities for the same monthly fee.

In a study that truly underscores the profound and devastating impact humans have on the environment, researchers have found that microscopic bugs are evolving to eat plastic.

The study, published in the journal Microbial Ecology, found a growing number of microbes that have evolved to carry a plastic-degrading enzyme. These bugs could hold the key to creating enzymes that break down specific plastics and alleviate the detrimental effects of anthropogenic pollution.

The National Institutes of Health will invest $130 million over four years, pending the availability of funds, to accelerate the widespread use of artificial intelligence (AI) by the biomedical and behavioral research communities. The NIH Common Fund’s Bridge to Artificial Intelligence (Bridge2AI) program is assembling team members from diverse disciplines and backgrounds to generate tools, resources, and richly detailed data that are responsive to AI approaches. At the same time, the program will ensure its tools and data do not perpetuate inequities or ethical problems that may occur during data collection and analysis. Through extensive collaboration across projects, Bridge2AI researchers will create guidance and standards for the development of ethically sourced, state-of-the-art, AI-ready data sets that have the potential to help solve some of the most pressing challenges in human health — such as uncovering how genetic, behavioral, and environmental factors influence a person’s physical condition throughout their life.

“Generating high-quality ethically sourced data sets is crucial for enabling the use of next-generation AI technologies that transform how we do research,” said Lawrence A. Tabak, D.D.S., Ph.D., Performing the Duties of the Director of NIH. “The solutions to long-standing challenges in human health are at our fingertips, and now is the time to connect researchers and AI technologies to tackle our most difficult research questions and ultimately help improve human health.”

AI is both a field of science and a set of technologies that enable computers to mimic how humans sense, learn, reason, and take action. Although AI is already used in biomedical research and healthcare, its widespread adoption has been limited in part by the challenges of applying AI technologies to diverse data types. Routinely collected biomedical and behavioral data sets are often insufficient, meaning they lack important contextual information about the data type, collection conditions, or other parameters. Without this information, AI technologies cannot accurately analyze and interpret data. AI technologies may also inadvertently incorporate bias or inequities unless careful attention is paid to the social and ethical contexts in which the data is collected.

Social media users may trust artificial intelligence (AI) as much as human editors to flag hate speech and harmful content, according to researchers at Penn State.

The researchers said that when users think about positive attributes of machines, like their accuracy and objectivity, they show more faith in AI. However, if users are reminded about the inability of machines to make subjective decisions, their trust is lower.

The findings may help developers design better AI-powered content curation systems that can handle the large amounts of information currently being generated while avoiding the perception that the material has been censored, or inaccurately classified, said S. Shyam Sundar, James P. Jimirro Professor of Media Effects in the Donald P. Bellisario College of Communications and co-director of the Media Effects Research Laboratory.