
If you wanted to, you could access an “evil” version of OpenAI’s ChatGPT today—though it’s going to cost you. It also may not be legal, depending on where you live.

However, getting access is a bit tricky. You’ll have to find the right web forums with the right users. One of those users might have a post marketing a private and powerful large language model (LLM). You’ll connect with them on an encrypted messaging service like Telegram, where they’ll ask you for a few hundred dollars in cryptocurrency in exchange for the LLM.

Once you have access to it, though, you’ll be able to use it for all the things that ChatGPT or Google’s Bard prohibits you from doing: having conversations about any illicit or ethically dubious topic under the sun, learning how to cook meth or create pipe bombs, or even fueling a cybercriminal enterprise by way of phishing schemes.

A new proposal spells out the very specific ways companies should evaluate AI security and enforce censorship in AI models.

Ever since the Chinese government passed a law on generative AI back in July, I’ve been wondering how exactly China’s censorship machine would adapt for the AI era.

Last week we got some clarity about what all this may look like in practice.

Conservation laws are central to how we understand the universe, and scientists have now expanded our understanding of these laws in quantum mechanics.

A conservation law in physics describes how certain quantities or properties, such as mass-energy, momentum, and electric charge, are preserved in isolated physical systems over time.

Conservation laws are fundamental to our understanding of the universe because they define the processes that can or cannot occur in nature. For example, the conservation of momentum reveals that within a closed system, the sum of all momenta remains unchanged before and after an event, such as a collision.
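As a quick illustration (the numbers below are hypothetical, not drawn from the research being reported), conservation of momentum for a two-body collision in one dimension can be stated and checked like this:

```latex
% Total momentum of an isolated system is the same before and after a collision:
\[
  m_1 v_1 + m_2 v_2 = m_1 v_1' + m_2 v_2'
\]
% Worked example: a 2 kg body moving at 3 m/s hits a 1 kg body at rest and they stick together.
% Before: (2 kg)(3 m/s) + (1 kg)(0 m/s) = 6 kg·m/s; after, the combined 3 kg mass must carry the same 6 kg·m/s.
\[
  (2\,\mathrm{kg})(3\,\mathrm{m/s}) + (1\,\mathrm{kg})(0\,\mathrm{m/s}) = (3\,\mathrm{kg})\,v'
  \quad\Longrightarrow\quad v' = 2\,\mathrm{m/s}
\]
```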

There is evidence that some form of conscious experience is present by birth, and perhaps even in late pregnancy, an international team of researchers from Trinity College Dublin and colleagues in Australia, Germany and the U.S. has found.

The findings, published today in Trends in Cognitive Sciences, have important clinical, ethical and potentially legal implications, according to the authors.

In the study, titled “Consciousness in the cradle: on the emergence of infant experience,” the researchers argue that by birth the infant’s developing brain is capable of conscious experiences that can make a lasting imprint on their developing sense of self and understanding of their environment.

Over the past few years, we have taken a gigantic leap forward in our decades-long quest to build intelligent machines: the advent of the large language model, or LLM.

This technology, based on research that tries to model the human brain, has led to a new field known as generative AI — software that can create plausible and sophisticated text, images and computer code at a level that mimics human ability.

Businesses around the world have begun to experiment with the new technology in the belief it could transform media, finance, law and professional services, as well as public services such as education. The LLM is underpinned by a scientific development known as the transformer model, introduced by Google researchers in 2017.
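For a concrete sense of what the transformer’s core operation looks like, here is a minimal, illustrative sketch of scaled dot-product self-attention, the mechanism at the heart of that 2017 work. The function name, shapes and toy data are our own choices for illustration, not the API of any particular library:

```python
# Minimal sketch of scaled dot-product attention: each token's query is compared
# against every token's key, and the resulting weights mix the value vectors.
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Return an attention-weighted combination of V for each row (token) of Q."""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                      # query-key similarity
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)       # softmax over each row
    return weights @ V                                    # blend values by weight

# Toy example: 3 tokens, each represented by a 4-dimensional embedding.
rng = np.random.default_rng(0)
x = rng.normal(size=(3, 4))
print(scaled_dot_product_attention(x, x, x))             # self-attention over the tokens
```

In a real transformer, learned projections produce separate query, key and value matrices, and many such attention heads run in parallel over long sequences; this sketch keeps only the central idea.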

Now that computer-generated imaging is accessible to anyone with a weird idea and an internet connection, the creation of “AI art” is raising questions—and lawsuits. The key questions seem to be 1) how does it actually work, 2) what work can it replace, and 3) how can the labor of artists be respected through this change?

The lawsuits over AI turn, in large part, on copyright. These copyright issues are so complex that we’ve devoted a whole, separate post to them. Here, we focus on thornier non-legal issues.

How Do AI Art Generators Work?