
A Chinese court has ruled on a case of copyright infringement by a generative AI service, the first effective ruling of its kind globally, providing a judicial answer to the question of whether content generated by AI service providers infringes copyright, media reported on Monday.

According to the 21st Century Business Herald, the Guangzhou Internet Court ruled that an AI company had infringed the plaintiff’s copyright and adaptation rights in the Ultraman works while providing generative AI services, and should bear the relevant civil liability.

At the center of the case is the blockbuster IP Ultraman. The copyright owner of the “Ultraman” works had exclusively licensed the copyright in the series’ images to the plaintiff, while the defendant company operated a website offering AI conversation and AI image-generation services.

Chayka argues that cultivating our own personal taste is important, not because one form of culture is demonstrably better than another, but because that slow and deliberate process is part of how we develop our own identity and sense of self. Take that away, and you really do become the person the algorithm thinks you are.

As Chayka points out in Filterworld, algorithms “can feel like a force that only began to exist … in the era of social networks” when in fact they have “a history and legacy that has slowly formed over centuries, long before the Internet existed.” So how exactly did we arrive at this moment of algorithmic omnipresence? How did these recommendation machines come to dominate and shape nearly every aspect of our online and (increasingly) our offline lives? Even more important, how did we ourselves become the data that fuels them?

These are some of the questions Chris Wiggins and Matthew L. Jones set out to answer in How Data Happened: A History from the Age of Reason to the Age of Algorithms. Wiggins is a professor of applied mathematics and systems biology at Columbia University. He’s also the New York Times’ chief data scientist. Jones is now a professor of history at Princeton. Until recently, they both taught an undergrad course at Columbia, which served as the basis for the book.

For better or worse, generative AI continues to evolve by leaps and bounds with each passing day, a fact Google’s DeepMind team has proven once again with the reveal of Genie, a new model capable of creating entire playable games from a single image prompt. Trained without any action labels on a large dataset of publicly available Internet videos, Genie can turn any image, whether a real-world photograph, a sketch, an AI-generated image, or a painting, into a simple 2D platformer, an approach the team notes is versatile and applicable across domains. The researchers also highlight that the new model opens the door for future AI agents to be trained “in a never-ending curriculum of new, generated worlds.”
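The article describes Genie only at a high level, but the interaction pattern it implies, prompting the model with a single image and then stepping the generated world using a small vocabulary of learned, discrete latent actions, can be sketched roughly as follows. This is a minimal conceptual sketch, not DeepMind’s actual API: every name, the eight-action vocabulary, and the placeholder frame strings are illustrative assumptions.

```python
# Conceptual sketch only: a toy stand-in for a Genie-style world model that is
# prompted with one image and then rolled out frame by frame, conditioned on
# discrete latent actions learned from unlabeled video. All names here are
# hypothetical; this is not DeepMind's interface.
import random

NUM_LATENT_ACTIONS = 8  # assumption: a small discrete latent-action vocabulary


class ToyWorldModel:
    """Stand-in for an image-conditioned, action-controllable video model."""

    def __init__(self, prompt_image):
        # The prompt image (photo, sketch, painting, ...) becomes frame 0.
        self.frames = [prompt_image]

    def step(self, latent_action):
        # A real model would decode the next frame from (history, action);
        # here we append a placeholder string to keep the sketch runnable.
        next_frame = f"frame_{len(self.frames)}_after_action_{latent_action}"
        self.frames.append(next_frame)
        return next_frame


# Roll out a short "playable" episode from a single image prompt.
world = ToyWorldModel(prompt_image="sketch_of_a_platformer_level.png")
for _ in range(5):
    action = random.randrange(NUM_LATENT_ACTIONS)  # a player or agent would choose this
    print(world.step(action))
```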

A Massachusetts Institute of Technology (MIT) student has created a device that allows humans to communicate with machines using only their minds, and it truly is remarkable.

Arnav Kapur’s device, called AlterEgo, is a wearable headset that lets users communicate with technology without speaking a word.

So how does it work?