
The suit claims generative AI art tools violate copyright law by scraping artists’ work from the web without their consent.

A trio of artists have launched a lawsuit against Stability AI and Midjourney, creators of AI art generators Stable Diffusion and Midjourney, and artist portfolio platform DeviantArt, which recently created its own AI art generator, DreamUp.

The artists — Sarah Andersen, Kelly McKernan, and Karla Ortiz — allege that these organizations have infringed the rights of “millions of artists” by training their AI tools on five billion images scraped from the web “with­out the con­sent of the orig­i­nal artists.”


AI art gets its first major copyright lawsuit.

They say that actors ought to immerse themselves fully in their roles. Uta Hagen, the acclaimed Tony Award-winning actress and legendary acting teacher, put it this way: “It’s not about losing yourself in the role, it’s about finding yourself in the role.”

In today’s column, I’m going to take you on a journey through how the latest in Artificial Intelligence (AI) can be used for role-playing. This is not merely play-acting. Instead, people are opting to use a type of AI known as generative AI, including the headline-grabbing AI app ChatGPT, as a means of seeking self-growth via role-playing.


You might be wondering why I didn’t showcase a more alarming example of generative AI role-playing. I could do so, and you can readily find such examples online. For example, there are fantasy-style role-playing games that have the AI portray a magical character with amazing capabilities, all written with a fluency on par with a human player. The AI, in its role, might for example try to eliminate the human player within the role-playing scenario, or might berate the human during the game.

My aim here was to illuminate the notion that role-playing doesn’t necessarily have to be the kind that clobbers someone over the head and announces itself to the world at large. There are subtle versions of role-playing that generative AI can undertake. Overall, whether the generative AI is full-on role-playing or performing in a restricted mode, the question still stands as to what kinds of mental health impacts this functionality might portend. There are the good, the bad, and the ugly associated with generative AI and role-playing games.

In the first case of its kind, artificial intelligence (AI) will be present throughout an entire U.S. court proceeding, when it helps to defend against a speeding ticket.

San Francisco-based DoNotPay has developed “the world’s first robot lawyer” – an AI that can be installed on a mobile device. The company’s stated goal is to “level the playing field and make legal information and self-help accessible to everyone.”

The company has built similar tools before: it has used AI-generated form letters and chatbots to help people secure and recover refunds, for instance for onboard Wi-Fi that failed to work.

Many people have reacted to this innovation by noting that it may harm lawyers’ business, particularly lawyers who have no knowledge of artificial intelligence.

The eerie new capabilities of artificial intelligence are about to show up inside a courtroom — in the form of an AI chatbot lawyer that will soon argue a case in traffic court.

That’s according to Joshua Browder, the founder of a consumer-empowerment startup who conceived of the scheme.

Sometime next month, Browder is planning to send a real defendant into a real court armed with a recording device and a set of earbuds. Browder’s company will feed audio of the proceedings into an AI that will in turn spit out legal arguments; the defendant, he says, has agreed to repeat verbatim the outputs of the chatbot to an unwitting judge.

The biggest obstacle is that each robotics lab has its own idea of what a conscious robot looks like. There are also moral implications to building robots that have consciousness. Will they have rights, like in Bicentennial Man?

Considerations about conscious robots have been the domain of science fiction for decades. Isaac Asimov wrote several novels, including I, Robot, that examined the implications from the perspectives of law, society, and family, raising a lot of moral questions. Experts in ethical technology have considered and expanded upon these questions as scientists like those in the Columbia University lab work toward building more intelligent machines.

Science fiction has also brought us killer machines like those in The Terminator, and conscious robots sound like a plausible way to get them. Humans sometimes learn bad ideas and act upon them, and there is no reason to believe that robots would not fall into the same trap. Some of science’s greatest minds have warned against getting carried away with artificial intelligence.

The US Supreme Court on Monday rejected a bid by NSO Group to block a WhatsApp lawsuit accusing the Israeli tech firm of allowing mass cyberespionage of journalists and human rights activists.

The Supreme Court denied NSO’s plea for legal immunity and ruled that the case, which targets the company’s Pegasus software, can continue in a California federal court, a court filing showed.

Pegasus gives its government customers—which have allegedly included Mexico, Hungary, Morocco and India—near-complete access to a target’s device, including their personal data, photos, messages and location.

In this #webinar, Dr Vincenzo Sorrentino from the Department of Biochemistry and the Healthy Longevity Translational Research Programme at the Yong Loo Lin School of Medicine shared his research on the relationship between metabolism, nutrition, and proteostasis and their impact on health and ageing. He also engaged in a discussion about the role of mitochondrial proteostasis in ageing and related diseases.

Register for upcoming #HealthyLongevity #webinar sessions at https://nus-sg.zoom.us/webinar/register/7916395807744/WN__sypkX6ZSomc7cGAkK3LbA

#NUSMedicine #webinarseries.
