For example, the New York Times states: “The AI industry this year is set to be defined by one main characteristic: A remarkably rapid improvement of the technology as advancements build upon one another, enabling AI to generate new kinds of media, mimic human reasoning in new ways and seep into the physical world through a new breed of robot.”

Ethan Mollick, writing in his One Useful Thing blog, takes a similar view: “Most likely, AI development is actually going to accelerate for a while yet before it eventually slows down due to technical or economic or legal limits.”

The year ahead in AI will undoubtedly bring dramatic changes. Hopefully, these will include advances that improve our quality of life, such as the discovery of life-saving new drugs. The most optimistic promises, however, will likely not be realized in 2024, leading to some pullback in market expectations. This is the nature of hype cycles. Hopefully, any such disappointments will not bring about another AI winter.

AI tools like ChatGPT can draft letters, tell jokes and even give legal advice – but only in the form of computerized text.

Now, scientists have created an AI that can imitate human handwriting, which could herald fresh issues regarding fraud and fake documents.

Amazingly, the results are almost indistinguishable from genuine handwriting produced by a human hand.

Companies like OpenAI and Midjourney have opened Pandora’s box, exposing themselves to considerable legal trouble by training their models on vast swaths of the internet while largely turning a blind eye to copyright.

As professor and author Gary Marcus and film industry concept artist Reid Southen, who has worked on several major films for the likes of Marvel and Warner Brothers, argue in a recent piece for IEEE Spectrum, tools like DALL-E 3 and Midjourney could land both companies in a “copyright minefield.”

It’s a heated debate that’s reaching fever pitch. The news comes after the New York Times sued Microsoft and OpenAI, alleging the companies caused “billions of dollars” in damages by training ChatGPT and other large language models on its content without express permission. Well-known authors, including “Game of Thrones” author George R.R. Martin and John Grisham, recently made similar arguments in a separate copyright infringement case.

A new, potentially revolutionary artificial intelligence framework called “Blackout Diffusion” generates images from a completely empty picture, meaning that the machine-learning algorithm, unlike other generative diffusion models, does not require a “random seed” image to get started. Blackout Diffusion, presented at the recent International Conference on Machine Learning (“Blackout Diffusion: Generative Diffusion Models in Discrete-State Spaces”), generates samples comparable to those of current diffusion models such as DALL-E or Midjourney while requiring fewer computational resources.

“Generative modeling is bringing in the next industrial revolution with its capability to assist many tasks, such as generation of software code, legal documents and even art,” said Javier Santos, an AI researcher at Los Alamos National Laboratory and co-author of Blackout Diffusion. “Generative modeling could be leveraged for making scientific discoveries, and our team’s work laid down the foundation and practical algorithms for applying generative diffusion modeling to scientific problems that are not continuous in nature.”

A new generative AI model can create images from a blank frame. (Image: Los Alamos National Laboratory)
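To make the idea concrete, here is a simplified toy sketch, not the Los Alamos team’s published algorithm: in a “blackout”-style discrete diffusion, the forward process randomly removes intensity counts until the image is deterministically all black, and a learned reverse process restores counts step by step, so generation can begin from an exactly empty image rather than a Gaussian-noise seed. The `predict_x0` function below is a hypothetical stand-in for a trained network.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward_blackout(x0, t, T):
    """Forward process: each unit of pixel intensity survives independently
    with probability (1 - t/T), so at t = T the image is deterministically
    all black (all zeros) -- no random seed image is involved."""
    return rng.binomial(x0, 1.0 - t / T)

def reverse_step(x_t, t, predict_x0):
    """One reverse step. `predict_x0` stands in for a trained network
    (hypothetical here) that estimates the clean image from x_t."""
    x0_hat = predict_x0(x_t, t)
    missing = np.clip(x0_hat - x_t, 0, None).astype(int)
    # Restore a 1/t fraction of the still-missing counts, so the expected
    # image interpolates linearly from all-black back to x0_hat.
    return x_t + rng.binomial(missing, 1.0 / t)

def sample(shape, T, predict_x0):
    """Generation starts from an exactly empty (all-black) image."""
    x = np.zeros(shape, dtype=int)
    for t in range(T, 0, -1):
        x = reverse_step(x, t, predict_x0)
    return x

# Toy stand-in "network": always predicts a flat mid-gray image.
toy_predictor = lambda x_t, t: np.full(x_t.shape, 128)
print(sample((4, 4), T=50, predict_x0=toy_predictor))
```

In a real model, `predict_x0` would be a neural network trained against the forward blackout process; the binomial schedule here is chosen only so that the expected counts climb from zero back toward the predicted clean image.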

The Times said OpenAI and Microsoft are advancing their technology through the “unlawful use of The Times’s work to create artificial intelligence products that compete with it” and that this “threatens The Times’s ability to provide that service.”

The New York Times sued OpenAI and Microsoft for copyright infringement on Wednesday, opening a new front in the increasingly intense legal battle over the unauthorized use of published work to train artificial intelligence technologies.

The Times is the first major American media organization to sue the companies, the creators of ChatGPT and other popular AI platforms, over copyright issues associated with its written works. The lawsuit, filed in Federal District Court in Manhattan, contends that millions of articles published by The Times were used to train automated chatbots that now compete with the news outlet as a source of reliable information.

The suit does not include an exact monetary demand. But it says the defendants should be held responsible for “billions of dollars in statutory and actual damages” related to the “unlawful copying and use of The Times’s uniquely valuable works.” It also calls for the companies to destroy any chatbot models and training data that use copyrighted material from The Times.

Regulatory efforts to protect data are making strides globally. Patient data is protected by law in the United States and elsewhere. In Europe the General Data Protection Regulation (GDPR) guards personal data and recently led to a US $1.3 billion fine for Meta. You can even think of Apple’s App Store policies against data sharing as a kind of data-protection regulation.

“These are good constraints. These are constraints society wants,” says Michael Gao, founder and CEO of Fabric Cryptography, one of the startups developing FHE-accelerating chips. But privacy and confidentiality come at a cost: they can make it more difficult to track disease and conduct medical research, they can let bad actors bank undetected, and they can prevent the use of data needed to improve AI.

“Fully homomorphic encryption is an automated solution to get around legal and regulatory issues while still protecting privacy,” says Kurt Rohloff, CEO of Duality Technologies, in Hoboken, N.J., one of the companies developing FHE accelerator chips. His company’s FHE software is already helping financial firms check for fraud and preserving patient privacy in health care research.
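To make “computing on encrypted data” concrete, here is a minimal sketch. A true fully homomorphic scheme, like the lattice-based constructions such chips accelerate, is far too involved for a short example, so this toy uses the classic Paillier cryptosystem, which is homomorphic for addition only: anyone can add two encrypted values without ever seeing the plaintexts. This illustrates the principle, not any vendor’s library or production code.

```python
import math
import secrets

def keygen():
    # Tiny fixed primes for illustration only; real keys use ~1024-bit primes.
    p, q = 1789, 1861
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    g = n + 1
    # mu = (L(g^lam mod n^2))^(-1) mod n, where L(u) = (u - 1) // n
    mu = pow((pow(g, lam, n * n) - 1) // n, -1, n)
    return (n, g), (lam, mu, n)

def encrypt(pub, m):
    n, g = pub
    r = secrets.randbelow(n - 1) + 1          # random blinding factor
    while math.gcd(r, n) != 1:
        r = secrets.randbelow(n - 1) + 1
    return (pow(g, m, n * n) * pow(r, n, n * n)) % (n * n)

def decrypt(priv, c):
    lam, mu, n = priv
    return ((pow(c, lam, n * n) - 1) // n * mu) % n

def add_encrypted(pub, c1, c2):
    # Multiplying Paillier ciphertexts adds the underlying plaintexts.
    n, _ = pub
    return (c1 * c2) % (n * n)

pub, priv = keygen()
c_sum = add_encrypted(pub, encrypt(pub, 42), encrypt(pub, 58))
print(decrypt(priv, c_sum))                   # -> 100, summed while encrypted
```

A fraud-checking or medical-research pipeline built on full FHE works on the same principle, except that both addition and multiplication are available on ciphertexts, which is what makes arbitrary computation possible and what makes the schemes expensive enough to motivate dedicated accelerator chips.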