
OpenAI, which released the viral ChatGPT chatbot last year, has unveiled a tool intended to help determine whether text was authored by an artificial intelligence program and passed off as human.

The tool will flag content written by OpenAI’s products as well as other AI authoring software. However, the company said “it still has a number of limitations — so it should be used as a complement to other methods of determining the source of text instead of being the primary decision-making tool.”

We’re launching a classifier trained to distinguish between AI-written and human-written text.

We’ve trained a classifier to distinguish between text written by a human and text written by AIs from a variety of providers. While it is impossible to reliably detect all AI-written text, we believe good classifiers can inform mitigations for false claims that AI-generated text was written by a human: for example, running automated misinformation campaigns, using AI tools for academic dishonesty, and positioning an AI chatbot as a human.

Our classifier is not fully reliable. In our evaluations on a “challenge set” of English texts, our classifier correctly identifies 26% of AI-written text (true positives) as “likely AI-written,” while incorrectly labeling human-written text as AI-written 9% of the time (false positives). Our classifier’s reliability typically improves as the length of the input text increases. Compared to our previously released classifier, this new classifier is significantly more reliable on text from more recent AI systems.

Stanford researchers created DetectGPT, a tool to help teachers and others identify content generated by ChatGPT and similar large language models (LLMs).

ChatGPT has raised concerns since its introduction. Following the controversy, OpenAI, the maker of ChatGPT, confirmed that it is working on a tool to detect ChatGPT-generated content.

DetectGPT is a tool designed to detect content generated by ChatGPT and similar tools. The researchers found that text generated by LLMs like ChatGPT tends to "occupy negative curvature areas of the model's log probability function."
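To make the curvature idea concrete, the sketch below scores a passage by comparing its log probability under a language model with the average log probability of lightly perturbed rewrites of that passage; if the original scores noticeably higher, it likely sits at a local peak of the log probability function, which DetectGPT associates with machine-generated text. This is a simplified illustration, not the researchers' implementation: it assumes the Hugging Face transformers library, uses GPT-2 as the scoring model, and substitutes crude word dropout for the T5 mask-filling perturbations described in the paper.

```python
# Minimal sketch of a DetectGPT-style curvature test (illustrative only).
# Assumptions: Hugging Face `transformers` and `torch` are installed, GPT-2 is
# the scoring model, and random word dropout stands in for the paper's
# T5-based mask-filling perturbations.
import random
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def log_prob(text: str) -> float:
    """Average per-token log probability of `text` under the scoring model."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(ids, labels=ids)
    return -out.loss.item()  # loss is the mean negative log-likelihood

def perturb(text: str, drop: float = 0.15) -> str:
    """Crude perturbation: randomly drop a fraction of the words."""
    words = text.split()
    kept = [w for w in words if random.random() > drop]
    return " ".join(kept) if kept else text

def curvature_score(text: str, n_perturbations: int = 20) -> float:
    """Log prob of the original minus the mean log prob of its perturbations.
    A large positive value suggests the text sits at a local peak (a negative
    curvature region), which DetectGPT associates with model-generated text."""
    original = log_prob(text)
    perturbed = [log_prob(perturb(text)) for _ in range(n_perturbations)]
    return original - sum(perturbed) / len(perturbed)

if __name__ == "__main__":
    sample = "The quick brown fox jumps over the lazy dog near the river bank."
    print(f"curvature score: {curvature_score(sample):.3f}")
```

In the actual method, a threshold on a normalized version of this score separates likely machine-generated passages from human-written ones; the toy perturbation used here will give much noisier results.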

Sergio Peralta, a 15-year-old boy in Tennessee, now has a robotic hand thanks to his classmates. It was one of the teachers in the school's engineering program who suggested that his classmates could help him by developing one, and a group of high school students designed and built the robotic hand for their new classmate.

“In the first days of school, I honestly felt like hiding my hand,” he said to CBS News.



By now you have probably heard about ChatGPT (and used it!). Even for those familiar with AI tools, ChatGPT generated a wow moment. Perhaps it is the sheer breadth of possible applications, the accessibility, or the ease of use. In any event, people are scrambling to figure out what it means for them and their businesses. There appear to be three main reactions to date: ignore, ban (with or without detection), or embrace. While completely understandable as short-term reactions, the first two are not practical in the intermediate or long term. The technology is too powerful, too easy to use, and too helpful to too many people.


What does ChatGPT mean for how we consider human intelligence? The key to benefiting from these technologies is to better understand what humans are capable of.

According to the Financial Times, investments in generative AI exceeded $2 billion in 2022, and the Wall Street Journal reported that a potential sale of some shares would value OpenAI at an impressive $29 billion. Clearly, this indicates the enormous interest from investors and corporations in generative AI technology. As the world continues to embrace technology and automation, businesses are beginning to explore the possibilities of generative AI. This type of artificial intelligence is on the cusp of creating autonomous, self-sustaining, digital-only enterprises that can interact with customers without the need for active human intervention.

Generative AI is quickly being adopted as enterprises begin to use it for a variety of tasks, including marketing, customer service, sales, learning, and client relationships. This type of AI can create marketing content, generate pitch documents and product ideas, and craft sophisticated advertising campaigns, all tailored to help improve conversion rates and drive more revenue.

Generative AI companies are beginning to see massive success in venture capital, with many raising large sums of money and achieving high valuations. Per TechCrunch, Jasper, a copywriting assistant, recently raised $125 million at a $1.5 billion valuation, while Hugging Face raised $100 million at a $2 billion valuation and Stability AI raised $101 million at a $1 billion valuation. In addition, Inflection AI raised $225 million at a post-money valuation of $1 billion, according to TechCrunch. These successes can be compared with OpenAI, which in 2019 received more than $1 billion in funding from Microsoft at a $25 billion valuation.

Google's MusicLM has not yet been released, due to copyright concerns. Google researchers say MusicLM is a model that generates high-fidelity music from text descriptions such as "a calming violin melody backed by a distorted guitar riff". You can find the details on GitHub.


This article discusses the risks of generative AI in the music industry, puts a spotlight on Google's MusicLM developments, and encourages leaders in the music industry to think harder about the future of their field.

My last article focused on the recent announcement of Google's MusicLM. Although the model is not accessible to the public due to copyright issues, it offers new insight into how AI is disrupting the value of human talent in the musical field.

Music has been core to humankind for millennia. The earliest known piece of music, a Hurrian hymn, was discovered in the 1950s on a clay tablet inscribed in cuneiform; it is the oldest surviving melody, at over 3,400 years old. Songs are humanity's way of communicating stories and capturing much of what we know as humans.


This article continues to explore the impact of AI on the music industry and looks at some of the pros and cons, reinforcing the need for stronger legal frameworks and copyright protections for musicians.