
Ammaar Reshi was playing around with ChatGPT, an AI-powered chatbot from OpenAI, when he started thinking about the ways artificial intelligence could be used to make a simple children’s book to give to his friends. Just a couple of days later, he published a 12-page picture book, printed it, and started selling it on Amazon without ever picking up a pen and paper.

The feat, which Reshi publicized in a viral Twitter thread, is a testament to the incredible advances in AI-powered tools like ChatGPT—which took the internet by storm two weeks ago with its uncanny ability to mimic human thought and writing. But the book, Alice and Sparkle, also renewed a fierce debate about the ethics of AI-generated art. Many argued that the technology preys on artists and other creatives—using their hard work as source material, while raising the specter of replacing them.

GPT-4 is the next large language model from OpenAI after GPT-3 and ChatGPT; it is expected to use as many as 100 trillion parameters and to accept multimodal inputs including audio, text, and video. Researchers have created a soft-robotics device that can heal itself after being wounded and continue moving. And a new memristor-based deep learning system reduces the power required for AI training by a factor of 100,000.


Self-supervised learning is a form of unsupervised learning in which the supervised learning task is constructed from raw, unlabeled data. Supervised learning is effective but usually requires a large amount of labeled data. Acquiring high-quality labeled data is time-consuming and resource-intensive, especially for sophisticated tasks like object detection and instance segmentation, which call for more detailed annotations.

Self-supervised learning aims first to learn usable representations of the data from an unlabeled pool via self-supervision, and then to refine these representations with a small number of labels for supervised downstream tasks such as image classification and semantic segmentation.
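To make "constructing a supervised task from raw, unlabeled data" concrete, here is a minimal toy sketch (an invented masking scheme for illustration, not any specific published method): random entries of a data matrix are hidden, and the original values become the reconstruction targets, so the data supervises itself with no human labels.

```python
import numpy as np

def make_masked_pairs(X, mask_frac=0.25, seed=0):
    """Build a self-supervised task from raw data: hide a random subset
    of entries and use the original values as reconstruction targets.
    Both inputs and labels come from X itself -- no human annotation."""
    rng = np.random.default_rng(seed)
    mask = rng.random(X.shape) < mask_frac   # True where a value is hidden
    inputs = np.where(mask, 0.0, X)          # corrupted view shown to the model
    targets = X                              # the original data is the "label"
    return inputs, targets, mask

X = np.arange(12, dtype=float).reshape(3, 4)
inp, tgt, mask = make_masked_pairs(X)
```

A model trained to recover `tgt` from `inp` learns representations of the data's structure, which can then be fine-tuned with a handful of real labels for a downstream task.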

Self-supervised learning is at the heart of many recent advances in artificial intelligence. However, existing algorithms focus on a single modality (such as images or text) and demand substantial computational resources. Humans, on the other hand, appear to learn far more efficiently than existing AI, and to learn consistently from diverse types of information rather than requiring distinct learning systems for text, speech, and other modalities.

DALL-E 2 transformed the world of art in 2022.

DALL-E is a system that has been around for years, but its successor, DALL-E 2, was launched this year.



DALL-E and DALL-E 2 are machine-learning models created by OpenAI to produce images from language descriptions. These text descriptions are known as prompts. The system can generate realistic images from nothing more than a description of a scene. DALL-E is a neural-network model that creates accurate pictures from short phrases supplied by the user, comprehending language through textual descriptions and through “learning” from the information in the datasets provided by users and developers.
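As a rough illustration of what a prompt looks like in practice, the sketch below constructs a request body in the style of OpenAI's Images API (the prompt text and parameter values here are invented for illustration, and the exact API parameters may change; nothing is sent over the network):

```python
import json

# A text-to-image request is essentially a short natural-language phrase
# plus a few generation options. Example values are hypothetical.
prompt = "an astronaut riding a horse in a photorealistic style"
request = {
    "prompt": prompt,   # the scene description the model turns into an image
    "n": 1,             # number of images to generate
    "size": "512x512",  # output resolution
}
body = json.dumps(request)        # serialized payload
decoded = json.loads(body)        # round-trip check
```

The entire creative interface is the `prompt` string; everything else is a generation knob.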

This technology will not only extend the lifespan of concrete structures, but also promote a circular economy.

Sewer pipe corrosion, or crown corrosion, occurs when sewage pipe material comes into contact with sulphuric acid. The aging pipe material corrodes, and the pipes crack. Over the past few years, engineers have developed sewer bots to inspect sewage pipes and go to places unsafe for humans.

Professor Yan Zhuge, an engineering expert at the University of South Australia, is trialing a novel solution.



But that also means the robots would have to go to places where existing wireless communications cannot reach them. Hurdles abound.

Tweaks to the system fine-tune it on images of spectrograms.

Stable Diffusion has been updated with a fine-tuning of its model on images of spectrograms paired with text, enabling it to generate more precise sounds. The team calls its version of the Stable Diffusion model Riffusion.

All the Stable Diffusion features remain.



Audio processing also takes place, but it happens later in the cycle, downstream of the model.
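To make the image–audio correspondence concrete, the short sketch below (a toy short-time Fourier transform in plain NumPy, not Riffusion's actual pipeline) computes the magnitude spectrogram of a 440 Hz tone — the kind of 2-D "image" that a diffusion model can be fine-tuned to generate, with a separate downstream step turning such images back into sound.

```python
import numpy as np

def spectrogram(signal, win=256, hop=128):
    """Magnitude spectrogram: frequency bins (rows) x time frames (cols).
    This 2-D array is what spectrogram-based diffusion treats as an image."""
    frames = [signal[i:i + win] * np.hanning(win)
              for i in range(0, len(signal) - win + 1, hop)]
    return np.abs(np.fft.rfft(np.array(frames), axis=1)).T

sr = 8000                              # sample rate, Hz
t = np.arange(sr) / sr                 # one second of samples
tone = np.sin(2 * np.pi * 440 * t)     # pure 440 Hz sine
S = spectrogram(tone)
peak_bin = S.mean(axis=1).argmax()
# bin spacing = sr / win = 31.25 Hz, so the 440 Hz tone peaks near bin 14
```

A single horizontal stripe in this image corresponds to a steady pitch; editing or generating the image changes the sound recovered from it.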

The study shows that machine-learning models can help predict COVID-19 infections.

Researchers have discovered a new way to predict which features are most useful in determining test results for COVID-19. The research team, from Florida Atlantic University’s (FAU) College of Engineering and Computer Science in the U.S., used AI to predict positive or negative COVID-19 test results.

The most common techniques currently used to detect COVID-19 are blood tests, also called serology tests, and molecular tests. Because the two assessments use different methods, their results can vary substantially.
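The source doesn't describe the FAU team's exact model, but the general idea of ranking which features are most useful for predicting a test result can be sketched on synthetic data (all feature names, coefficients, and the correlation-based scoring below are invented for illustration, not the study's method):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
# Synthetic cohort: two informative features and one pure-noise feature.
fever   = rng.normal(0, 1, n)
contact = rng.normal(0, 1, n)
noise   = rng.normal(0, 1, n)
latent = 1.5 * fever + 0.8 * contact            # fever matters most by design
y = (latent + rng.normal(0, 0.5, n)) > 0        # positive / negative test label

X = np.column_stack([fever, contact, noise])
names = ["fever", "contact", "noise"]

# Score each feature by its absolute correlation with the label,
# then rank features from most to least predictive.
scores = [abs(np.corrcoef(X[:, j], y.astype(float))[0, 1]) for j in range(3)]
ranking = [names[j] for j in np.argsort(scores)[::-1]]
```

Real studies use richer models (e.g., tree ensembles with built-in importance measures), but the output is the same shape: an ordering of features by how much they help predict the result.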

What if artificial intelligence could help an aspiring author write a novel? Or coach people to improve the quality of their writing? Could machines learn how to make jokes? Inspired by these questions, computer scientist Jiao Sun has been exploring the potential of AI-generated text as a PhD candidate at the University of Southern California (USC).

After a four-month internship at Alexa AI last spring, she is now starting her journey as an Amazon Machine Learning Fellow for the 2022–23 academic year and hopes to continue developing text-generation models that enhance the interaction between humans and AI.

While Sun is passionate about the potential of natural language generation, she also believes it’s important to develop tools that improve human control over machine-created content. She is also cautiously optimistic about the surge in popularity of text-generation models.