
General Information — https://neane.ru/rus/3/member/dmitriy…Dmitriy

Dmitriy N’Elpin – arrangement, synthesizers, drums, vocals, vocals — vocoder, mastering.
Inesa Shaurouskaya – synthesizers, vocals — vocoder, special effects.
Andrey Grozovskiy – video editing.

Dmitriy N’Elpin – https://www.facebook.com/dmitriy.nelpin.
Inesa Shaurouskaya – https://www.facebook.com/inesa.shaurouskaya.
Andrey Grozovskiy – https://www.facebook.com/andrey.grozovskiy.
Fan club — https://www.facebook.com/groups/240334858090267/?ref=share&mibextid=l066kq.

Official website of the band KRAFTWERK — https://kraftwerk.com/

This is a recording of the SingularityNET Ecosystem leaders meeting held on Monday, October 31st, 2022, including updates on project progress, exciting news, and discussions around key initiatives.

SingularityNET is a decentralized marketplace for artificial intelligence. We aim to create the world’s global brain with a full-stack AI solution powered by a decentralized protocol.

We gathered the leading minds in machine learning and blockchain to democratize access to AI technology. Now anyone can take advantage of a global network of AI algorithms, services, and agents.

Website: https://singularitynet.io.

This sounds a lot like cryptomining, but it isn't quite. Cryptomining has nothing to do with machine learning algorithms, and, unlike machine learning, its only output is a highly speculative digital commodity called a token, which some people consider valuable and are therefore willing to spend real money on.

This gave rise to a crypto bubble that drove a GPU shortage over the past two years, as cryptominers bought up Nvidia Ampere graphics cards from 2020 through 2022, leaving gamers out in the cold. That bubble has since popped, and GPU stock has stabilized.

But with the rise of ChatGPT, are we about to see a repeat of the past two years? It’s unlikely, but it’s also not out of the question either.

Everyone is now scrambling to integrate AI with as many facets of human life as possible. Neural nets and machine learning can offer greatly improved processing speeds, yet these systems still rely on digital pathways that may never fully mimic the biological structure of the human brain. The next step in AI improvement would be to combine the best of the digital world and the biological world. Some scientists are already experimenting with this possibility, and a new article published in the academic journal Frontiers in Science dives deep into the realm of biocomputers and organoid intelligence (OI).

All AI applications today rely on computing power provided by powerful CPUs or GPUs. OI, on the other hand, seeks to bring "unprecedented advances in computing speed, processing power, data efficiency and storage capabilities" by harnessing the complexity of lab-grown cell cultures repurposed from adult skin cells, consisting of 3D clusters of neurons and other brain cells.

To coincide with the rollout of the ChatGPT API, OpenAI today launched the Whisper API, a hosted version of the open source Whisper speech-to-text model that the company released in September.

Priced at $0.006 per minute, Whisper is an automatic speech recognition system that OpenAI claims enables “robust” transcription in multiple languages as well as translation from those languages into English. It takes files in a variety of formats, including M4A, MP3, MP4, MPEG, MPGA, WAV and WEBM.
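The per-minute pricing and accepted formats above can be sketched in code. The `whisper-1` model name and the `openai` Python package are the documented interface; the cost helper, the extension check, and the filename `meeting.mp3` are illustrative assumptions, and actual billing granularity may differ:

```python
# Per-minute price quoted above.
WHISPER_PRICE_PER_MINUTE = 0.006

# Input formats listed above.
SUPPORTED_EXTENSIONS = {"m4a", "mp3", "mp4", "mpeg", "mpga", "wav", "webm"}


def estimate_cost(duration_seconds: float) -> float:
    """Rough transcription cost at $0.006 per minute of audio."""
    return round(duration_seconds / 60 * WHISPER_PRICE_PER_MINUTE, 6)


def is_supported(filename: str) -> bool:
    """Check whether a file's extension is in Whisper's accepted list."""
    return filename.rsplit(".", 1)[-1].lower() in SUPPORTED_EXTENSIONS


if __name__ == "__main__":
    # Hypothetical API usage; requires the `openai` package and a real key.
    # from openai import OpenAI
    # client = OpenAI()  # reads OPENAI_API_KEY from the environment
    # with open("meeting.mp3", "rb") as f:
    #     transcript = client.audio.transcriptions.create(
    #         model="whisper-1", file=f
    #     )
    # print(transcript.text)
    print(estimate_cost(600))  # cost for ten minutes of audio
```

For example, a one-hour recording would come to roughly $0.36 at this rate.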

Countless organizations have developed highly capable speech recognition systems, which sit at the core of software and services from tech giants like Google, Amazon and Meta. But what makes Whisper different is that it was trained on 680,000 hours of multilingual and "multitask" data collected from the web, according to OpenAI president and chairman Greg Brockman, which leads to improved recognition of unique accents, background noise and technical jargon.