
This video was created using multiple AI tools. The script was generated with ChatGPT, the narration voice was generated with Elevenlabs.io, the background audio was generated with the AudioLDM model, and finally the images were created with Stable Diffusion using the Illuminati Diffusion v1.1 model. The script itself served as the source for the prompts at the image-generation stage.

There was still some human input. In particular, I generated several images for each part of the script and chose the most appealing ones. I also manually combined the narration with the background music. But for the most part, each step of the process could be completely automated.

You can follow me here to see more of my work:
Twitter: https://twitter.com/volotat
Github: https://github.com/volotat
Medium: https://medium.com/@AlexeyBorsky

Conor Russomanno, Founder and CEO of OpenBCI; Eva Esteban, Embedded Software Engineer at OpenBCI

Galea is an award-winning platform that merges next-generation biometrics with mixed reality. It is the first device to integrate a wide range of physiological signals, including EEG, EMG, EDA, PPG, and eye-tracking, into a single headset. In this session, Conor and Eva will provide a live demonstration of the device and its capabilities, showcasing its potential for a variety of applications, from gaming to training and rehabilitation. They will give an overview of the different hardware and software components of the system, highlighting how it can be used to analyze user experiences in real time. Attendees will get an opportunity to ask questions at the end.

John Danaher, Senior Lecturer in Law at the National University of Ireland (NUI) Galway:

“Understanding Techno-Moral Revolutions”

Talk held on August 24, 2021 for Colloquium of the Center for Humans and Machines at the Max Planck Institute for Human Development, Berlin.

It is common to use ethical norms and standards to critically evaluate and regulate the development and use of emerging technologies like AI and robotics. Indeed, the past few years have seen something of an explosion of interest in the ethical scrutiny of technology. What this emerging field of machine ethics tends to overlook, however, is the potential to use the development of novel technologies to critically evaluate our existing ethical norms and standards. History teaches us that social morality (the set of moral beliefs and practices shared within a given society) changes over time. Technology has sometimes played a crucial role in facilitating these historical moral revolutions. How will it do so in the future? Can we provide any meaningful answers to this question? This talk will argue that we can, and will outline several tools for thinking about the mechanics of technologically mediated moral revolutions.

Not yet peer-reviewed, but a submitted paper.

The ‘presented images’ were shown to a group of humans. The ‘reconstructed images’ were produced by feeding the resulting fMRI signals into Stable Diffusion.

In other words, #stablediffusion literally read people’s minds.

Source 👇

Call it naive, call it crazy, but I think we have a real chance to tackle aging in this century. And though it’s not easy — it’s very simple.

If you have seen the banner of this channel, it says it all. But in this video I go deeper into my personal story and motivation. This way I hope you can understand why I’m doing what I’m doing.

So pick your role and let’s work!
Worst-case scenario: we’ll live an extra 20 healthy years. Best case… well, we might stop or reverse aging altogether.

Requirements to cure aging:

► Skip the waitlist by signing up for Masterworks here: https://masterworks.art/ainews.
Purchase shares in great masterpieces from artists like Pablo Picasso, Banksy, Andy Warhol, and more. See important Masterworks disclosures: https://www.masterworks.com/about/disclaimer?utm_source=aine…disclaimer.

Premium Robots: https://taimine.com/
Deep Learning AI Specialization: https://imp.i384100.net/GET-STARTED

GenAug, developed by Meta AI and the University of Washington, utilizes pre-trained text-to-image generative AI models to enable imitation-based learning in real-world robots. Stanford AI researchers have proposed a method called ATCON to drastically improve the quality of attention maps and classification performance on unseen data. Google’s new SingSong AI can generate instrumental music that complements your singing.

AI News Timestamps:

What if an AI could interpret your imagination, turning images in your mind’s eye into reality? While that sounds like a detail in a cyberpunk novel, researchers have now accomplished exactly this, according to a recently published paper.

Researchers found that they could reconstruct high-resolution and highly accurate images from brain activity by using the popular Stable Diffusion image generation model, as outlined in a paper published in December. The authors wrote that unlike previous studies, they didn’t need to train or fine-tune the AI models to create these images.

The researchers—from the Graduate School of Frontier Biosciences at Osaka University—said that they first predicted a latent representation, which is a model of the image’s data, from fMRI signals. Then, the model was processed and noise was added to it through the diffusion process. Finally, the researchers decoded text representations from fMRI signals within the higher visual cortex and used them as input to produce a final constructed image.
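The first stage of this pipeline, mapping fMRI signals to a latent image representation, can be sketched as a simple linear (ridge) regression. The following is a minimal toy illustration, not the authors' code: all shapes and data are synthetic placeholders, and in the actual study the predicted latents seed Stable Diffusion's diffusion process rather than being used directly.

```python
import numpy as np

# Toy sizes (hypothetical): 1,000 fMRI voxels mapped to a 64-dim image latent.
rng = np.random.default_rng(0)
n_samples, n_voxels, latent_dim = 200, 1000, 64

# Synthetic training data: voxel activity X and corresponding image latents Z.
X = rng.standard_normal((n_samples, n_voxels))
W_true = rng.standard_normal((n_voxels, latent_dim))
Z = X @ W_true + 0.1 * rng.standard_normal((n_samples, latent_dim))

# Stage 1: ridge regression from fMRI activity to image latents (closed form).
alpha = 1.0
W = np.linalg.solve(X.T @ X + alpha * np.eye(n_voxels), X.T @ Z)

# Predicted latents; in the paper these are noised and then denoised by
# Stable Diffusion, conditioned on text embeddings decoded from the
# higher visual cortex, to produce the final reconstructed image.
z_pred = X @ W
print(z_pred.shape)
```

An analogous linear decoder maps signals from the higher visual cortex to text embeddings, which condition the diffusion model during denoising.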