
Text-to-image AI exploded this year as technical advances greatly enhanced the fidelity of art that AI systems could create. Controversial as systems like Stable Diffusion and OpenAI’s DALL-E 2 are, platforms including DeviantArt and Canva have adopted them to power creative tools, personalize branding and even ideate new products.

But the tech at the heart of these systems is capable of far more than generating art. Called diffusion, it’s being used by some intrepid research groups to produce music, synthesize DNA sequences and even discover new drugs.

So what is diffusion, exactly, and why is it such a massive leap over the previous state of the art? As the year winds down, it’s worth taking a look at diffusion’s origins and how it advanced over time to become the influential force that it is today. Diffusion’s story isn’t over — refinements on the techniques arrive with each passing month — but the last year or two especially brought remarkable progress.
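In brief, a diffusion model is trained by gradually corrupting data with Gaussian noise over many small steps while a network learns to reverse each step; generation then amounts to running that learned denoiser backward from pure noise. Below is a toy sketch of the forward (noising) half in plain Python. The schedule values are illustrative assumptions in the spirit of the original DDPM paper, not the settings of any particular system.

```python
# Toy sketch of diffusion's forward (noising) process, assuming only numpy.
# A real system pairs this with a neural network trained to undo each step.
import numpy as np

rng = np.random.default_rng(0)

T = 1000                                # number of noising steps (assumed)
betas = np.linspace(1e-4, 0.02, T)      # per-step noise schedule (DDPM-style)
alphas_bar = np.cumprod(1.0 - betas)    # cumulative fraction of signal kept

def noise_sample(x0: np.ndarray, t: int) -> np.ndarray:
    """Jump straight to step t: x_t = sqrt(a_bar_t)*x0 + sqrt(1 - a_bar_t)*eps."""
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alphas_bar[t]) * x0 + np.sqrt(1.0 - alphas_bar[t]) * eps

x0 = np.ones(4)                # stand-in for an image, spectrogram, or sequence
print(noise_sample(x0, 10))    # early step: still mostly signal
print(noise_sample(x0, 999))   # final step: essentially pure noise
```

Systems like Stable Diffusion run the learned denoising in a compressed latent space rather than over raw pixels, which is what keeps the process tractable at high resolutions.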

Artificial intelligence has seen many advances recently, with technologies like deepfakes, deep-voice cloning, and GPT-3 changing how we see the world. These technologies bring obvious benefits for workflows and entertainment, but wherever tools like this exist, some will try to use them maliciously. Today we take a look at how AI is giving hackers and cybercriminals new ways to commit fraud, focusing on the story of a $35 million heist pulled off using artificial intelligence and deep-voice software.

0:00 The History of Social Engineering.
1:12 Early Social Engineering Attacks.
5:02 How Hackers are using Artificial Intelligence.
7:37 The $35 Million Heist.


David Chalmers is a professor of philosophy and neural science at New York University, and an honorary professor of philosophy at the Australian National University. He is the co-director of the Center for Mind, Brain, and Consciousness, as well as the PhilPapers Foundation. His research focuses on the philosophy of mind, especially consciousness, and its connection to fields such as cognitive science, physics, and technology. He also investigates areas such as the philosophy of language, metaphysics, and epistemology. With his impressive breadth of knowledge and experience, David Chalmers is a leader in the philosophical community.

The central challenge for consciousness studies is to explain how something immaterial, subjective, and personal can arise out of something material, objective, and impersonal. Thomas Nagel's famous example of the bat illustrates the point: a bat's sensory experience is so different from ours that it is hard to imagine what it is like to be one, and on this line of argument the subjective nature of experience may put the mind-body problem beyond solution. The idea is explored further through philosophical zombies, beings physically and behaviorally indistinguishable from conscious humans yet lacking conscious experience. This bears on the Hard Problem of Consciousness: explaining why and how mental states are linked to neurophysiological activity. The Chinese Room argument serves as a thought experiment for why physical processing alone may be insufficient to produce the subjective, coherent experience we call consciousness. Despite much debate, the Hard Problem remains unsolved. Chalmers has been working on a functional approach to deciding whether large language models are, or could be, conscious.

Filmed at #neurips22


The Future is Upon Us.
DreamFusion: https://bit.ly/3UQWIjh.
Nvidia Get3D: https://bit.ly/3dU9KMa.
Dream Textures AI: Seamless Texture Creator.


Explore Sketch to 3D [Monster Mash]: https://youtu.be/YLGWsMfAc50
See Nvidia Nerf: https://youtu.be/Sp03EJCTrTI


On Thursday, a pair of tech hobbyists, Seth Forsgren and Hayk Martiros, released Riffusion, an AI model that generates music from text prompts by creating a visual representation of sound and converting it to audio for playback. It uses a fine-tuned version of the Stable Diffusion 1.5 image synthesis model, applying visual latent diffusion to sound processing in a novel way.

Since a spectrogram is a type of picture, Stable Diffusion can process it. Forsgren and Martiros trained a custom Stable Diffusion model on example spectrograms linked to descriptions of the sounds or musical genres they represented. With that knowledge, Riffusion can generate new music on the fly from text prompts describing the type of music or sound you want to hear, such as "jazz," "rock," or even typing on a keyboard.
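The round trip at the heart of this idea, audio to image and back, is straightforward to sketch. Below is a minimal illustration using the librosa library; the library choice and parameters are this sketch's assumptions, not Riffusion's actual pipeline (that code is in the repo linked below). A waveform becomes a mel spectrogram, a 2D array a vision model can treat as a grayscale picture, and the Griffin-Lim algorithm recovers audio by iteratively estimating the phase the magnitude-only image discards.

```python
# Sketch: audio -> spectrogram "image" -> audio, assuming librosa and
# soundfile are installed. Mirrors the idea behind Riffusion, not its code.
import numpy as np
import librosa
import soundfile as sf

sr = 22050
t = np.linspace(0.0, 2.0, int(sr * 2.0), endpoint=False)
y = 0.5 * np.sin(2 * np.pi * 440.0 * t)          # 2 s of A440 as a demo clip

# Forward: waveform -> mel spectrogram (2D magnitude array, i.e. an "image").
mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=128)

# A diffusion model would generate or edit `mel` here, as pixels.

# Inverse: spectrogram -> waveform via Griffin-Lim phase estimation.
y_rec = librosa.feature.inverse.mel_to_audio(mel, sr=sr)

sf.write("reconstructed.wav", y_rec, sr)
```

One caveat worth noting: the image stores only magnitudes, so the reconstruction's phase is estimated rather than recovered, which is part of why spectrogram-based generation can sound slightly smeared.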

https://www.riffusion.com.

Examples:
https://www.riffusion.com/?&prompt=bach+on+electone.
https://www.riffusion.com/?&prompt=eminem+style+anger+rap.

Code: https://github.com/hmartiro/riffusion-app.

Spectrogram demo: https://musiclab.chromeexperiments.com/Spectrogram.

0:00 Start.
2:38 Spectrogram demo.
5:48 Riffusion demo.

Inside language models (from GPT to Nova)

The Expanse is one of the seminal sci-fi shows of the past decade. Set centuries in the future, when humans have colonized the solar system, it has been called one of the most scientifically accurate sci-fi shows of all time. But how well does that reputation hold up to scrutiny?


Watch my video about the science of Star Trek’s phasers: https://www.youtube.com/watch?v=i0unFPbKrks.

Watch my video about the science of Star Wars’ lightsabers: https://www.youtube.com/watch?v=O5a7lHh9EpI

Written, directed, & edited by OrangeRiver.
Cinematography by PhobiaSoft https://www.youtube.com/phobiasoft.
Additional photography by OrangeRiver.


My avatars were cartoonishly pornified, while my male colleagues got to be astronauts, explorers, and inventors.

When I tried the new viral AI avatar app Lensa, I was hoping to get results similar to those of my colleagues at MIT Technology Review. The digital retouching app first launched in 2018 but recently became wildly popular thanks to the addition of Magic Avatars, an AI-powered feature that generates digital portraits of people based on their selfies.

But while Lensa generated realistic yet flattering avatars for them (think astronauts, fierce warriors, and cool cover photos for electronic music albums), I got tons of nudes.

Musicians, we have some bad news. AI-powered music generators are here — and it looks like they’re gunning for a strong position in the content-creation industry.

“From streamers to filmmakers to app builders,” claims music generating app Mubert AI, which can transform limited text inputs into a believable-sounding composition, “we’ve made it easier than ever for content creators of all kinds to license custom, high-quality, royalty-free music.”

Of course, computer-generated music has been around for quite some time, drawing on various forms of artificial intelligence to produce results that can sound at once man-made and alien.