
Stable Diffusion 2.0 improves on the software’s jaw-dropping capability for generating AI images

The new AI art software brings “brand new possibilities for creative applications.”

London and San Francisco-based Stability AI, the company that developed Stable Diffusion, an image-generating open-source AI software, has just announced the release of Stable Diffusion 2.0, as per a press statement on the company’s website.

What is Stable Diffusion?



The company’s new open-source offering provides a range of features and improvements over the 1.0 release, including text-to-image models trained with a new text encoder, OpenCLIP, that improves the quality of the generated images.
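For readers who want to try the release, here is a minimal sketch of generating an image with Stable Diffusion 2.0 through the open-source diffusers library. The model id, dtype, and prompt below are assumptions; consult the official release notes for the recommended settings.

# Minimal sketch: text-to-image with Stable Diffusion 2.0 via Hugging Face diffusers.
# The model id "stabilityai/stable-diffusion-2" and float16/CUDA settings are assumptions.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

# Generate and save a single image from a text prompt.
image = pipe("a watercolor painting of a lighthouse at dawn").images[0]
image.save("lighthouse.png")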

Machine learning tools autonomously classify 1,000 supernovae

Astronomers at Caltech have used a machine learning algorithm to classify 1,000 supernovae completely autonomously. The algorithm was applied to data captured by the Zwicky Transient Facility, or ZTF, a sky survey instrument based at Caltech’s Palomar Observatory.

“We needed a helping hand, and we knew that once we trained our computers to do the job, they would take a big load off our backs,” says Christoffer Fremling, a staff astronomer at Caltech and the mastermind behind the new algorithm, dubbed SNIascore. “SNIascore classified its first supernova in April 2021, and, a year and a half later, we are hitting a nice milestone of 1,000 supernovae.”

ZTF scans the night sky every night looking for changes called transient events. These include everything from moving asteroids to black holes that have just eaten stars to exploding stars known as supernovae. ZTF sends out hundreds of thousands of alerts a night to astronomers around the world, notifying them of these transient events. The astronomers then use other telescopes to follow up and investigate the nature of the changing objects. So far, ZTF data have led to the discovery of thousands of supernovae.
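The article does not describe SNIascore’s internals, so purely as an illustration of the kind of model that can label follow-up spectra automatically, here is a minimal binary classifier sketch. The input format, network shape, and fake data are assumptions, not the Caltech pipeline.

# Illustrative only: a tiny classifier that labels a supernova spectrum as Type Ia or not.
# SNIascore's real architecture and inputs are not described in the article above.
import numpy as np
import torch
import torch.nn as nn

class SpectrumClassifier(nn.Module):
    def __init__(self, n_bins: int = 1024):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_bins, 256), nn.ReLU(),
            nn.Linear(256, 64), nn.ReLU(),
            nn.Linear(64, 1),  # single logit for "is Type Ia"
        )

    def forward(self, flux: torch.Tensor) -> torch.Tensor:
        return self.net(flux)

# Fake, normalized spectrum standing in for real ZTF follow-up data.
model = SpectrumClassifier()
spectrum = torch.from_numpy(np.random.rand(1, 1024).astype(np.float32))
prob_ia = torch.sigmoid(model(spectrum)).item()
print(f"P(Type Ia) = {prob_ia:.2f}")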

Why Scientists are Giving Robots Human Muscles

Human-robot hybrids are advancing quickly, but the applications aren’t just for complete synthetic humans. There’s a lot we can learn about ourselves in the process.

Hosted by: Hank Green.


NEW Nvidia AI Turns Text To 3D Video Game Objects 8X Better Than Google | Game Design AI

Nvidia has unveiled its new artificial intelligence 3D model maker for game design: it takes text or photo input, outputs a 3D mesh, and can also edit and adjust 3D models with text descriptions. A new video style transfer from Nvidia uses CLIP to convert the style of 3D models and photos, and a new differential equation-based neural network from MIT solves brain dynamics.
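Nvidia’s exact pipeline is not public here, but one building block named above, CLIP, can be sketched with the open-source transformers implementation: scoring how well a rendered view of a 3D asset matches a text prompt. The checkpoint name, image file, and prompts are assumptions for illustration.

# Illustrative CLIP scoring: how well does a rendered view match each text prompt?
# This is not Nvidia's text-to-3D pipeline, just the open-source CLIP building block.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

render = Image.open("chair_render.png")  # hypothetical view rendered from a 3D mesh
prompts = ["a wooden chair", "a metal robot"]

inputs = processor(text=prompts, images=render, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(**inputs).logits_per_image  # similarity of the image to each prompt
print(logits.softmax(dim=-1))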

AI News Timestamps:
0:00 Nvidia AI Turns Text To 3D Model Better Than Google.
2:03 Nvidia 3D Object Style Transfer AI
4:56 New Machine Learning AI From MIT

#nvidia #ai #3D

Over 1,000 songs with human-mimicking AI vocals have been released by Tencent Music in China. One of them has 100m streams

MBW’s Stat Of The Week is a series in which we highlight a single data point that deserves the attention of the global music industry. Stat Of The Week is supported by Cinq Music Group, a technology-driven record label, distribution, and rights management company. The use of artificial intelligence-created music just moved up a gear.

Building interactive agents in video game worlds

Human behaviour is remarkably complex. Even a simple request like, “Put the ball close to the box” still requires deep understanding of situated intent and language. The meaning of a word like ‘close’ can be difficult to pin down – placing the ball inside the box might technically be the closest, but it’s likely the speaker wants the ball placed next to the box. For a person to correctly act on the request, they must be able to understand and judge the situation and surrounding context.

Most artificial intelligence (AI) researchers now believe that writing computer code which can capture the nuances of situated interactions is impossible. Instead, modern machine learning (ML) researchers have focused on learning about these types of interactions from data. To explore these learning-based approaches and quickly build agents that can make sense of human instructions and safely perform actions in open-ended conditions, we created a research framework within a video game environment.
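DeepMind’s actual agent architecture is not reproduced here; as a toy sketch of what an instruction-conditioned policy can look like, the snippet below maps a tokenized instruction and a game frame to action logits. The bag-of-words encoder, layer sizes, and action space are assumptions.

# Toy sketch of an instruction-conditioned policy: (text instruction, image frame) -> action logits.
# Not DeepMind's architecture; every size and the fusion scheme are assumptions.
import torch
import torch.nn as nn

class InstructionConditionedPolicy(nn.Module):
    def __init__(self, vocab_size: int = 10000, n_actions: int = 16):
        super().__init__()
        self.text_embed = nn.EmbeddingBag(vocab_size, 128)  # bag-of-words instruction encoder
        self.vision = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(128 + 32, n_actions)

    def forward(self, token_ids: torch.Tensor, frame: torch.Tensor) -> torch.Tensor:
        t = self.text_embed(token_ids)             # (batch, 128) instruction embedding
        v = self.vision(frame)                     # (batch, 32) visual features
        return self.head(torch.cat([t, v], dim=-1))

policy = InstructionConditionedPolicy()
tokens = torch.randint(0, 10000, (1, 6))  # e.g. "put the ball close to the box"
frame = torch.rand(1, 3, 64, 64)          # current game frame
action_logits = policy(tokens, frame)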

Today, we’re publishing a paper and collection of videos, showing our early steps in building video game AIs that can understand fuzzy human concepts – and therefore, can begin to interact with people on their own terms.

Building NeuroTech Minimally Invasive Human Machine Interfaces | Dr. Connor Glass

Neuralink’s invasive brain implant vs. Phantom Neuro’s minimally invasive muscle implant: a deep dive on brain-computer interfaces, Phantom Neuro, and the future of restoring lost functions.

Connor Glass.
Phantom is creating a human-machine interfacing system for lifelike control of technology. We are currently hiring skilled and forward-thinking electrical, mechanical, UI, AR/VR, and AI/ML engineers. Looking to get in touch with us? Send us an email at [email protected].

Phantom Neuro.
Phantom is a neurotechnology company, spun out of the lab at The Johns Hopkins University School of Medicine, that is enabling lifelike control of robotic orthopedic technologies, such as prosthetic limbs and exoskeletons. Phantom’s solution, the Phantom X, consists of low-risk implantable sensors, AI, and enabling software. By providing superior control of robotic orthopedic mechanisms, the Phantom X will drastically improve the lives of individuals with limb difference who have yet to see a tangible improvement in quality of life despite significant advancements in the field of robotics.
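Phantom has not published the Phantom X decoding stack, so the snippet below is only a generic illustration of the idea: turning windows of implanted muscle-sensor readings into a gesture command. Window size, features, gesture set, and classifier are all assumptions.

# Generic illustration of muscle-signal decoding, not Phantom's implementation.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def window_features(signal: np.ndarray) -> np.ndarray:
    """Per-channel mean absolute value and RMS over one time window."""
    mav = np.mean(np.abs(signal), axis=0)
    rms = np.sqrt(np.mean(signal ** 2, axis=0))
    return np.concatenate([mav, rms])

# Fake training data: 200 windows of 250 samples across 8 channels, 4 gesture labels.
rng = np.random.default_rng(0)
windows = rng.standard_normal((200, 250, 8))
labels = rng.integers(0, 4, size=200)  # e.g. rest / open / close / pinch

X = np.stack([window_features(w) for w in windows])
clf = RandomForestClassifier(n_estimators=50).fit(X, labels)

new_window = rng.standard_normal((250, 8))
print("predicted gesture id:", clf.predict(window_features(new_window)[None, :])[0])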

Links:
[email protected].
https://www.linkedin.com/in/connor-glass-md-010124141/
https://www.linkedin.com/company/phantomneuro/
https://twitter.com/phantom_neuro

PODCAST INFO:
The Learning With Lowell show is a series for the everyday mammal. In this show we’ll learn about leadership, science, and people building their change into the world. The goal is to dig deeply into people most of us wouldn’t normally get to hear from. The host of the show, Lowell Thompson, is a lifelong autodidact, serial problem solver, and startup founder.
LINKS
Youtube: https://www.youtube.com/channel/UCzri06unR-lMXbl6sqWP_-Q
Youtube clips: https://www.youtube.com/channel/UC-B5x371AzTGgK-_q3U_KfA
Linkedin: https://www.linkedin.com/in/lowell-thompson-2227b074
Twitter: https://twitter.com/LWThompson5
Website: https://www.learningwithlowell.com/
Podcast email: [email protected].


Nano-robot antibodies that fight cancer enter first human drug trial

Scientists in Israel have created the first nano-robot antibodies designed to fight cancer. The first human trial of the new nano-robots will start soon and will determine just how effective the antibodies are. What makes these particular antibodies special is that they are programmed to decide whether the cells surrounding a tumor are “bad” or “good.”

The trial is being run in Australia, and if it goes according to plan, the nano-robot antibodies will be able to attack the cells around a tumor that help it grow while boosting the cells that inhibit the growth of the cancerous cells. The antibodies were invented by Professor Yanay Ofran and are based on human and animal antibodies.

The goal of these nano-robot antibodies is to unlock the full potential that antibodies offer, Ofran says. Current medical uses of antibodies draw on only a fraction of the capabilities of these natural disease fighters, so finding a way to maximize them has been a long-standing goal.