Artificial General Intelligence (AGI) has long been a staple of science fiction, but it was only with the advent of modern computing in the mid-20th century that serious research into its feasibility began.
Speaking at the University of Cambridge in 1980, Stephen Hawking considered the possibility of a theory of everything that would unite general relativity and quantum mechanics – our two leading descriptions of reality – into one neat, all-encompassing equation. We would need some help, he reckoned, from computers. Then he made a provocative prediction about these machines’ growing abilities. “The end might not be in sight for theoretical physics,” said Hawking. “But it might be in sight for theoretical physicists.”
Artificial intelligence has achieved much since then, yet physicists have been slow to use it to search for new and deeper laws of nature. It isn’t that they fear for their jobs. Indeed, Hawking may have had his tongue firmly in his cheek. Rather, it is that the deep-learning algorithms behind AIs spit out answers that amount to a “what” rather than a “why”, which makes them about as useful for a theorist as saying the answer to the question of life, the universe and everything is 42.
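To make the “what” versus “why” distinction concrete, here is a toy sketch (our illustration, not from the article): a neural network learns to predict orbital periods from Kepler's-third-law data, but only an interpretable fit exposes the law itself. The data and both models are hypothetical placeholders.

```python
# Toy illustration of "what" vs "why" (a hypothetical example, not from the article).
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
a = rng.uniform(0.4, 30.0, 200)   # semi-major axes in astronomical units
T = a ** 1.5                      # orbital periods in years (noise-free Kepler's third law)

# The "what": a black-box net predicts periods well, but its weights explain nothing.
mlp = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=5000, random_state=0)
mlp.fit(a.reshape(-1, 1), T)
print("MLP prediction for a = 5 AU:", mlp.predict([[5.0]])[0])

# The "why": a log-log linear fit recovers the law itself, T proportional to a**1.5.
lin = LinearRegression().fit(np.log(a).reshape(-1, 1), np.log(T))
print("Recovered exponent:", lin.coef_[0])  # ~1.5, i.e. T**2 proportional to a**3
```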
The new AI art software brings what the company calls “brand new possibilities for creative applications.”
London- and San Francisco-based Stability AI, the company behind Stable Diffusion, an open-source image-generating AI, has announced the release of Stable Diffusion 2.0, according to a press statement on the company’s website.
What is Stable Diffusion?
The company’s new open-source offering provides new features and improvements over the 1.0 release, including text-to-image models trained with a new text encoder, OpenCLIP, which improves the quality of the generated images.
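As a rough illustration of how such a model is typically invoked, here is a minimal text-to-image sketch assuming the Hugging Face diffusers library and its public stabilityai/stable-diffusion-2 checkpoint; the press statement itself does not prescribe any particular API.

```python
# Minimal text-to-image sketch (assumes the Hugging Face diffusers library;
# the model ID below is the public Stable Diffusion 2 checkpoint).
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2",  # checkpoint using the new OpenCLIP text encoder
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # assumes a CUDA-capable GPU

image = pipe("a photograph of an astronaut riding a horse").images[0]
image.save("astronaut.png")
```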
Astronomers at Caltech have used a machine learning algorithm to classify 1,000 supernovae completely autonomously. The algorithm was applied to data captured by the Zwicky Transient Facility, or ZTF, a sky survey instrument based at Caltech’s Palomar Observatory.
“We needed a helping hand, and we knew that once we trained our computers to do the job, they would take a big load off our backs,” says Christoffer Fremling, a staff astronomer at Caltech and the mastermind behind the new algorithm, dubbed SNIascore. “SNIascore classified its first supernova in April 2021, and, a year and a half later, we are hitting a nice milestone of 1,000 supernovae.”
ZTF scans the night skies every night to look for changes called transient events. This includes everything from moving asteroids to black holes that have just eaten stars to exploding stars known as supernovae. ZTF sends out hundreds of thousands of alerts a night to astronomers around the world, notifying them of these transient events. The astronomers then use other telescopes to follow up and investigate the nature of the changing objects. So far, ZTF data have led to the discovery of thousands of supernovae.
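The text above does not describe SNIascore’s internals, but a hedged sketch of the general idea – automatically typing transients and only auto-reporting confident classifications, deferring the rest to astronomers – might look like the following. All features, labels, and the confidence threshold are illustrative placeholders.

```python
# Hypothetical sketch of automated supernova typing in the spirit of SNIascore;
# features, labels, and the confidence threshold are illustrative placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.normal(size=(5000, 4))      # stand-ins for measured features (rise time, peak mag, ...)
y = rng.integers(0, 2, size=5000)   # mock labels: 1 = Type Ia, 0 = everything else

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Auto-report only confident classifications; defer ambiguous alerts to astronomers.
proba = clf.predict_proba(X_test)[:, 1]
confident_ia = proba > 0.9
print(f"Auto-classified {confident_ia.sum()} of {len(proba)} alerts as Type Ia")
```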
Integrating robots into walls, ceilings, furniture, and appliances could radically change our indoor spaces.
Human-robot hybrids are advancing quickly, but the applications aren’t just for complete synthetic humans. There’s a lot we can learn about ourselves in the process.
Nvidia has unveiled a new artificial-intelligence 3D model maker for game design: it turns text or photo input into a 3D mesh and can also edit and adjust 3D models from text descriptions. A new video style-transfer tool from Nvidia uses CLIP to convert the style of 3D models and photos. And a new differential-equation-based neural network from MIT solves brain dynamics.
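MIT’s model belongs to the broader family of differential-equation-based networks. As an illustrative sketch of that family (not MIT’s actual architecture), here is a minimal neural ODE assuming the torchdiffeq package, where a learned function defines continuous hidden-state dynamics in place of discrete layers.

```python
# Illustrative neural ODE, the broader family that differential-equation-based
# networks belong to (not MIT's actual model). Assumes the torchdiffeq package.
import torch
import torch.nn as nn
from torchdiffeq import odeint

class ODEFunc(nn.Module):
    """Parameterizes the hidden-state dynamics dh/dt = f(h, t)."""
    def __init__(self, dim: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 64), nn.Tanh(), nn.Linear(64, dim))

    def forward(self, t, h):
        return self.net(h)

func = ODEFunc(dim=8)
h0 = torch.randn(1, 8)                   # initial hidden state
t = torch.linspace(0.0, 1.0, steps=20)   # continuous "depth" instead of discrete layers
trajectory = odeint(func, h0, t)         # integrate the learned dynamics over time
print(trajectory.shape)                  # torch.Size([20, 1, 8])
```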
MBW’s Stat Of The Week is a series in which we highlight a single data point that deserves the attention of the global music industry. Stat Of The Week is supported by Cinq Music Group, a technology-driven record label, distribution, and rights management company.
The use of music created by artificial intelligence has just moved up a gear.
It is our pleasure to announce the open-source release of Stable Diffusion Version 2.
The original Stable Diffusion V1, led by CompVis, changed the nature of open-source AI models and spawned hundreds of other models and innovations all over the world. It had one of the fastest climbs to 10K GitHub stars of any software, rocketing past 33K stars in less than two months.
Human behaviour is remarkably complex. Even a simple request like, “Put the ball close to the box” still requires deep understanding of situated intent and language. The meaning of a word like ‘close’ can be difficult to pin down – placing the ball inside the box might technically be the closest, but it’s likely the speaker wants the ball placed next to the box. For a person to correctly act on the request, they must be able to understand and judge the situation and surrounding context.
Most artificial intelligence (AI) researchers now believe that writing computer code that can capture the nuances of situated interactions is impossible. Instead, modern machine learning (ML) researchers have focused on learning about these types of interactions from data. To explore these learning-based approaches and quickly build agents that can make sense of human instructions and safely perform actions in open-ended conditions, we created a research framework within a video game environment.
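As a hedged sketch of what learning such interactions from data can look like, here is a minimal instruction-conditioned policy trained by behavioral cloning on human demonstrations; the architecture, shapes, and vocabulary are illustrative assumptions, not DeepMind’s actual system.

```python
# Hedged sketch of instruction-conditioned behavioral cloning; the architecture,
# shapes, and vocabulary below are illustrative assumptions, not DeepMind's system.
import torch
import torch.nn as nn
import torch.nn.functional as F

class InstructionConditionedPolicy(nn.Module):
    """Maps (observation, tokenized instruction) to action logits."""
    def __init__(self, obs_dim=128, vocab=1000, embed=64, n_actions=10):
        super().__init__()
        self.text = nn.EmbeddingBag(vocab, embed)  # simple bag-of-words instruction encoder
        self.policy = nn.Sequential(
            nn.Linear(obs_dim + embed, 256), nn.ReLU(),
            nn.Linear(256, n_actions),
        )

    def forward(self, obs, instruction_tokens):
        text_feat = self.text(instruction_tokens)
        return self.policy(torch.cat([obs, text_feat], dim=-1))

policy = InstructionConditionedPolicy()
obs = torch.randn(4, 128)                   # mock game observations
tokens = torch.randint(0, 1000, (4, 6))     # mock tokenized instructions
human_actions = torch.randint(0, 10, (4,))  # mock demonstrated actions

# Behavioral cloning: imitate the actions humans took given the same instructions.
loss = F.cross_entropy(policy(obs, tokens), human_actions)
loss.backward()
```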
Today, we’re publishing a paper and a collection of videos showing our early steps in building video game AIs that can understand fuzzy human concepts – and therefore can begin to interact with people on their own terms.