
The greatest artistic tool ever built, or a harbinger of doom for entire creative industries? OpenAI's second-generation system, DALL-E 2, is slowly opening up to the public, and its text-based image generation and editing abilities are awe-inspiring.

The pace of progress in the field of AI-powered text-to-image generation is positively frightening. The generative adversarial network, or GAN, first emerged in 2014, putting forth the idea of two AIs in competition with one another, both “trained” by being shown a huge number of real images, labeled to help the algorithms learn what they’re looking at. A “generator” AI then starts to create images, and a “discriminator” AI tries to guess if they’re real images or AI creations.

At first, they’re evenly matched, both being absolutely terrible at their jobs. But they learn; the generator is rewarded if it fools the discriminator, and the discriminator is rewarded if it correctly picks the origin of an image. Over millions and billions of iterations – each taking a matter of seconds – they improve to the point where humans start struggling to tell the difference.
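The adversarial game described above can be sketched in a few dozen lines. The toy example below is a minimal sketch only, assuming a 1-D "dataset" (samples from a normal distribution), a linear generator, and a logistic-regression discriminator — every modeling choice here is an illustrative assumption, bearing no resemblance to the image-scale networks behind a system like DALL-E 2:

```python
import numpy as np

rng = np.random.default_rng(0)

REAL_MEAN, REAL_STD = 4.0, 1.0   # the "real images" are just samples of N(4, 1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Generator G(z) = a*z + b turns noise z ~ N(0, 1) into fake samples.
a, b = 1.0, 0.0
# Discriminator D(x) = sigmoid(w*x + c) guesses "real" (1) vs. "fake" (0).
w, c = 0.0, 0.0

lr, batch, steps = 0.05, 64, 2000
for _ in range(steps):
    real = rng.normal(REAL_MEAN, REAL_STD, batch)
    z = rng.normal(0.0, 1.0, batch)
    fake = a * z + b

    # Discriminator step: gradient ascent on log D(real) + log(1 - D(fake)),
    # i.e. it is rewarded for correctly picking the origin of a sample.
    d_real, d_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    w += lr * np.mean((1 - d_real) * real - d_fake * fake)
    c += lr * np.mean((1 - d_real) - d_fake)

    # Generator step: gradient ascent on log D(fake) (the non-saturating
    # objective), i.e. it is rewarded for fooling the discriminator.
    d_fake = sigmoid(w * fake + c)
    a += lr * np.mean((1 - d_fake) * w * z)
    b += lr * np.mean((1 - d_fake) * w)

samples = a * rng.normal(0.0, 1.0, 1000) + b
print(f"generated mean ~ {samples.mean():.2f} (real mean is {REAL_MEAN})")
```

The generator starts out producing samples centered at 0, nowhere near the real data; as the two players trade gradient steps, it is pulled toward the real distribution. Real GANs follow exactly this loop, just with deep convolutional networks and backpropagation in place of the hand-derived gradients.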

By Natasha Vita-More.

Has the concept of the technological singularity changed since the late 1990s?

As a theoretical concept it has become more widely recognized, and as a potential threat it is written and talked about a great deal. Because the field of narrow AI is growing, machine learning has found a place in academia, and entrepreneurs are investing in the growth of AI, tech leaders have come to the table and voiced their concerns — notably Bill Gates, Elon Musk, and the late Stephen Hawking. The concept of existential risk has taken a central position in discussions about AI, and machine ethicists are preparing their arguments toward a consensus that near-future robots will force us to rethink the exponential advances in robotics and computer science. Here it is crucial for leaders in philosophy and ethics to address what an ethical machine means and what the true goal of machine ethics is.

Elon Musk explains why we could meet aliens soon, and he is on to something. Musk disagrees with research arguing that there are no aliens; he explains why the Drake equation is important and why the Fermi paradox may be wrong.


Gate that Aliens weren’t able to overcome 👉 https://youtu.be/llBm-4IGI9k

Elon Musk Destroys Apple 👉 https://youtu.be/MXIswmG5xyE

Elon Musk — “Delete Your Facebook” 👉 https://youtu.be/HA7bhpDaQ3Q

Elon Musk: I Will Tell You All about The Aliens: 👉 https://youtu.be/d8RBC3F2kC8

Why do we need AIs to compete against each other? Does a Great Filter stop all alien civilizations at some point? Are we doomed if we find life in our solar system?

David Brin is a scientist, speaker, technical consultant and world-renowned author. His novels have been New York Times Bestsellers, winning multiple Hugo, Nebula and other awards.
A 1997 movie, directed by Kevin Costner, was loosely based on his book The Postman.
His Ph.D. in physics from UCSD followed a master's in optics and an undergraduate degree in astrophysics from Caltech. He was a postdoctoral fellow at the California Space Institute and the Jet Propulsion Laboratory.
Brin serves on advisory committees dealing with subjects as diverse as national defense and homeland security, astronomy and space exploration, SETI and nanotechnology, future/prediction and philanthropy. He has served since 2010 on the council of external advisers for NASA's Innovative Advanced Concepts program (NIAC), which supports the most inventive and potentially ground-breaking new endeavors.

https://www.davidbrin.com/books.html
https://twitter.com/DavidBrin
https://www.newsweek.com/soon-humanity-wont-alone-universe-opinion-1717446

Youtube Membership: https://www.youtube.com/channel/UCz3qvETKooktNgCvvheuQDw/join
Podcast: https://anchor.fm/john-michael-godier/subscribe
Apple: https://apple.co/3CS7rjT

More JMG
https://www.youtube.com/c/JohnMichaelGodier

Want to support the channel?

What does the future of AI look like? Let’s try out some AI software that’s readily available for consumers and see how it holds up against the human brain.

🦾 AI can outperform humans. But at what cost? 👉 https://cybernews.com/editorial/ai-can-outperform-humans-but-at-what-cost/

Whether you welcome our new AI overlords with open arms, or you’re a little terrified about what an AI future may look like, many say it’s not really a question of ‘if,’ but more of a question of ‘when.’

Okay, you've got AI technologies on every scale, from Siri to self-driving cars, from text generators to humanoid robots — but what is the real threat? As far back as 2013, Oxford University (ironically) used a machine-learning algorithm to estimate whether 702 different jobs throughout America could become automated, and found that a whopping 47% could in fact be replaced by machines.

A huge concern that comes alongside this is whether the technology will be reliable enough. We're already seeing AI technology in countless professions, most recently the boom of AI-generated text used in over 300 different apps. It's even used beyond this planet, out in space. If anything, this is a rude awakening to the future potential of AI technology outside the industrial market.

🦾 Do humans stand a chance against AI technology?

Shockwaves caused by asteroids colliding with Earth create materials with a range of complex carbon structures, which could be used for advancing future engineering applications, according to an international study led by UCL and Hungarian scientists.

Published today in Proceedings of the National Academy of Sciences, the team of researchers has found that carbon structures formed during a high-energy shock wave from an asteroid impact around 50,000 years ago have unique and exceptional properties, caused by the short-term high temperatures and extreme pressure.

The researchers say that these structures can be targeted for advanced mechanical and electronic applications, giving us the ability to design materials that are not only ultra-hard but also malleable with tunable electronic properties.

The potential to disseminate disinformation on a large scale and undermine scientifically established facts represents an existential risk to humanity. While vigorously defending the right to freedom of expression everywhere, higher education institutions must also develop the capacity to reach a shared, empirically backed consensus based on facts, science and established knowledge.

Has New York City gone too far with this public service announcement? The city's Office of Emergency Management released this ad, telling residents what to do in the event of a nuclear attack. The host tells people to take cover indoors immediately. If you're outside when the strike occurs, residents are advised to shower and bag their clothes, then pay attention to local media for information and next steps. Inside Edition Digital's Mara Montalbano has more.