
Elon Musk’s Message on Artificial Superintelligence — ASI

Elon Musk is on the record stating that artificial superintelligence (ASI) could bring about the end of the human race. He has publicly expressed concern about AI many times, and he considers the advent of a digital superintelligence the most pressing issue for humanity to get right.

What happens when machines surpass humans in general intelligence? If machine brains overtook human brains in general intelligence, the resulting superintelligence would have emerged from an event called the intelligence explosion, which some researchers believe is likely to occur in the 21st century. It is unknown what, or who, this machine network would become. For now, the issue of superintelligence remains peripheral to mainstream AI research and is mostly discussed by a small group of academics.

Besides Elon Musk, the Swedish philosopher Nick Bostrom is among the well-known public thinkers worried about AI. He lays a foundation for understanding the future of humanity and intelligent life: Now imagine a machine, structurally similar to a brain but with immense hardness and flexibility, designed from the bottom up to function as an intelligent agent. Given sufficient time, a machine like this could acquire enormous knowledge and skills, surpassing human intellectual capacity in virtually every field. At that point the machine would have become superintelligent. In other words, the machine's intellectual capacities would exceed those of all of humanity put together by a very large margin. This would represent the most radical change in the history of life on Earth.

To develop a superintelligence that benefits humanity, the process has to proceed in a series of steps, with each step worked out before we move on to the next. It might even be possible to program the AI to help us achieve things we humans cannot accomplish on our own. The challenge is not simply creating such systems and learning how they carry out commands, but interacting with them and evolving ourselves at the same time: learning how to be human after the first ASI.


DeepMind Introduces Algorithms for Causal Reasoning in Probability Trees

Are you a cutting-edge AI researcher looking for models with clean semantics that can represent the context-specific causal dependencies necessary for causal induction? If so, maybe you should take a look at good old-fashioned probability trees.

Probability trees may have been around for decades, but they have received little attention from the AI and ML community. Until now. “Probability trees are one of the simplest models of causal generative processes,” explains the new DeepMind paper Algorithms for Causal Reasoning in Probability Trees, which the authors say is the first to propose concrete algorithms for causal reasoning in discrete probability trees.

Cognitive scientists say humans learn to reason largely by inducing causal relationships from observation, and we do this remarkably well. Even when the data we perceive is sparse and limited, humans can quickly learn causal structures, such as interactions between physical objects, from observations of the co-occurrence frequencies between causes and effects.
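Probability trees are simple enough to sketch in a few lines. The representation below is my own minimal illustration, not code from the DeepMind paper: each node maps an outcome to a probability and a child subtree, so later branches can depend on earlier outcomes (the context-specific dependence the paper highlights). A crude "do"-style intervention is shown by replacing the root's distribution.

```python
# Minimal sketch of a discrete probability tree (illustrative only; names
# and structure are assumptions, not DeepMind's implementation).
# A node is a dict {outcome: (probability, child_subtree)}; a leaf is None.

def marginal(tree, event, path=()):
    """Total probability of root-to-leaf paths whose outcomes satisfy `event`."""
    if tree is None:  # leaf reached: test the completed outcome sequence
        return 1.0 if event(path) else 0.0
    return sum(p * marginal(child, event, path + (outcome,))
               for outcome, (p, child) in tree.items())

# Two coin flips, where the second flip's bias depends on the first:
tree = {
    "H": (0.5, {"H": (0.9, None), "T": (0.1, None)}),
    "T": (0.5, {"H": (0.2, None), "T": (0.8, None)}),
}

# Observational: P(second flip is heads) = 0.5*0.9 + 0.5*0.2 = 0.55
p_obs = marginal(tree, lambda path: path[1] == "H")

# Crude intervention do(first flip = H): force the first branch to "H"
# while keeping its subtree, then recompute. Result: 0.9.
do_tree = {"H": (1.0, tree["H"][1])}
p_do = marginal(do_tree, lambda path: path[1] == "H")

print(round(p_obs, 2), round(p_do, 2))  # 0.55 0.9
```

The gap between the observational 0.55 and the interventional 0.9 is exactly the kind of causal distinction the paper's algorithms formalize for trees of this shape.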

How to Get Professional Results with Photoshop’s AI Sky Replacement Tool

One of the major updates to the latest version of Photoshop is the addition of Sky Replacement: a tool that has the potential to save you a ton of time when editing your landscape images. But as Aaron Nace explains in this video, this AI-powered tool requires a bit of thought if you want to get professional results.

AI-powered photo editing tools are often sold as "one click" or "a few clicks" solutions that can transform a photo with next to no input from you. But even with the most advanced machine learning available, no automated tool can produce foolproof results without a little thought from the creator on the other end of the mouse.

Unlocking AI’s Potential for Social Good

Three actions policymakers and business leaders can take today.


New developments in AI could spur a massive democratization of access to services and work opportunities, improving the lives of millions of people around the world and creating new commercial opportunities for businesses. Yet they also raise the specter of potential new social divides and biases, sparking a public backlash and regulatory risk for businesses. For the U.S. and other advanced economies, which are increasingly fractured along income, racial, gender, and regional lines, these questions of equality are taking on a new urgency. Will advances in AI usher in an era of greater inclusiveness, increased fairness, and widening access to healthcare, education, and other public services? Or will they instead lead to new inequalities, new biases, and new exclusions?

Three frontier developments stand out in terms of both their promised rewards and their potential risks to equality. These are human augmentation, sensory AI, and geographic AI.

Human Augmentation

Variously described as biohacking or Human 2.0, human augmentation technologies have the potential to enhance human performance for good or ill.

Ex-US cyber command chief: Enemies using AI is ‘existential threat’

Certain cyber-artificial intelligence attacks could pose an existential threat to the US and the West, former US Cyber Command chief Maj.-Gen. (ret.) Brett Williams said on Tuesday.

Speaking as part of Cybertech’s virtual conference, Williams said, “artificial intelligence is the real thing. It is already in use by attackers. When they learn how to do deepfakes, I would argue this is potentially an existential threat.”

DARPA Testing the Limits of Unmanned Ships in New NOMARS Program

As the Defense Advanced Research Projects Agency (DARPA) explores designs for a ship that could operate without humans aboard, the agency is keeping the Navy involved in the effort so the work can transition forward if the program succeeds.

While the Navy is developing unmanned surface vehicles based on designs meant for ships that could carry humans aboard, the No Manning Required Ship (NOMARS) program is the first to pursue a design that takes humans out of the equation entirely.

Gregory Avicola, the NOMARS program manager, told USNI News in a recent interview that DARPA has had conversations with Navy offices like PMS-406, the service’s program executive office for unmanned and small combatants, and the Surface Development Squadron, which has been tasked with developing the concept of operations for unmanned surface vehicles, since the agency started the NOMARS initiative.