
As progress in traditional computing slows, new forms of computing are coming to the forefront. At Penn State, a team of engineers is attempting to pioneer a type of computing that mimics the efficiency of the brain’s neural networks while exploiting the brain’s analog nature.

Modern computing is digital, made up of two states: on or off, one or zero. An analog computer, like the brain, has many possible states. It is the difference between flipping a light switch on or off and turning a dimmer switch to any level of lighting in between.
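
To make the switch-versus-dimmer analogy concrete, here is a tiny Python sketch (an illustration of the contrast only, not anything from the Penn State work): the digital signal is confined to two values, while the analog one can sit anywhere in a continuous range.

```python
# Toy contrast between a two-state digital signal and a continuous analog one.

def digital_switch(on: bool) -> float:
    """Light switch: only fully off (0.0) or fully on (1.0)."""
    return 1.0 if on else 0.0

def analog_dimmer(level: float) -> float:
    """Dimmer: any brightness between 0.0 and 1.0."""
    return min(max(level, 0.0), 1.0)

print(digital_switch(True))  # 1.0  -- one of exactly two possible states
print(analog_dimmer(0.37))   # 0.37 -- one of a continuum of possible states
```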

Neuromorphic or brain-inspired computing has been studied for more than 40 years, according to Saptarshi Das, the team leader and Penn State assistant professor of engineering science and mechanics. What’s new is that as the limits of digital computing have been reached, the need for high-speed image processing, for instance for self-driving cars, has grown. The rise of big data, which requires types of pattern recognition for which the brain architecture is particularly well suited, is another driver in the pursuit of neuromorphic computing.

Microsoft has announced the launch of the public preview of a free app that allows users to train machine learning (ML) models without writing any code.

This app, Lobe, has been designed for Windows and Mac and currently supports only image classification; however, the tech giant plans to expand it to other model and data types in the future.

According to the Lobe website, users show the app examples of what they want it to learn, and it automatically trains a custom machine learning model that can then be shipped in the users’ own apps.
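
Lobe itself requires no code, but to illustrate the kind of workflow such a tool automates behind the scenes, here is a rough sketch of training an image classifier from labeled examples with scikit-learn; the library, model, and bundled digits dataset are stand-ins of my own choosing, not anything Lobe actually uses.

```python
# Illustrative only: training a simple image classifier from labeled examples,
# the kind of step a no-code tool like Lobe automates for the user.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

digits = load_digits()  # small 8x8 grayscale digit images with labels

X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=2000)  # a basic classifier as a stand-in
model.fit(X_train, y_train)                # "show it labeled examples"

print(f"test accuracy: {model.score(X_test, y_test):.2f}")
```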

Elon Musk is on the record stating that artificial superintelligence, or ASI, could bring about the end of the human race. Musk has publicly expressed concern about AI many times, and he thinks the advent of a digital superintelligence is the most pressing issue for humanity to get right.

What happens when machines surpass humans in general intelligence? If machine brains surpassed human brains in general intelligence, then this new superintelligence would have undergone an event called the intelligence explosion, which some believe is likely to occur in the 21st century. It is unknown what, or who, this machine network would become. For now, the issue of superintelligence remains peripheral to mainstream AI research and is mostly discussed by a small group of academics.

Besides Elon Musk, the Swedish philosopher Nick Bostrom is among the well-known public thinkers who are worried about AI. He lays the foundation for understanding the future of humanity and intelligent life: now imagine a machine, structurally similar to a brain but with immense hardness and flexibility, designed from the bottom up to function as an intelligent agent. Given sufficient time, a machine like this could acquire enormous knowledge and skills, surpassing human intellectual capacity in virtually every field. At that point the machine would have become superintelligent; in other words, its intellectual capacities would exceed those of all of humanity put together by a very large margin. This would represent the most radical change in the history of life on Earth.

In order to develop a superintelligence that would benefit humanity, the process has to unfold in a series of steps, with each step worked out before we move on to the next. In fact, it might just be possible to program the AI to help us achieve things we humans may not be able to do on our own. The task is not simply creating these machines and learning how to command them, but interacting with them and evolving ourselves at the same time; it is learning how to be human after the first ASI.

Are you a cutting-edge AI researcher looking for models with clean semantics that can represent the context-specific causal dependencies necessary for causal induction? If so, maybe you should take a look at good old-fashioned probability trees.

Probability trees may have been around for decades, but they have received little attention from the AI and ML community. Until now. “Probability trees are one of the simplest models of causal generative processes,” explains the new DeepMind paper Algorithms for Causal Reasoning in Probability Trees, which the authors say is the first to propose concrete algorithms for causal reasoning in discrete probability trees.
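
To make the object concrete, here is a minimal Python sketch of a discrete probability tree (a toy of my own, not the causal-reasoning algorithms from the DeepMind paper): each node branches with probabilities that sum to one, and the probability of a path is the product of the branch probabilities along it.

```python
# Toy discrete probability tree: each node's outgoing branches carry
# probabilities, and reaching a node has probability equal to the product
# of the branch probabilities along its path from the root.

class Node:
    def __init__(self, label, children=None):
        self.label = label
        self.children = children or []  # list of (probability, Node) pairs

def path_probability(root, path):
    """Probability of following a sequence of labels down from the root."""
    prob, node = 1.0, root
    for label in path:
        for p, child in node.children:
            if child.label == label:
                prob, node = prob * p, child
                break
        else:
            return 0.0  # the requested path does not exist in the tree
    return prob

# Toy generative process: whether it rains affects whether the ground is wet.
tree = Node("start", [
    (0.3, Node("rain",    [(0.9, Node("wet")), (0.1, Node("dry"))])),
    (0.7, Node("no rain", [(0.2, Node("wet")), (0.8, Node("dry"))])),
])

print(round(path_probability(tree, ["rain", "wet"]), 2))  # 0.27
```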

Humans naturally learn to reason in large part by inducing causal relationships from our observations, and we do this remarkably well, cognitive scientists say. Even when the data we perceive is sparse and limited, humans can quickly learn causal structure, for example inferring how physical objects interact from observations of the co-occurrence frequencies between causes and effects.

One of the major updates to the latest version of Photoshop is the addition of Sky Replacement: a tool that has the potential to save you a ton of time when editing your landscape images. But as Aaron Nace explains in this video, this AI-powered tool requires a bit of thought if you want to get professional results.

AI-powered photo editing tools are always sold as “one click” or “a few clicks” solutions that can transform a photo with next to no input from you. But even with the most advanced machine learning available, no automated tool can generate foolproof results without a little thought from the creator on the other end of that mouse.

Three actions policymakers and business leaders can take today.


New developments in AI could spur a massive democratization of access to services and work opportunities, improving the lives of millions of people around the world and creating new commercial opportunities for businesses. Yet they also raise the specter of potential new social divides and biases, sparking a public backlash and regulatory risk for businesses. For the U.S. and other advanced economies, which are increasingly fractured along income, racial, gender, and regional lines, these questions of equality are taking on a new urgency. Will advances in AI usher in an era of greater inclusiveness, increased fairness, and widening access to healthcare, education, and other public services? Or will they instead lead to new inequalities, new biases, and new exclusions?

Three frontier developments stand out in terms of both their promised rewards and their potential risks to equality. These are human augmentation, sensory AI, and geographic AI.

Human Augmentation

Certain artificial intelligence-powered cyberattacks could pose an existential threat to the US and the West, former US Cyber Command chief Maj.-Gen. (ret.) Brett Williams said on Tuesday.

Speaking as part of Cybertech’s virtual conference, Williams said, “Artificial intelligence is the real thing. It is already in use by attackers. When they learn how to do deepfakes, I would argue this is potentially an existential threat.”

As the Defense Advanced Research Projects Agency (DARPA) explores designs for a ship that could operate without humans aboard, the agency is keeping the Navy involved in the effort to ensure it moves forward should the program’s work succeed.

While the Navy is creating unmanned surface vehicles based on designs meant for ships that could bring humans aboard, the No Manning Required Ship (NOMARS) program is the first to pursue a design that takes humans out of the calculation.

Gregory Avicola, the NOMARS program manager, told USNI News in a recent interview that since the agency started the NOMARS initiative, DARPA has been in conversation with Navy offices such as PMS-406, the service’s program executive office for unmanned and small combatants, and the Surface Development Squadron, which has been tasked with developing the concept of operations for unmanned surface vehicles.