
It has been really fun talking to the kids about AI. Should we help AI consciousness to emerge, or should we try to prevent it? Can you design the kindest possible AI? Can we use AI as a universal emotion translator? How would we search for an AI civilization? These are just a few of the many questions you can discuss with kids.


Ultimately, introducing children to AI early is not limited to formal instruction. Simply contemplating future scenarios of AI evolution provides plentiful material for engaging students with the subject. A survey on the future of AI administered by the Future of Life Institute is a great starting point for such discussions. Social studies classes, as well as school debate and philosophy clubs, could also launch a dialogue on AI ethics – an AI nurse selecting a medicine, an AI judge deciding a criminal case, or a driverless car switching lanes to avoid a collision.

Demystifying AI for our children in all its complexity, while giving them early insight into its promises and perils, will make them confident in their ability to understand and control this incredible technology as it develops rapidly within their lifetimes.


“I mean, I suspect we could have an army of 120,000, of which 30,000 might be robots, who knows?” Carter said, although he stressed he was not setting any particular target in terms of future numbers.

Investment in robot warfare was to be at the heart of the planned integrated five-year defence review, whose future was thrown into doubt last month when the chancellor, Rishi Sunak, postponed the cross-government spending review to which it had been linked.

Carter said negotiations with Downing Street and the Treasury about salvaging the multi-year defence funding settlement were “going on in a very constructive way” – as he lobbied in public for a long-term financial deal.

DARPA recently awarded contracts to five companies to develop algorithms enabling mixed teams of manned and unmanned combat aircraft to conduct aerial dogfighting autonomously.

Boeing, EpiSci, Georgia Tech Research Institute, Heron Systems, and physicsAI were chosen to develop air combat maneuvering algorithms for individual and team tactical behaviors under Technical Area (TA) 1 of DARPA’s Air Combat Evolution (ACE) program. Each team is tasked with developing artificial intelligence agents that expand one-on-one engagements to two-on-one and two-on-two within-visual-range aerial battles. The companies’ algorithms will be tested in each of three program phases: modeling and simulation, sub-scale unmanned aircraft, and full-scale combat-representative aircraft, scheduled for 2023.

“The TA1 performers include a large defense contractor, a university research institute, and boutique AI firms, who will build upon the first-gen autonomous dogfighting algorithms demonstrated in the AlphaDogfight Trials this past August,” said Air Force Col. Dan “Animal” Javorsek, program manager in DARPA’s Strategic Technology Office. “We will be evaluating how well each performer is able to advance their algorithms to handle individual and team tactical aircraft behaviors, in addition to how well they are able to scale the capability from a local within-visual-range environment to the broader, more complex battlespace.”
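
To make that scaling challenge concrete, here is a minimal toy sketch in Python. It is purely illustrative and not based on any ACE code (which is not public): one naive way a trained 1v1 dogfighting policy could be reused for team fights is to have each aircraft greedily pick the nearest unengaged adversary and delegate to the solo policy. Every name and the policy interface here are hypothetical.

```python
# Illustrative sketch only: a toy target-assignment wrapper that reuses a
# one-on-one engagement policy for team fights. All names and the policy
# interface are hypothetical; no real ACE code is public.
from dataclasses import dataclass
import math

@dataclass
class Aircraft:
    x: float
    y: float

def solo_policy(own: Aircraft, target: Aircraft) -> str:
    """Stand-in for a trained 1v1 dogfighting agent: turn toward the target."""
    return "turn_left" if (target.y - own.y) > 0 else "turn_right"

def team_policy(team: list, foes: list) -> list:
    """Scale 1v1 behavior to team engagements by greedily assigning each
    aircraft the nearest unengaged adversary, then delegating to the solo policy."""
    actions, taken = [], set()
    for own in team:
        # pick the closest foe not already engaged by a teammate
        candidates = [i for i in range(len(foes)) if i not in taken] or list(range(len(foes)))
        i = min(candidates, key=lambda i: math.hypot(foes[i].x - own.x, foes[i].y - own.y))
        taken.add(i)
        actions.append(solo_policy(own, foes[i]))
    return actions

print(team_policy([Aircraft(0, 0), Aircraft(1, 0)],
                  [Aircraft(5, 2), Aircraft(4, -3)]))
```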

Spinal cord injury (SCI) is of significant concern to the Department of Defense. Of the 337,000 Americans with serious SCIs, approximately 44,000 are veterans, and 11,000 new injuries occur each year [1]. SCI is a complex condition – the injured often face lifelong paralysis and increased long-term morbidity due to factors such as sepsis and autonomic nervous system dysfunction. While considerable research effort has been devoted to restorative and therapeutic technologies for SCI, significant challenges remain.

DARPA’s Bridging the Gap Plus (BG+) program aims to develop new approaches to treating SCI by integrating injury stabilization, regenerative therapy, and functional restoration. Today, DARPA announced the award of contracts to the University of California-Davis, Johns Hopkins University, and the University of Pittsburgh to advance this crucial work. Multidisciplinary teams at each of these universities are tasked with developing systems of implantable, adaptive devices that aim to reduce injury effects during early phases of SCI, and potentially restore function during the later chronic phase.

“The BG+ program looks to create opportunities to provide novel treatment approaches immediately after injury,” noted Dr. Al Emondi, BG+ program manager. “Systems will consist of active devices performing real-time biomarker monitoring and intervention to stabilize and, where possible, rebuild the neural communications pathways at the site of injury, providing the clinician with previously unavailable diagnostic information for automated or clinician-directed interventions.”
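
As a rough illustration of the closed-loop idea in that quote, here is a toy Python sketch of a monitor that watches a signal in real time and flags when an intervention is needed. The biomarker, the thresholds, and the intervention are all invented placeholders; the actual BG+ systems are far more sophisticated.

```python
# Minimal sketch of the closed-loop monitoring idea, not the BG+ design:
# an implanted device tracks a biomarker in real time and flags (or applies)
# an intervention when the signal leaves a safe band. The safe band and the
# readings below are hypothetical placeholders.
SAFE_LOW, SAFE_HIGH = 0.4, 0.8   # hypothetical normalized safe band

def monitor(sample_stream):
    for t, level in sample_stream:
        if level < SAFE_LOW or level > SAFE_HIGH:
            # in a real system this could trigger clinician review or an
            # automated stimulation protocol at the injury site
            yield (t, level, "intervene")
        else:
            yield (t, level, "ok")

readings = [(0, 0.55), (1, 0.62), (2, 0.31), (3, 0.47)]
for event in monitor(readings):
    print(event)
```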

Even with decades of unprecedented development in computational power, the human brain still holds many advantages over modern computing technologies. Our brains are extremely efficient for many cognitive tasks and do not separate memory and computing, unlike standard computer chips.

In the last decade, the new paradigm of neuromorphic computing has emerged, inspired by neural networks of the brain and based on energy-efficient hardware for information processing.

To create devices that mimic what occurs in our brain’s neurons and synapses, researchers need to overcome a fundamental molecular engineering challenge: how to design devices that exhibit controllable and energy-efficient transition between different resistive states triggered by incoming stimuli.
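
A toy numerical model helps make that challenge concrete. The sketch below (with illustrative parameters, not values from any real device) represents a device whose conductance steps between bounded resistive states in response to incoming voltage pulses, loosely mimicking synaptic potentiation and depression.

```python
# A toy model of the behavior described above: a device whose resistive
# (conductance) state moves between bounds in small, stimulus-triggered
# steps, loosely mimicking synaptic potentiation and depression.
# Parameter values are illustrative only.
class ResistiveDevice:
    def __init__(self, g=0.5, g_min=0.1, g_max=1.0, step=0.05):
        self.g, self.g_min, self.g_max, self.step = g, g_min, g_max, step

    def pulse(self, polarity: int) -> float:
        """Apply a voltage pulse: +1 raises conductance, -1 lowers it."""
        self.g = min(self.g_max, max(self.g_min, self.g + polarity * self.step))
        return self.g

dev = ResistiveDevice()
print([round(dev.pulse(+1), 2) for _ in range(3)])  # potentiation: 0.55, 0.6, 0.65
print([round(dev.pulse(-1), 2) for _ in range(2)])  # depression: 0.6, 0.55
```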

The COVID-19 crisis has led to a significant increase in the use of cyberspace, enabling people to work together from distant places and to interact with remote environments and individuals by embodying virtual avatars or physical ones such as robots. However, the limits of avatar embodiment are not well understood, nor is it clear how such embodiment affects human behavior.

Therefore, a research team comprising Takayoshi Hagiwara and Professor Michiteru Kitazaki from Toyohashi University of Technology; Dr. Ganesh Gowrishankar (senior researcher) from UM-CNRS LIRMM; Professor Maki Sugimoto from Keio University; and Professor Masahiko Inami from The University of Tokyo aimed to develop a novel collaboration method using a shared avatar that can be controlled concurrently by two individuals in VR, and to investigate human motor behavior while the avatar is being controlled.

The full movements of both participants were monitored via a motion-capture system, and the shared avatar's movements were computed as the average of the two participants' movements. Twenty participants (10 dyads) were asked to perform reaching movements with their hands towards target cubes presented at various locations. Participants exhibited faster reaction times with the shared avatar than when acting individually, and the avatar's hand movements were straighter and less jerky than those of the participants. The participants reported a sense of agency and body ownership towards the shared avatar, even though each contributed only part of its movement.
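
The averaging rule itself is simple to express. Below is a minimal Python sketch of it, with invented joint names and coordinates: each frame of the shared avatar's pose is the mean of the two participants' motion-captured poses.

```python
# Sketch of the averaging rule the study reports: the shared avatar's pose
# is the mean of the two participants' motion-captured poses, frame by
# frame. Joint names and 3-D positions are invented for illustration.
def shared_avatar_pose(pose_a: dict, pose_b: dict) -> dict:
    """Average two participants' joint positions into one avatar pose."""
    return {
        joint: tuple((a + b) / 2.0 for a, b in zip(pose_a[joint], pose_b[joint]))
        for joint in pose_a
    }

frame_a = {"hand": (0.30, 1.10, 0.52), "elbow": (0.25, 1.00, 0.40)}
frame_b = {"hand": (0.36, 1.16, 0.48), "elbow": (0.27, 1.04, 0.44)}
print(shared_avatar_pose(frame_a, frame_b))
```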

An artificial intelligence technique—machine learning—is helping accelerate the development of highly tunable materials known as metal-organic frameworks (MOFs) that have important applications in chemical separations, adsorption, catalysis, and sensing.

Utilizing data on the properties of more than 200 existing MOFs, the machine learning platform was trained to help guide the development of new materials by predicting an often-essential property: water stability. Using guidance from the model, researchers can avoid the time-consuming task of synthesizing and then experimentally testing new candidate MOFs for aqueous stability. Researchers are already expanding the model to predict other important MOF properties.
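
For readers curious what such a workflow looks like, here is a hedged sketch using random placeholder data and a generic random-forest classifier, rather than the authors' actual model or descriptors: train on labeled MOFs, then screen a new candidate before committing to synthesis.

```python
# Hedged sketch of the general workflow, not the published model: train a
# classifier on descriptors of known MOFs labeled water-stable or not, then
# screen new candidates. Data here is random placeholder; real inputs would
# be chemical/structural descriptors for the ~200 characterized MOFs.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))          # placeholder descriptors per MOF
y = rng.integers(0, 2, size=200)        # placeholder stable / unstable labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", model.score(X_te, y_te))

# screen a hypothetical new candidate before committing to synthesis
candidate = rng.normal(size=(1, 12))
print("predicted water-stable?", bool(model.predict(candidate)[0]))
```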

Supported by the Office of Science’s Basic Energy Sciences program within the U.S. Department of Energy (DOE), the research was reported Nov. 9 in the journal Nature Machine Intelligence. The research was conducted in the Center for Understanding and Control of Acid Gas-Induced Evolution of Materials for Energy (UNCAGE-ME), a DOE Energy Frontier Research Center located at the Georgia Institute of Technology.

A unique type of modular self-reconfiguring robotic system has been unveiled. The term is a mouthful, but it basically refers to a robot that can construct itself out of modules that connect to one another to accomplish a given task.

There has been great interest in such machines, also referred to as MSRRs, in recent years. One recent project, called simply Space Engine, can construct its own physical space environment to meet living, work, and recreational needs. It accomplishes this by generating its own kinetic forces to move and shape such spaces, adding and removing electromagnets to shift and assemble modules into optimal room shapes.

MSRRs nevertheless face some constraints. They require gender-opposite connector components, which is limiting in some circumstances, and the modules must coordinate their trajectories to connect efficiently during self-assembly. Those tasks are time-consuming, and success rates for connections between modules haven't been consistently high.
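
To illustrate the coordination problem, here is a toy Python sketch, with invented coordinates and a deliberately simple greedy rule, of assigning free modules to docking slots so that total travel stays small. Real MSRR trajectory planning, which must also avoid collisions and sequence connections, is considerably harder.

```python
# Toy illustration of the coordination problem above: assign free modules
# to docking slots so total travel is small, a simplified stand-in for the
# trajectory planning real MSRRs perform during self-assembly.
# The greedy rule and coordinates are invented for this sketch.
import math

def assign_modules(modules, slots):
    """Greedily pair each module with the nearest unclaimed slot."""
    remaining = list(range(len(slots)))
    plan = {}
    for m, (mx, my) in enumerate(modules):
        j = min(remaining, key=lambda j: math.hypot(slots[j][0] - mx, slots[j][1] - my))
        remaining.remove(j)
        plan[m] = j
    return plan

modules = [(0.0, 0.0), (2.0, 1.0), (5.0, 5.0)]
slots = [(1.0, 0.0), (5.0, 4.0), (2.5, 1.5)]
print(assign_modules(modules, slots))  # module index -> slot index
```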