
Lockheed Martin’s new hypersonic plane is expected to travel at Mach 6.

The Lockheed Martin SR-72, rumored to be the world’s fastest plane, is expected to make a test flight in 2025, twelve years after the concept was first unveiled in 2013.

The SR-72 will be the successor to the SR-71 Blackbird, the fastest air-breathing manned aircraft on record, which smashed speed records in 1974 and was retired by the U.S. Air Force back in 1998.

The SR-72, or “Son of Blackbird,” is envisioned as an unmanned, reusable, hypersonic aircraft for reconnaissance, surveillance, and strike missions. Its strike role comes to the fore in reports that it will carry Lockheed Martin’s novel High-Speed Strike Weapon (HSSW). Those combat capabilities would let it hit targets in contested environments deemed too risky for slower, manned aircraft.

Because the technology needed to build the aircraft was beyond reach when the project was announced in 2013, the program sat on hold for several years.

4D printing works the same way as 3D printing; the only difference is that the printing material allows the finished object to change shape in response to environmental factors.

In this case, the bots’ hydrogel material allows them to morph into different shapes when they encounter a change in pH levels — and cancer cells, as it happens, are usually more acidic than normal cells.

The microrobots were then placed in an iron oxide solution, to give them a magnetic charge.

This combination of shape-shifting and magnetism means the bots could become assassins for cancer — destroying tumors without the usual collateral damage on the rest of the body.
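The two triggers described above, pH-driven morphing and magnetic steering, can be sketched as a simple state update. This is a toy illustration invented for this article, not the researchers’ actual model; the pH threshold and step sizes are made-up values.

```python
# Toy model of a pH-responsive, magnetically steered microbot.
# All numbers (pH threshold, drift step) are illustrative, not from the study.
from dataclasses import dataclass

TUMOR_PH_THRESHOLD = 6.5  # tumor tissue is more acidic than healthy ~7.4


@dataclass
class Microbot:
    x: float = 0.0
    y: float = 0.0
    shape: str = "fish"  # folded swimming shape

    def step(self, ph: float, field: tuple[float, float]) -> None:
        # Magnetic steering: drift along the applied field direction.
        self.x += field[0]
        self.y += field[1]
        # pH-triggered morphing: open up and release cargo in acidic tissue.
        if ph < TUMOR_PH_THRESHOLD:
            self.shape = "open"


bot = Microbot()
bot.step(ph=7.4, field=(1.0, 0.0))  # healthy tissue: keeps swimming shape
bot.step(ph=6.0, field=(1.0, 0.0))  # acidic, tumor-like: morphs
print(bot.x, bot.shape)  # 2.0 open
```

The point of the sketch is that the bot carries no electronics: the “program” is the material itself, with the magnetic field supplying navigation and the chemical environment supplying the release trigger.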

Full Story:


A school of fish-y microbots could one day swim through your veins and deliver medicine to precise locations in your body — and cancer patients may be the first people to benefit from this revolution in nanotechnology.

PARIS, Dec. 23, 2021 – LightOn announces the integration of one of its photonic co-processors into the Jean Zay supercomputer, one of the Top500 most powerful computers in the world. Under a pilot program with GENCI and IDRIS, the insertion of a cutting-edge analog photonic accelerator into a high-performance computer (HPC) represents a technological breakthrough and a world premiere. The LightOn photonic co-processor will be available to selected users of the Jean Zay research community over the next few months.

LightOn’s Optical Processing Unit (OPU) uses photonics to speed up randomized algorithms at very large scale while working in tandem with standard silicon CPUs and NVIDIA’s latest A100 GPU technology. The technology aims to reduce overall computing time and power consumption in an area deemed “essential to the future of computational science and AI for Science,” according to a 2021 U.S. Department of Energy report on “Randomized Algorithms for Scientific Computing.”
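LightOn has publicly described the OPU’s core primitive as a large optical random projection: light encoding the input is scattered through a medium and the intensity is measured. The NumPy sketch below shows that kind of randomized computation in software, purely to illustrate the math; the dimensions are arbitrary and nothing here reflects LightOn’s hardware or API.

```python
# Random-feature projection: the randomized primitive that a photonic
# co-processor can accelerate in hardware. Dimensions are illustrative.
import numpy as np

rng = np.random.default_rng(0)

n_features, n_random = 1_000, 10_000
R = rng.standard_normal((n_random, n_features))  # fixed random matrix


def random_features(x: np.ndarray) -> np.ndarray:
    """Project x through R, then take |Rx|^2, mimicking the intensity
    (squared magnitude) measured at the detector of an optical system."""
    return np.abs(R @ x) ** 2


x = rng.standard_normal(n_features)
phi = random_features(x)
print(phi.shape)  # (10000,)
```

On a CPU this dense matrix multiply dominates the cost as dimensions grow, which is precisely why performing it optically, at fixed energy cost regardless of matrix size, is attractive for randomized algorithms.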

INRIA (France’s Institute for Research in Computer Science and Automation) researcher Dr. Antoine Liutkus provided additional context on the integration of LightOn’s co-processor in the Jean Zay supercomputer: “Our research is focused today on the question of large-scale learning. Integrating an OPU in one of the most powerful nodes of Jean Zay will give us the keys to carry out this research, and will allow us to go beyond a simple ‘proof of concept.’”

Agility Robotics’ Cassie just became the first bipedal robot to complete an outdoor 5K run — and it did so untethered and on a single charge.

The challenge: To create robots that can seamlessly integrate into our world, it makes sense to design those robots to walk like we do. That should make it easier for them to navigate our homes and workplaces.

But bipedal robots are inherently less stable than bots with three or more legs, so creating one that can walk steadily, let alone run or climb stairs, has been a major challenge. AI is helping researchers solve it.

KEAR (Knowledgeable External Attention for commonsense Reasoning), along with recent milestones in computer vision and neural text-to-speech, is part of a larger Azure AI mission to provide relevant, meaningful AI solutions and services that work better for people because they better capture how people learn and work, with improved vision, knowledge understanding, and speech capabilities. At the center of these efforts is XYZ-code, a joint representation of three cognitive attributes: monolingual text (X), audio or visual sensory signals (Y), and multilingual (Z). For more information about these efforts, read the XYZ-code blog post.

Last month, our Azure Cognitive Services team, comprising researchers and engineers with expertise in AI, achieved a groundbreaking milestone by advancing commonsense language understanding. When given a question that requires drawing on prior knowledge and five answer choices, our latest model, KEAR (Knowledgeable External Attention for commonsense Reasoning), performs better than people answering the same question, with human performance calculated as the majority vote among five individuals. KEAR reaches an accuracy of 89.4 percent on the CommonsenseQA leaderboard, compared with 88.9 percent human accuracy. While the CommonsenseQA benchmark is in English, we followed a similar technique for multilingual commonsense reasoning and topped the X-CSR leaderboard.

Although recent large deep learning models trained with big data have made significant breakthroughs in natural language understanding, they still struggle with commonsense knowledge about the world, information that we, as people, have gathered in our day-to-day lives over time. Commonsense knowledge is often absent from task input but is crucial for language understanding. For example, take the question “What is a treat that your dog will enjoy?” To select an answer from the choices salad, petted, affection, bone, and lots of attention, we need to know that dogs generally enjoy food such as bones for a treat. Thus, the best answer would be “bone.” Without this external knowledge, even large-scale models may generate incorrect answers. For example, the DeBERTa language model selects “lots of attention,” which is not as good an answer as “bone.”
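The external-knowledge idea behind the dog-treat example can be shown with a deliberately tiny sketch: retrieve a relevant commonsense fact and let the scorer see it alongside the question. The one-entry knowledge base and the word-overlap “model” below are stand-ins invented for this illustration; KEAR itself retrieves from much larger sources and scores choices with a large transformer.

```python
# Toy illustration of adding external knowledge before answering a
# commonsense question. The knowledge base and the crude overlap scorer
# are invented stand-ins, not KEAR's actual retrieval or model.
KNOWLEDGE = {
    "dog": "dogs enjoy chewing on a bone as a treat",
}


def retrieve(question: str) -> str:
    """Return the first stored fact whose key appears in the question."""
    for key, fact in KNOWLEDGE.items():
        if key in question.lower():
            return fact
    return ""


def score(context: str, choice: str) -> int:
    """Crude scorer: count words shared between the context and a choice."""
    return len(set(context.lower().split()) & set(choice.lower().split()))


def answer(question: str, choices: list[str], use_knowledge: bool) -> str:
    context = question + (" " + retrieve(question) if use_knowledge else "")
    return max(choices, key=lambda c: score(context, c))


q = "What is a treat that your dog will enjoy?"
choices = ["salad", "petted", "affection", "bone", "lots of attention"]
print(answer(q, choices, use_knowledge=True))  # bone
```

Without the retrieved fact, no choice overlaps the question at all and the toy scorer has no basis to prefer “bone”; with it, the external knowledge supplies exactly the missing link the paragraph above describes.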

The contemporaneous development in recent years of deep neural networks, hardware accelerators with large memory capacity, and massive training datasets has advanced the state of the art on tasks in fields such as computer vision and natural language processing. Today’s deep learning (DL) systems, however, remain prone to issues such as poor robustness, an inability to adapt to novel task settings, and rigid, inflexible configuration assumptions. This has led researchers to explore the incorporation of ideas from collective intelligence observed in complex systems into DL methods, to produce models that are more robust and adaptable and have less rigid environmental assumptions.

In the new paper Collective Intelligence for Deep Learning: A Survey of Recent Developments, a Google Brain research team surveys historical and recent neural network research on complex systems and the incorporation of collective intelligence principles to advance the capabilities of deep neural networks.

Collective intelligence can manifest in complex systems as self-organization, emergent behaviours, swarm optimization, and cellular systems; and such self-organizing behaviours can also naturally arise in artificial neural networks. The paper identifies and explores four DL areas that show close connections with collective intelligence: image processing, deep reinforcement learning, multi-agent learning, and meta-learning.

A research team, led by Assistant Professor Desmond Loke from the Singapore University of Technology and Design (SUTD), has developed a new type of artificial synapse based on two-dimensional (2D) materials for highly scalable brain-inspired computing.

Brain-inspired computing, which mimics how the human brain functions, has drawn significant scientific attention because of its uses in artificial intelligence and its low energy consumption. For brain-inspired computing to work, synapses that remember the connections between two neurons are necessary.

In developing brains, synapses can be grouped into functional synapses, which are active, and silent synapses, which are inactive under normal conditions. When silent synapses are activated, they help optimize the connections between neurons. However, because conventional artificial synapses typically occupy large device areas, they usually face limitations in hardware efficiency and cost. Since the human brain contains about a hundred trillion synapses, the hardware cost must come down before such systems can be applied to smart portable devices and the Internet of Things (IoT).

Autonomous weapon systems—commonly known as killer robots—may have killed human beings for the first time ever last year, according to a recent United Nations Security Council report on the Libyan civil war. History could well identify this as the starting point of the next major arms race, and potentially humanity’s last.

The United Nations Convention on Certain Conventional Weapons debated the question of banning autonomous weapons at its once-every-five-years review meeting in Geneva Dec. 13–17, 2021, but didn’t reach consensus on a ban. Established in 1983, the convention has been updated regularly to restrict some of the world’s cruelest conventional weapons, including land mines, booby traps and incendiary weapons.

Autonomous weapon systems are robots with lethal weapons that can operate independently, selecting and attacking targets without a human weighing in on those decisions. Militaries around the world are investing heavily in autonomous weapons research and development. The U.S. alone budgeted US$18 billion for autonomous weapons between 2016 and 2020.

Some of the best circuits to drive AI in the future may be analog, not digital, and research teams around the world are increasingly developing new devices to support such analog AI.

The most basic computation in the deep neural networks driving the current explosion in AI is the multiply-accumulate (MAC) operation. Deep neural networks are composed of layers of artificial neurons; in a MAC operation, the output of each neuron in one layer is multiplied by the strength, or “weight,” of its connection to a neuron in the next layer, which then sums up all of these contributions.
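The operation described above can be written out directly: a layer’s forward pass is just many multiply-accumulates, which collapse into a single matrix-vector product. A minimal NumPy sketch, with illustrative layer sizes:

```python
import numpy as np

rng = np.random.default_rng(1)

x = rng.standard_normal(4)       # outputs of the previous layer (4 neurons)
W = rng.standard_normal((3, 4))  # connection weights into 3 next-layer neurons

# Each next-layer neuron performs a multiply-accumulate (MAC):
# multiply every incoming activation by its weight, then sum.
y_manual = np.array([sum(W[i, j] * x[j] for j in range(4)) for i in range(3)])

# The same computation expressed as one matrix-vector product.
y = W @ x
print(np.allclose(y, y_manual))  # True
```

A real network repeats this billions of times per inference, which is why the energy cost of a single MAC dominates the hardware discussion that follows.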

Modern computers have digital components devoted to MAC operations, but analog circuits theoretically can perform these computations for orders of magnitude less energy. This strategy—known as analog AI, compute-in-memory or processing-in-memory—often performs these multiply-accumulate operations using non-volatile memory devices such as flash, magnetoresistive RAM (MRAM), resistive RAM (RRAM), phase-change memory (PCM) and even more esoteric technologies.
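In a memory crossbar, the multiply-accumulate happens physically: weights are stored as device conductances, inputs are applied as voltages, Ohm’s law performs each multiply (I = G·V per device), and Kirchhoff’s current law performs the accumulate as currents sum on each output line. The NumPy model below sketches that idea with a small read-noise term to reflect the analog nature; all sizes and noise levels are illustrative, not taken from any particular device.

```python
import numpy as np

rng = np.random.default_rng(2)

G = np.abs(rng.standard_normal((3, 4)))  # weights stored as conductances (S)
v = rng.standard_normal(4)               # inputs encoded as voltages (V)

# Ohm's law does the multiply (I = G * V per device); Kirchhoff's current
# law does the accumulate (currents on each output wire sum for free).
ideal_currents = G @ v

# Real analog devices are noisy; model a small additive read-noise term.
noisy_currents = ideal_currents + 0.01 * rng.standard_normal(3)
print(ideal_currents.shape)  # (3,)
```

The trade-off the article points at is visible even in this sketch: the analog array gets the entire matrix-vector product “for free” in the physics, but at the price of noise and limited precision that digital MAC units don’t have.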