
Huaqiangbei, the world’s largest electronics wholesale market area in the Chinese technology hub of Shenzhen, has become the latest Wonderland for geeks, the way Tokyo’s Akihabara was to otaku during the tech bubble at the turn of the millennium. Amid the warren of closet-sized shops and makeshift stalls, the latest catalogue of smartphones, LED lights, holograms, electronic parts and every type of gadget imaginable competes for attention and the spending yuan of consumers.


Shenzhen has become an international hotspot for the unmanned aerial vehicle industry, following the global success of drone giant DJI.

The company, Ghost Robotics, is working on a militarized version of Boston Dynamics’ Spot. It looks like a straight copy. Sadly, there is no Facebook page, and I didn’t see any other robots there besides this one. I think a humanoid robot competition would be helpful.



While a Mars rover can explore where no person has gone before, a smaller robot at the University of the Sunshine Coast in Australia could climb to new heights by mimicking the movements of a lizard.

Simply named X-4, the university’s climbing robot has allowed a team of researchers to test and replicate how a lizard moves, in the hope that their findings will inspire next-generation robotics design for disaster relief, remote surveillance and possibly even space exploration.

In a paper published today in Proceedings of the Royal Society B, the team states that lizards have optimized their movement across difficult terrain over many years of evolution.

Robot swarms have, to date, been constructed from artificial materials. Motile biological constructs have been created from muscle cells grown on precisely shaped scaffolds. However, the exploitation of emergent self-organization and functional plasticity into a self-directed living machine has remained a major challenge. We report here a method for generation of in vitro biological robots from frog (Xenopus laevis) cells. These xenobots exhibit coordinated locomotion via cilia present on their surface. These cilia arise through normal tissue patterning and do not require complicated construction methods or genomic editing, making production amenable to high-throughput projects.

A team of researchers from the Harbin Institute of Technology along with partners at the First Affiliated Hospital of Harbin Medical University, both in China, has developed a tiny robot that can ferry cancer drugs through the blood-brain barrier (BBB) without setting off an immune reaction. In their paper published in the journal Science Robotics, the group describes their robot and tests with mice. Junsun Hwang and Hongsoo Choi, with the Daegu Gyeongbuk Institute of Science and Technology in Korea, have published a Focus piece in the same journal issue on the work done by the team in China.

For many years, medical scientists have sought ways to deliver drugs to the brain to treat health conditions such as brain cancers. Because the brain is protected by the skull, injecting drugs into it directly is extremely difficult. Researchers have also been stymied in their efforts by the BBB, a filtering mechanism in the capillaries that supply blood to the brain, which blocks foreign substances from entering. Thus, simply injecting drugs into the bloodstream is not an option. In this new effort, the researchers used a defense cell type that naturally passes through the BBB to carry drugs to the brain.

To build their tiny robots, the researchers exposed groups of white blood cells called neutrophils to tiny bits of magnetic nanogel particles coated with fragments of E. coli material. Upon exposure, the neutrophils naturally engulfed the particles, treating them as nothing but E. coli bacteria. The microrobots were then injected into the bloodstream of a test mouse with a cancerous tumor. The team then applied a magnetic field to the robots to direct them through the BBB and into the brain and the tumor; the robots were not attacked along the way, because the immune system identified them as normal neutrophils. Once there, the robots released their cancer-fighting drugs.

Michael I. Jordan explains why today’s artificial-intelligence systems aren’t actually intelligent.


Artificial-intelligence systems are nowhere near advanced enough to replace humans in many tasks involving reasoning, real-world knowledge, and social interaction. They are showing human-level competence in low-level pattern recognition skills, but at the cognitive level they are merely imitating human intelligence, not engaging deeply and creatively, says Michael I. Jordan, a leading researcher in AI and machine learning. Jordan is a professor in the department of electrical engineering and computer science, and the department of statistics, at the University of California, Berkeley.

He notes that the imitation of human thinking is not the sole goal of machine learning—the engineering field that underlies recent progress in AI—or even the best goal. Instead, machine learning can serve to augment human intelligence, via painstaking analysis of large data sets in much the way that a search engine augments human knowledge by organizing the Web. Machine learning also can provide new services to humans in domains such as health care, commerce, and transportation, by bringing together information found in multiple data sets, finding patterns, and proposing new courses of action.

“People are getting confused about the meaning of AI in discussions of technology trends—that there is some kind of intelligent thought in computers that is responsible for the progress and which is competing with humans,” he says. “We don’t have that, but people are talking as if we do.”

Colonel Mark M. Zais, chief data scientist at United States Special Operations Command (USSOCOM), stresses the importance of AI-related education in the DOD. In his 2020 CJCS Strategic Essay Competition first-place Strategy Article, he says, “Without that education, we face a world where senior leaders use AI-enabled technologies to make decisions related to national security without a full grasp of the tools that they—and our adversaries—possess.”


With the release of its first artificial intelligence (AI) strategy in 2019, the Department of Defense (DOD) formalized the increased use of AI technology throughout the military, challenging senior leaders to create “organizational AI strategies” and “make related resource allocation decisions.”1 Unfortunately, most senior leaders currently have limited familiarity with AI, having developed their skills in tactical counterinsurgency environments, which reward strength (physical and mental), perseverance, and diligence. Some defense scholars have advocated a smarter military, emphasizing intellectual human capital and arguing that cognitive ability will determine success in strategy development, statesmanship, and decisionmaking.2 AI might complement that ability but cannot be a substitute for it. Military leaders must leverage AI to help them adapt and be curious. As innovative technologies with AI applications increasingly become integral to DOD modernization and near-peer competition, senior leaders’ knowledge of AI is critical for shaping and applying our AI strategy and creating properly calibrated expectations.

War is about decisionmaking, and AI enables the technology that will transform how humans and machines make those decisions.3 Successful use of this general-purpose technology will require senior leaders who truly understand its capabilities and can demystify the hyperbole.4 Within current AI strategy development and application, many practitioners have a palpable sense of dread as we crest the waves of a second AI hype cycle, seemingly captained by novices of the AI seas.5 In-house technical experts find it difficult to manage expectations and influence priorities, clouded by buzzwords and stifled by ambitions for “quick wins.” The importance of AI-related education increases with AI aspirations and the illusion of progress. Without that education, we face a world where senior leaders use AI-enabled technologies to make decisions related to national security without a full grasp of the tools that they—and our adversaries—possess. This would be equivalent to a combat arms officer making strategic military landpower decisions without the foundations of military education in maneuver warfare and practical experience.

Strategic decisionmaking in a transformative digital environment requires comparably transformative leadership. Modernization of the military workforce should parallel modernization of equipment and technology. In the short term, senior leaders require executive AI education that equips them with enough knowledge to distill problems that need AI solutions and that provides informed guidance for customized solutions. With the ability to trust internal expertise, the military can avoid overreliance on consultants and vendors, following Niccolò Machiavelli’s warning against dependence on auxiliary troops.6 In the long term, military education should give the same attention to AI that is provided to traditional subjects such as maneuver warfare and counterinsurgency operations. Each steppingstone of military education should incorporate subjects from the strategic domain, including maneuver warfare, information warfare, and artificial intelligence.

Pugs, Ferraris, mountains, brunches, beaches, and babies — Instagram is full of them. In fact, it has become one of the largest image databases on the planet over the last decade, and the company’s owner, Facebook, is using this treasure trove to teach machines what’s in a photo.

Facebook announced on Thursday that it had built an artificial intelligence program that can “see” what it is looking at. It did this by feeding it over 1 billion public images from Instagram.

The “computer vision” program, nicknamed SEER, outperformed existing AI models in an object recognition test, Facebook said.
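The technique behind this is self-supervised learning: the model learns visual features from unlabeled photos, with no human annotations (SEER reportedly pairs a SwAV-style clustering objective with large RegNet backbones). As a rough illustration of the general idea only, not Facebook’s implementation, the NumPy sketch below computes an InfoNCE-style contrastive loss: embeddings of two augmented views of the same image should agree, while mismatched pairs should not.

```python
import numpy as np

def l2_normalize(x, axis=-1):
    """Scale each embedding to unit length so dot products are cosines."""
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def info_nce_loss(z1, z2, temperature=0.1):
    """Contrastive loss over a batch. z1, z2: (batch, dim) embeddings of
    two augmented views of the same images; row i of z1 should match
    row i of z2, and every other row acts as a negative example."""
    z1, z2 = l2_normalize(z1), l2_normalize(z2)
    logits = z1 @ z2.T / temperature  # pairwise similarity matrix
    # Log-softmax over each row; the correct pairing is the diagonal.
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    idx = np.arange(len(z1))
    return -log_probs[idx, idx].mean()

# Toy stand-in for "two views of the same image": the second view is
# the first plus a little noise (a real system would use crops, flips,
# color jitter, etc., and a deep network to produce the embeddings).
rng = np.random.default_rng(0)
base = rng.normal(size=(8, 16))
noisy = base + 0.01 * rng.normal(size=base.shape)

aligned = info_nce_loss(base, noisy)            # correctly paired views
shuffled = info_nce_loss(base, noisy[::-1].copy())  # deliberately mismatched
print(aligned < shuffled)  # matched views should score a lower loss
```

Training on the contrastive objective pushes embeddings of views of the same image together and different images apart, which is how features useful for downstream object recognition can emerge without labels.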