
In my 2015 exploration with General John R. Allen on the concept of Hyperwar, we recognized the potential of artificial intelligence to unalterably change the field of battle. Chief among the examples of autonomous systems were drone swarms, which are both a significant threat and a critical military capability. Today, Hyperwar seems to be the operative paradigm accepted by militaries the world over as a de facto reality. Indeed, the observe-orient-decide-act (OODA) loop is collapsing. Greater autonomy is being imbued in all manner of weapon systems and sensors. Work is ongoing to develop systems that further decrease reaction times and increase the mass of autonomous systems employed in conflict. This trend is highlighted potently by the U.S. Replicator initiative and China’s swift advancements in automated manufacturing and missile technologies.

The U.S. Replicator Initiative: A Commitment to Autonomous Warfare?

The Pentagon’s “Replicator” initiative is a strategic move to counter adversaries like China by rapidly producing “attritable autonomous systems” across multiple domains. Deputy Secretary of Defense Kathleen Hicks emphasized the need for platforms that are “small, smart, cheap, and many,” planning to produce thousands of such systems within 18 to 24 months. The Department of Defense, under this initiative, is developing smaller, more intelligent, and cost-effective platforms, a move that aligns with the creation of a Hyperwar environment.

Google’s company-defining effort to catch up to ChatGPT creator OpenAI is turning out to be harder than expected.

Google representatives earlier this year told some cloud customers and business partners they would get access to the company’s new conversational AI, a large language model known as Gemini, by November. But the company recently told them not to expect it until the first quarter of next year, according to two people with direct knowledge. The delay comes at a bad time for Google, whose cloud sales growth has slowed while that of its bigger rival, Microsoft, has accelerated. Part of Microsoft’s success has come from selling OpenAI’s technology to its customers.

To teach an AI agent a new task, like how to open a kitchen cabinet, researchers often use reinforcement learning—a trial-and-error process where the agent is rewarded for taking actions that get it closer to the goal.

In many instances, a human expert must carefully design a reward function, which is an incentive mechanism that gives the agent motivation to explore. The human expert must iteratively update that reward function as the agent explores and tries different actions. This can be time-consuming, inefficient, and difficult to scale up, especially when the task is complex and involves many steps.
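To make the expert-designed-reward problem concrete, here is a minimal sketch of a hand-shaped reward function for the cabinet-opening example above. All names (the state fields, the `CABINET_OPEN_ANGLE` threshold, the term weights) are hypothetical illustrations, not taken from any particular framework; the point is that every term must be chosen and tuned by a human.

```python
# Hypothetical hand-designed reward for a cabinet-opening task.
# Each term below is something an expert must choose and iteratively tune.

CABINET_OPEN_ANGLE = 1.2  # radians at which the door counts as "open" (assumed)

def reward(state):
    """Dense, shaped reward from a dict of (assumed) state readings."""
    # Term 1: encourage moving the gripper toward the handle.
    reach = -state["gripper_to_handle_dist"]
    # Term 2: partial credit proportional to how far the door has opened.
    opening = state["door_angle"] / CABINET_OPEN_ANGLE
    # Term 3: large bonus once the door is fully open.
    bonus = 10.0 if state["door_angle"] >= CABINET_OPEN_ANGLE else 0.0
    return reach + opening + bonus
```

If the agent learns to exploit one term (for example, hovering at the handle without pulling), the expert has to rebalance the weights and try again, which is exactly the iteration loop the passage describes.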

Researchers from MIT, Harvard University, and the University of Washington have developed a new reinforcement learning approach that doesn’t rely on an expertly designed reward function. Instead, it leverages crowdsourced feedback, gathered from many non-expert users, to guide the agent as it learns to reach its goal. The work has been published on the pre-print server arXiv.
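The general idea of replacing an expert reward with crowd signals can be sketched as follows. This is an illustration of the approach in broad strokes, not the MIT/Harvard/UW team's actual algorithm: here many non-experts cast noisy binary votes on whether a resulting state looks closer to the goal, and the average vote serves as the reward signal.

```python
# Illustrative sketch: noisy non-expert votes replace an expert reward.
# The vote format (0/1 per annotator) and the greedy selection rule are
# assumptions for this example, not the published method.

from statistics import mean

def crowd_reward(votes):
    """Average many noisy binary labels into a scalar reward in [0, 1]."""
    return mean(votes)

def choose_action(candidate_actions, feedback):
    """Greedy step: pick the action whose outcome the crowd rated highest.

    `feedback` maps each action to a list of 0/1 votes on its result.
    """
    return max(candidate_actions, key=lambda a: crowd_reward(feedback[a]))
```

Because the signal is an average over many annotators, no single non-expert needs to be reliable, which is what makes the approach easier to scale than expert reward design.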

A team of AI researchers from EquiLibre Technologies, Sony AI, Amii and Midjourney, working with Google’s DeepMind project, has developed an AI system called Student of Games (SoG) that is capable of both beating humans at a variety of games and learning to play new ones. In their paper published in the journal Science Advances, the group describes the new system and its capabilities.

Over the past half-century, scientists and engineers have developed the idea of machine learning and artificial intelligence, in which human-generated data is used to train computer systems. The technology has applications in a variety of scenarios, one of which is playing board and/or parlor games.

Teaching a computer to play a game and then improving its capabilities to the degree that it can beat humans has become a milestone of sorts, demonstrating how far artificial intelligence has developed. In this new study, the research team has taken another step toward artificial general intelligence—in which a computer can carry out tasks at levels deemed superhuman.

As an optimist, I believe it will be a catalyst for changes that will help all of us to learn faster and achieve more of our potential.

Focus on the data

Where do I start? Let me start by noting that, in all the conversations about artificial intelligence, very few people are talking about the data. Most people don’t recognize that AI is actually extremely stupid without data. Data is the fuel that shapes the intelligence of AI. Everyone seems to assume that more and more data will be available as AI evolves. But is that assumption valid?


A new artificial intelligence benchmark called GAIA aims to evaluate whether chatbots like ChatGPT can demonstrate human-like reasoning and competence on everyday tasks.

Created by researchers from Meta, Hugging Face, AutoGPT and GenAI, the benchmark “proposes real-world questions that require a set of fundamental abilities such as reasoning, multi-modality handling, web browsing, and generally tool-use proficiency,” the researchers wrote in a paper published on arXiv.
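Since GAIA questions are designed to have a single unambiguous answer, scoring a model against the benchmark can reduce to string matching after light normalization. The sketch below illustrates that style of evaluation; the normalization rules here are assumptions for the example, not the benchmark's official scoring code.

```python
# Illustrative GAIA-style scorer: exact match after simple normalization.
# The specific normalization steps are assumed for this sketch.

def normalize(answer: str) -> str:
    """Lowercase, trim whitespace, and drop a trailing period."""
    return answer.strip().lower().rstrip(".")

def score(predictions, references):
    """Fraction of questions whose predicted answer matches the reference."""
    correct = sum(
        normalize(p) == normalize(r)
        for p, r in zip(predictions, references)
    )
    return correct / len(references)
```

The simplicity of the scorer is the point: the difficulty of GAIA lies in the multi-step reasoning and tool use needed to produce the answer, not in judging it.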

An important AI report for breast cancer, with the potential of sparing chemotherapy for many patients: the first comprehensive analysis of both cancerous and non-cancerous tissue across hundreds of thousands of patient tissue samples, yielding the HiPS score. https://nature.com/articles/s41591-023-02643-7


Deep learning enables comprehensive and interpretable scoring for breast cancer prognosis prediction, outperforming pathologists in multicenter validation and providing insight on prognostic biomarkers.