
CSIS will host a public event on responsible AI in a global context, featuring a moderated discussion with Julie Sweet, Chair and CEO of Accenture, and Brad Smith, President and Vice Chair of the Microsoft Corporation, on the business perspective, followed by a conversation among a panel of experts on the best way forward for AI regulation. Dr. John J. Hamre, President and CEO of CSIS, will provide welcoming remarks.

Keynote Speakers:
Brad Smith, President and Vice Chair, Microsoft Corporation
Julie Sweet, Chair and Chief Executive Officer, Accenture

Featured Speakers:
Gregory C. Allen, Director, Project on AI Governance and Senior Fellow, Strategic Technologies Program, CSIS
Mignon Clyburn, Former Commissioner, U.S. Federal Communications Commission
Karine Perset, Head of AI Unit and OECD.AI, Digital Economy Policy Division, Organisation for Economic Co-operation and Development (OECD)
Helen Toner, Director of Strategy, Center for Security and Emerging Technology, Georgetown University

This event is made possible through general support to CSIS.

There are ways around this, but they lack the exciting scalability story and, worse, they rely on a decidedly non-tech crutch: human input. Smaller language models fine-tuned on actual human-written answers ultimately generate less biased text than a much larger, more powerful system.

And further complicating matters, models like OpenAI's GPT-3 don't always generate text that's particularly useful, because they're trained to basically "autocomplete" sentences based on a huge trove of text scraped from the internet. They have no knowledge of what a user is asking them to do or what responses that user is looking for. "In other words, these models aren't aligned with their users," OpenAI said.
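To see what that "autocomplete" objective looks like in practice, here is a minimal sketch using the open-source Hugging Face transformers library, with the small, publicly downloadable GPT-2 standing in for GPT-3 (whose weights are not public). The prompt is our own illustrative example:

```python
# A base (non-instruction-tuned) causal language model just continues text;
# it has no notion of "answering" the user. GPT-2 stands in for GPT-3 here,
# since GPT-3's weights are not publicly available.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Explain the moon landing to a six-year-old."
inputs = tokenizer(prompt, return_tensors="pt")

# Greedy decoding: at each step the model simply appends the most likely
# next token given the patterns in its scraped training text.
output = model.generate(**inputs, max_new_tokens=40, do_sample=False)
print(tokenizer.decode(output[0], skip_special_tokens=True))
# The model tends to continue the *pattern* (say, more instructions or a list)
# rather than actually produce the explanation the user wanted.
```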

One test of this idea is to see what happens with pared-down models and a little human input to keep those trimmed neural networks more…humane. This is exactly what OpenAI recently did with GPT-3 when it hired 40 contractors to help steer the model's behavior.
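OpenAI has not released that pipeline as runnable code, but its first stage, supervised fine-tuning on human-written demonstrations, resembles ordinary causal-language-model fine-tuning. The sketch below is a toy approximation under that assumption: the two demonstration pairs are invented placeholders, and GPT-2 again stands in for the real model. (The full recipe goes further, training a reward model on human preferences and optimizing against it with reinforcement learning.)

```python
# Toy sketch of the supervised fine-tuning stage: train a small causal LM on
# human-written (prompt, answer) demonstrations. The demonstrations are
# hypothetical placeholders, and GPT-2 stands in for the actual model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained("gpt2")

demos = [
    ("Explain the moon landing to a six-year-old.",
     "Some astronauts flew a rocket to the moon, walked on it, and came home."),
    ("Why is the sky blue?",
     "Air scatters blue light from the sun more than other colors."),
]

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
model.train()
for prompt, answer in demos:
    batch = tokenizer(prompt + "\n" + answer + tokenizer.eos_token,
                      return_tensors="pt")
    # Labels = inputs: the model learns to reproduce the human-written answer.
    # (The real setup masks the loss so only the answer tokens are trained on.)
    loss = model(**batch, labels=batch["input_ids"]).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```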

Our next challenge, then, was to determine the evolutionary connections between these genes. The more similar two genes were, the more likely it was that the viruses carrying them were closely related. Because these sequences had evolved so long ago (possibly predating the first cell), the genetic signposts indicating where new viruses may have split off from a common ancestor had been lost to time. A form of artificial intelligence called machine learning, however, allowed us to organize these sequences systematically and detect differences more objectively than if the task were done manually.
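The published pipeline is considerably more sophisticated, but the core idea, turning each sequence into numbers so an algorithm can group them consistently, can be sketched in a few lines. The fragment below uses k-mer frequency profiles and hierarchical clustering; the sequences and the choice of k are illustrative placeholders, not the study's actual method:

```python
# Toy sketch: represent each gene sequence as a k-mer frequency vector and
# cluster the vectors, so relatedness is judged numerically rather than by eye.
from itertools import product

import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

def kmer_profile(seq, k=3):
    """Count every overlapping k-mer and return a normalized frequency vector."""
    kmers = ["".join(p) for p in product("ACGU", repeat=k)]
    index = {kmer: i for i, kmer in enumerate(kmers)}
    vec = np.zeros(len(kmers))
    for i in range(len(seq) - k + 1):
        if seq[i:i + k] in index:
            vec[index[seq[i:i + k]]] += 1
    return vec / max(vec.sum(), 1)

# Placeholder RNA fragments, not real viral genes.
sequences = {
    "virus_a": "AUGGCUACGUUAGCCGUAAUGGCU",
    "virus_b": "AUGGCUACGUUAGCUGUAAUGGCA",
    "virus_c": "CCGAUUAGGCAUCCGAUUAGGCAU",
}

profiles = np.array([kmer_profile(s) for s in sequences.values()])
# Average-linkage hierarchical clustering on distances between the profiles.
tree = linkage(profiles, method="average")
labels = fcluster(tree, t=2, criterion="maxclust")
print(dict(zip(sequences, labels)))  # e.g. {'virus_a': 1, 'virus_b': 1, 'virus_c': 2}
```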

We identified a total of 5,504 new marine RNA viruses and doubled the number of known RNA virus phyla from five to 10. Mapping these new sequences geographically revealed that two of the new phyla were particularly abundant across vast oceanic regions, with regional preferences in either temperate and tropical waters (the Taraviricota, named after the Tara Oceans expeditions) or the Arctic Ocean (the Arctiviricota).

MIT has launched a new academia-industry partnership called the AI Hardware Program, which aims to boost AI hardware research and development.

“A sharp focus on AI hardware manufacturing, research, and design is critical to meet the demands of the world’s evolving devices, architectures, and systems,” says Anantha Chandrakasan, dean of the MIT School of Engineering, and Vannevar Bush Professor of Electrical Engineering and Computer Science.

Microsoft and Hewlett Packard Enterprise (HPE) are working with NASA scientists to develop an AI system for inspecting astronauts' gloves.

Space is an unforgiving environment and equipment failures can be catastrophic. Gloves are particularly prone to wear and tear as they're used for just about everything, including repairing and installing equipment.

Currently, astronauts send images of their gloves back to Earth, where NASA analysts examine them manually.
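Details of the new system aren't spelled out here, but one plausible framing of such an inspection tool is binary image classification ("damaged" vs. "intact") on top of a pretrained vision backbone. The following is a hypothetical PyTorch sketch of that general approach, not the Microsoft/HPE/NASA implementation; it assumes the classifier head has already been fine-tuned on labeled glove photos:

```python
# Hypothetical sketch: frame glove inspection as binary image classification
# with a pretrained backbone. An illustration of the general approach only.
import torch
from PIL import Image
from torch import nn
from torchvision import models, transforms

# Reuse an ImageNet-pretrained ResNet and swap its head for two classes.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)  # 0 = intact, 1 = damaged
model.eval()  # assumes the head was already fine-tuned on labeled glove photos

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def inspect(image_path: str) -> str:
    """Return a damage verdict for a single glove photo."""
    image = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        probs = model(image).softmax(dim=1)[0]
    if probs[1] > 0.5:
        return f"damaged (p={probs[1]:.2f})"
    return f"intact (p={probs[0]:.2f})"
```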

The TechCrunch Global Affairs Project examines the increasingly intertwined relationship between the tech sector and global politics.

Geopolitical actors have always used technology to further their goals. Unlike other technologies, artificial intelligence (AI) is far more than a mere tool. We do not want to anthropomorphize AI or suggest that it has intentions of its own. It is not — yet — a moral agent. But it is fast becoming a primary determinant of our collective destiny. We believe that because of AI’s unique characteristics — and its impact on other fields, from biotechnologies to nanotechnologies — it is already threatening the foundations of global peace and security.

The rapid rate of AI development, paired with the breadth of new applications (the global AI market size is expected to grow more than ninefold from 2020 to 2028), means AI systems are being widely deployed without sufficient legal oversight or full consideration of their ethical impacts. This gap, often referred to as the "pacing problem," has left legislatures and executive branches simply unable to cope.