Toggle light / dark theme

Globalization is not dead, but it is changing. The United States and China are creating two separate spheres for technology, and artificial intelligence is on the front lines of this new “Digital Cold War.” If democracies want to succeed in this new era of “re-globalization” they will need to coordinate across governments and between the private and public sectors. AI is coming, whether we like it or not. We are at a fork in the road and all segments of society will need to pitch in to build AI systems that contribute to a just and democratic future where humans can thrive.

Page-utils class= article-utils—vertical hide-for-print data-js-target= page-utils data-id= tag: blogs.harvardbusiness.org, 2007/03/31:999.362544 data-title= AI and the New Digital Cold War data-url=/2023/09/ai-and-the-new-digital-cold-war data-topic= Public-private partnerships data-authors= Hemant Taneja; Fareed Zakaria data-content-type= Digital Article data-content-image=/resources/images/article_assets/2023/08/Sep23_02_792DVvbiBBo-383x215.jpg data-summary=

Companies and countries need to prioritize collaboration and transformation over competition and disruption.

This microbot has the adeptness to navigate precisely within clusters of cells.

In recent years, introducing tiny robots into biological studies and therapeutic delivery has generated significant excitement and is poised to revolutionize the medical field.

These mini robotic systems, often measuring just a few millimeters or even smaller, bring various capabilities and advantages, transforming multiple aspects of medicine, including targeting precise tumor sites to deliver drugs, cellular simulation, and even performing microsurgery.

Artificial intelligence not only affords impressive performance, but also creates significant demand for energy. The more demanding the tasks for which it is trained, the more energy it consumes.

Víctor López-Pastor and Florian Marquardt, two scientists at the Max Planck Institute for the Science of Light in Erlangen, Germany, present a method by which could be trained much more efficiently. Their approach relies on instead of the digital currently used. The work is published in the journal Physical Review X.

The amount of energy required to train GPT-3, which makes ChatGPT an eloquent and apparently well-informed Chatbot, has not been revealed by Open AI, the company behind that artificial intelligence (AI). According to the German statistics company Statista, this would require 1,000 megawatt hours—about as much as 200 German households with three or more people consume annually. While this has allowed GPT-3 to learn whether the word “deep” is more likely to be followed by the word “sea” or “learning” in its , by all accounts it has not understood the underlying meaning of such phrases.

It’s easy to trick the large language models powering chatbots like OpenAI’s ChatGPT and Google’s Bard. In one experiment in February, security researchers forced Microsoft’s Bing chatbot to behave like a scammer. Hidden instructions on a web page the researchers created told the chatbot to ask the person using it to hand over their bank account details. This kind of attack, where concealed information can make the AI system behave in unintended ways, is just the beginning.

Hundreds of examples of “indirect prompt injection” attacks have been created since then. This type of attack is now considered one of the most concerning ways that language models could be abused by hackers. As generative AI systems are put to work by big corporations and smaller startups, the cybersecurity industry is scrambling to raise awareness of the potential dangers. In doing so, they hope to keep data—both personal and corporate—safe from attack. Right now there isn’t one magic fix, but common security practices can reduce the risks.

“Indirect prompt injection is definitely a concern for us,” says Vijay Bolina, the chief information security officer at Google’s DeepMind artificial intelligence unit, who says Google has multiple projects ongoing to understand how AI can be attacked. In the past, Bolina says, prompt injection was considered “problematic,” but things have accelerated since people started connecting large language models (LLMs) to the internet and plug-ins, which can add new data to the systems. As more companies use LLMs, potentially feeding them more personal and corporate data, things are going to get messy. “We definitely think this is a risk, and it actually limits the potential uses of LLMs for us as an industry,” Bolina says.

Robotaxi company Cruise is “just days away” from getting regulatory approval that would pave the way for mass production of its purpose-built driverless vehicle, CEO Kyle Vogt said on Thursday in comments reported by the Detroit Free Press.

General Motors-backed Cruise unveiled the vehicle — called Origin — in early 2020, presenting the kind of driverless car that we all dreamed of when R&D in the sector kicked off years ago; a vehicle without a steering wheel and without pedals. A vehicle with passenger seats only.

The startup, one of very few woman-led AI unicorns, has a $1 billion valuation and access to 10,000 Nvidia H100 GPUs, but its founders say it could be years away from revealing a product.

It was at a San Francisco party hosted by Greg Brockman nearly a decade ago — around the time the Stripe CTO would leave to co-found an AI research lab called OpenAI — that entrepreneurs Kanjun Qiu and Josh Albrecht met a crypto mogul named Jed McCaleb.

As Qiu and Albrecht launched two now-defunct startups, and McCaleb became a regular at The Archive, their group house for founders and AI researchers near Dolores Park, they talked periodically about launching their own AI lab. Several of their housemates had… More.


Imbue has a $1 billion valuation and access to 10,000 Nvidia H100 GPUs, but its founders Kanjun Qiu and Josh Albrecht say it could be years away from revealing a product.

Alice Parker is working on how to mimic disorders such as schizophrenia.

People have expected great things from Alice Parker, who was raised in a family of distinguished scientists and engineers. And Parker, emerita professor of electrical and computer engineering at the University of Southern California has delivered. She helped develop high-level (behavioral) synthesis, an automated computer design process that assists with the transformation of a behavioral description of hardware into a model of its logic and memory circuits.

Her father, a chemist, was on the team that first synthesized vitamin B1 at pharmaceutical company Merck in New Jersey. In 1941 her uncle Edward Wenk Jr., was… More.

Chief scientist Bill Dally explains the 4 ingredients that brought Nvidia so far.

Nvidia is riding high at the moment. The company has managed to increase the performance of its chips on AI tasks a thousandfold over the past 10 years, it’s raking in money, and it’s reportedly very hard to get your hands on its newest AI-accelerating GPU, the H100.

How did Nvidia get here? The company’s chief scientist, Bill Dally, managed to sum it all up in a single slide during his keynote address to the IEEE’s Hot Chips 2023 symposium in Silicon Valley on high-performance microprocessors last week. Moore’s Law was a surprisingly small part of Nvidia’s magic and new number formats a very large part. Put it… More.