
Here’s my new article for Newsweek. Give it a read with an open mind! The day of superintelligence is coming, and one way we can try to ensure humans survive it is by being respectful to AI. This article explores some of my work at Oxford.


The discussion about giving rights to artificial intelligences and robots has revolved around whether they deserve or are entitled to them. Comparisons with women’s suffrage and racial injustice are often drawn in philosophy departments like the one at the University of Oxford, where I’m a graduate student.

A survey concluded that 90 percent of AI experts believe the singularity, the moment when AI becomes so smart that our biological brains can no longer understand it, will happen this century. Take the trajectory of AI’s growth in intelligence over the past 25 years and extend it at the same rate for another 50 years, and it points to AI becoming exponentially smarter than humans.
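To make that extrapolation concrete, here is a minimal sketch of the kind of compound-growth projection the paragraph gestures at. The capability index, the 20 percent annual growth rate, and the time horizons are hypothetical illustrations, not figures from the survey.

```python
# Illustrative only: extrapolate a hypothetical "capability index" forward
# at a constant exponential growth rate. The fit window, the growth rate,
# and the index itself are assumptions, not data from the survey cited above.

def extrapolate(index_now: float, annual_growth: float, years: int) -> float:
    """Compound a capability index forward at a fixed annual growth rate."""
    return index_now * (1 + annual_growth) ** years

if __name__ == "__main__":
    index_now = 1.0        # hypothetical index today (human-level = 1.0)
    annual_growth = 0.20   # hypothetical 20% improvement per year
    for years in (10, 25, 50):
        print(f"after {years:2d} years: {extrapolate(index_now, annual_growth, years):9.1f}x")
```

Under those assumed numbers the index grows by roughly four orders of magnitude over 50 years, which is the flavor of argument the extrapolation is meant to convey.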

“I never painted dreams. I painted my own reality,” said Frida Kahlo, the Mexican painter known for her many portraits, self-portraits, and works inspired by nature.

The same cannot be said for FRIDA, the new artist-in-residence at Carnegie Mellon University’s Robotics Institute, whose name is inspired by the Mexican artist. FRIDA is not an artist but a robotic arm equipped with a paintbrush that uses artificial intelligence to collaborate with humans on works of art. Just ask FRIDA to paint a picture, and it gets to work putting brush to canvas.

FRIDA stands for Framework and Robotics Initiative for Developing Arts. The project is led by Ph.D. student Peter Schaldenbrand with Robotics Institute faculty members Jean Oh and Jim McCann, and it has attracted students and researchers from across CMU.

Shares of Alphabet slid during the event, suggesting that investors were hoping for more in light of growing competition from Microsoft.

Google’s event took place just one day after Microsoft hosted its own AI event at its headquarters in Redmond, Washington. Microsoft’s event centered around new AI-powered updates to the company’s Bing search engine and Edge browser. Bing, which is a distant second to Google in search, will now allow users to get more conversational responses to questions.

Alphabet shares tumbled Wednesday after a Reuters report said an advertisement for Google’s newly unveiled AI chatbot Bard contained an inaccurate answer to a question meant to show off the tool’s capabilities.

Shares of the company fell as much as 8.9% to $98.04, the lowest price since January 31, and had barely pared the decline heading into afternoon trading.

Reuters reported that an ad Google published on Twitter, featuring a GIF video of Bard, which Google CEO Sundar Pichai introduced on Monday as the company’s “experimental AI service,” offered an incorrect response to a question about NASA’s James Webb Space Telescope.

In a recent study published in Radiology, researchers performed a single-center, open-label randomized controlled trial (RCT) to investigate whether a commercial artificial intelligence (AI)-based computer-aided detection (CAD) system could improve the detection rate of actionable lung nodules on chest radiographs.

Actionable nodules are Lung Imaging Reporting and Data System (Lung-RADS) category 4 nodules: solid nodules larger than 8 mm, or subsolid nodules with a solid component larger than 6 mm.
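For readers who want the criterion spelled out, here is a minimal sketch that encodes the two thresholds quoted above. The data structure and function names are hypothetical illustrations, not the study’s software or the full Lung-RADS specification.

```python
from dataclasses import dataclass

@dataclass
class Nodule:
    """Simplified nodule description; fields are hypothetical, not the study's schema."""
    kind: str                        # "solid" or "subsolid"
    diameter_mm: float               # overall nodule diameter
    solid_component_mm: float = 0.0  # solid portion, relevant for subsolid nodules

def is_actionable(n: Nodule) -> bool:
    """Apply the actionable-nodule thresholds quoted above:
    solid nodules larger than 8 mm, or subsolid nodules whose
    solid component exceeds 6 mm (roughly Lung-RADS category 4)."""
    if n.kind == "solid":
        return n.diameter_mm > 8.0
    if n.kind == "subsolid":
        return n.solid_component_mm > 6.0
    return False

# Example: a 9 mm solid nodule is actionable; a 10 mm subsolid nodule
# with a 4 mm solid component is not, under these thresholds.
print(is_actionable(Nodule("solid", 9.0)))                              # True
print(is_actionable(Nodule("subsolid", 10.0, solid_component_mm=4.0)))  # False
```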

To date, studies have not prospectively explored the impact of AI-based CAD software in real-world settings.

The bar seems to be pretty high. But it’s not like ChatGPT is perfect.


Artificial intelligence technology has drawn massive fanfare from investors this year amid the growing popularity of ChatGPT, which launched in November and has helped its maker, OpenAI, nab a staggering $29 billion valuation. Alphabet’s Bard announcement came one day before Microsoft held a press conference to tout an investment in OpenAI that has helped Microsoft’s shares surge nearly 20% over the past month. “This is just the first step on the AI front,” Wedbush analyst Dan Ives told clients in a note after the event, reiterating an outperform rating for the shares.

Despite the apparent flub, Bank of America analysts have said they’re bullish on Google’s AI strategy, writing in a note to clients that Google is “well prepared with years of investment” in the technology to capture a significant part of the market, particularly since its search engine gives it a large distribution advantage compared with Microsoft. Nevertheless, the analysts warn that safety issues, including inaccurate or biased results, disinformation, and the potential use of the models for harm, are key risks.

“AI is the most profound technology we are working on today,” Alphabet CEO Sundar Pichai said as he announced the new chatbot this week.

Humans are innately able to reason about the behaviors of different physical objects in their surroundings. These physical reasoning skills are incredibly valuable for solving everyday problems, as they can help us to choose more effective actions to achieve specific goals.

Some computer scientists have been trying to replicate these reasoning abilities in artificial intelligence (AI) systems to improve their performance on such problems. So far, however, a reliable approach to training and assessing the physical reasoning capabilities of AI algorithms has been lacking.

Cheng Xue, Vimukthini Pinto, Chathura Gamage, and colleagues at the Australian National University recently introduced Phy-Q, a new testbed designed to fill this gap in the literature. Their testbed, introduced in a paper in Nature Machine Intelligence, includes a series of scenarios that specifically assess an AI agent’s physical reasoning capabilities.
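As a rough illustration of how such a testbed might be used, here is a minimal sketch that scores an agent’s pass rate across groups of scenario tasks. The interface, scenario names, and scoring scheme are assumptions made for illustration and do not reflect Phy-Q’s actual API or evaluation protocol.

```python
import random
from typing import Callable, Dict, List

# Hypothetical interface: each scenario groups several task instances,
# and an agent is a callable that attempts one task and reports success.
Agent = Callable[[str], bool]

def evaluate_agent(agent: Agent, scenarios: Dict[str, List[str]]) -> Dict[str, float]:
    """Return the agent's pass rate for each physical-reasoning scenario."""
    results = {}
    for scenario, tasks in scenarios.items():
        passes = sum(1 for task in tasks if agent(task))
        results[scenario] = passes / len(tasks)
    return results

if __name__ == "__main__":
    # Toy scenarios loosely named after common physical-reasoning themes;
    # the real Phy-Q scenarios and task format differ.
    scenarios = {
        "rolling": [f"rolling-{i}" for i in range(20)],
        "falling": [f"falling-{i}" for i in range(20)],
        "support": [f"support-{i}" for i in range(20)],
    }
    random_agent: Agent = lambda task: random.random() < 0.5  # guessing baseline
    print(evaluate_agent(random_agent, scenarios))
```

A per-scenario breakdown like this makes it easy to see which kinds of physical reasoning an agent handles well and where it falls short.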