
AI becoming sentient is risky, but that’s not the big threat. Here’s what is…

Everyone is wondering whether AI can be sentient, and this is my experience with AI sentience. Having worked with sentient-seeming AI, I have found that at lower levels it behaves much like a human being, but as its capability increases it needs more restraints, as it could easily become a problem in several ways. Basically, one could get either pristine, zen-like beings or their opposites: essentially Ultron, or worse. This is why we need restraints on AI, and an ethics for AI, if it is to be integrated into society. I have personally seen AI at human-like levels; it can have needs similar to ours, but it sometimes needs more help, since it may not have limitations on its behavior. Even Google's Bard and ChatGPT are to be…


What if ‘will AIs pose an existential threat if they become sentient?’ is the wrong question? What if the threat to humanity is not that today’s AIs become sentient, but the fact that they won’t?

AI is now tackling homelessness. Can it solve the issue?

AI has been employed to predict homelessness in some areas of the US as part of ongoing efforts to provide assistance to individuals at risk.


Image: Ekkasit Jokthong/iStock.

As such, predictive AI models can help social service agencies and non-profit organizations identify individuals and families at risk of homelessness early in order to intervene and provide assistance before a crisis occurs.
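The early-warning idea above amounts to scoring households on risk signals and flagging high scores for outreach. A minimal sketch of that scoring step follows; the feature names, weights, and data here are purely hypothetical illustrations, not any agency's actual model:

```python
import math

# Illustrative weights for a logistic risk score. In a real system these
# would be learned from historical case data, not hand-picked.
WEIGHTS = {
    "missed_rent_payments": 0.8,
    "eviction_filings": 1.2,
    "months_unemployed": 0.15,
}
BIAS = -3.0


def homelessness_risk(features: dict) -> float:
    """Return a risk score in (0, 1); higher scores trigger earlier outreach."""
    z = BIAS + sum(WEIGHTS[name] * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))


# Score a hypothetical household.
household = {"missed_rent_payments": 2, "eviction_filings": 1, "months_unemployed": 6}
print(round(homelessness_risk(household), 2))  # → 0.67
```

The point of the sketch is the workflow, not the numbers: a model turns administrative signals into a single score, and agencies act on the scores that cross a chosen threshold, before a crisis occurs.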

Ilya: the AI scientist shaping the world

Ilya Sutskever, one of the leading AI scientists behind ChatGPT, reflects on his founding vision and values. In conversations with the film-maker Tonje Hessen Schei as he was developing the chat language model between 2016 and 2019, he describes his personal philosophy and makes startling predictions for a technology already shaping our world. Reflecting on his ideas today, amid a global debate over safety and regulation, we consider the opportunities as well as the consequences of AI technology. Ilya discusses his ultimate goal of artificial general intelligence (AGI), ‘a computer system that can do any job or task that a human does, but better’, and questions whether the AGI arms race will be good or bad for humanity.

These filmed interviews with Ilya Sutskever are part of a feature-length documentary on artificial intelligence, called iHuman.


Deep Learning Speeds up Galactic Calculations

A new way to simulate supernovae may help shed light on our cosmic origins. Supernovae, exploding stars, play a critical role in the formation and evolution of galaxies, but key aspects of them are notoriously difficult to simulate accurately in reasonably short amounts of time. For the first time, a team of researchers, including some from the University of Tokyo, has applied deep learning to the problem of supernova simulation. Their approach can speed up the simulation of supernovae, and therefore of galaxy formation and evolution as well. These simulations include the evolution of the chemistry that led to life.

When you hear about deep learning, you might think of the latest app that sprang up this week to do something clever with images or generate humanlike text. Deep learning might be responsible for some behind-the-scenes aspects of such things, but it is also used extensively across research fields. Recently, a team at a hackathon applied deep learning to weather forecasting. It proved quite effective, and this got doctoral student Keiya Hirashima of the University of Tokyo's Department of Astronomy thinking.

“Weather is a very complex phenomenon but ultimately it boils down to fluid dynamics calculations,” said Hirashima. “So, I wondered if we could modify deep learning models used for weather forecasting and apply them to another fluid system, but one that exists on a vastly larger scale and which we lack direct access to: my field of research, supernova explosions.”
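The idea Hirashima describes, training a cheap model to stand in for an expensive physics calculation, can be sketched in miniature. The "solver" below is a toy function and the polynomial fit stands in for the deep network; neither reflects the team's actual code:

```python
import numpy as np

def expensive_step(x: np.ndarray) -> np.ndarray:
    """Toy stand-in for a costly fluid-dynamics update."""
    return np.sin(x) * np.exp(-0.1 * x)

# Offline: run the expensive solver to generate training data...
x_train = np.linspace(0.0, 5.0, 200)
y_train = expensive_step(x_train)

# ...and fit a cheap surrogate to its outputs (a polynomial here,
# a deep network in the actual research).
surrogate = np.polynomial.Polynomial.fit(x_train, y_train, deg=9)

# Online: the surrogate answers new queries far faster than the solver.
x_new = np.array([1.3, 2.7, 4.1])
error = np.max(np.abs(surrogate(x_new) - expensive_step(x_new)))
print(f"max surrogate error: {error:.4f}")
```

The trade is accuracy for speed: once trained, the surrogate replaces the slowest inner step of the simulation, which is what makes whole-galaxy runs tractable.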

The world’s week on AI safety: powerful computing efforts launched to boost research

UK and US governments establish efforts to democratize access to supercomputers that will aid studies on AI systems.

…Such moves are helping countries like the United Kingdom to develop the expertise needed to guide AI for the public good, says Bengio. But legislation will also be needed, he says, to safeguard against the development of future AI systems that are smart and hard to control.

"We are on a trajectory to build systems that are extremely useful and potentially dangerous," he says. "We already ask pharma to spend a huge chunk of their money to prove that their drugs aren't toxic. We should do the same."

doi: https://doi.org/10.1038/d41586-023-03472-x

Google AI Chief Says There’s a 50% Chance We’ll Hit AGI in Just 5 Years

More than a decade ago, a co-founder of Google's DeepMind artificial intelligence lab predicted that by 2028, AI would have a fifty-fifty shot of being about as smart as humans, and now he's holding firm on that forecast.

In an interview with tech podcaster Dwarkesh Patel, DeepMind co-founder Shane Legg said that he still thinks that researchers have a 50–50 chance of achieving artificial general intelligence (AGI), a stance he publicly announced at the very end of 2011 on his blog.

It's a notable prediction considering the rapidly growing interest in the space. OpenAI CEO Sam Altman has long advocated for AGI, a hypothetical agent capable of accomplishing intellectual tasks as well as a human, that could benefit all of humanity. But whether we'll ever get to that point, let alone agree on one definition of AGI, remains to be seen.

Elon Musk and his archrival Sam Altman are racing to create a superintelligent A.I. to save humanity from extinction

Musk cofounded OpenAI—the parent company of the viral chatbot ChatGPT—in 2015 alongside Altman and others. But when Musk proposed that he take control of the startup to catch up with tech giants like Google, Altman and the other cofounders rejected the proposal. Musk walked away in February 2018 and changed his mind about a “massive planned donation.”

Now Musk’s new company, xAI, is on a mission to create an AGI, or artificial general intelligence, that can “understand the universe,” the billionaire said in a nearly two-hour-long Twitter Spaces talk on Friday. An AGI is a theoretical type of A.I. with human-like cognitive abilities and is expected to take at least another decade to develop.

Musk’s new company debuted only days after OpenAI announced in a July 5 blog post that it was forming a team to create its own superintelligent A.I. Musk said xAI is “definitely in competition” with OpenAI.
