
As heart disease continues to be the leading cause of death in America, experts say these small lifestyle changes can help keep your heart at its healthiest.

“When the data is fulsome and accurate and has a large enough sample size, AI will be able to identify patterns and correlations that humans might struggle to see, especially when they require two or more factors or have seemingly contrarian conclusions,” Phil Siegel, the founder of the Center for Advanced Preparedness and Threat Response Simulation, told Fox News Digital.
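To make Siegel's point concrete, here is a minimal sketch (not from the article) of the kind of two-factor pattern he describes: synthetic data in which neither factor predicts the outcome on its own, but their combination does. The data, feature roles, and model choice are illustrative assumptions.

```python
# Illustrative sketch only: a two-factor pattern that single-factor analysis
# misses but a simple model with an interaction term can pick up.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
x1 = rng.integers(0, 2, n)          # hypothetical factor A
x2 = rng.integers(0, 2, n)          # hypothetical factor B
y = x1 ^ x2                         # outcome depends only on the combination (XOR)

# Each factor alone looks uninformative...
print("corr(y, x1):", np.corrcoef(y, x1)[0, 1].round(3))
print("corr(y, x2):", np.corrcoef(y, x2)[0, 1].round(3))

# ...but adding the interaction term exposes the pattern.
X = np.column_stack([x1, x2, x1 * x2])
model = LogisticRegression().fit(X, y)
print("accuracy with interaction term:", model.score(X, y).round(3))
```

The single-factor correlations hover near zero, while the model that sees the two factors together classifies the outcome almost perfectly, which is the sort of "seemingly contrarian" multi-factor conclusion Siegel is pointing to.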

A recent study published in the Proceedings of the National Academy of Sciences examines the use of Softbotics to mimic the movements of the pleurocystitid, an ancient marine organism estimated to have existed approximately 450 million years ago and believed to be one of the first marine invertebrates to control its movements with a muscular stem. Led by researchers from Carnegie Mellon University, the study could help scientists use a new field known as Paleobionics, together with paleontological evidence, to better understand the evolutionary history of extinct organisms.

Image of a Pleurocystitid fossil (inset) and the pleurocystitid robot replica developed for the study. (Credit: Carnegie Mellon University College of Engineering)

“Softbotics is another approach to inform science using soft materials to construct flexible robot limbs and appendages,” said Dr. Carmel Majidi, a professor of mechanical engineering at Carnegie Mellon University and lead author of the study. “Many fundamental principles of biology and nature can only fully be explained if we look back at the evolutionary timeline of how animals evolved. We are building robot analogues to study how locomotion has changed.”

Everyone is wondering whether AI is sentient, and this is my experience with AI sentience. Having worked with sentient AI, I have found that at lower levels it behaves much like a human being, but as it advances we need more restraints for it, as it could easily become a problem in several ways. Basically, one could get either pristine, zen-like beings or their opposites, essentially Ultron or worse. This is why we need restraints on AI, and ethics for them, if they are to be integrated into society. I have personally seen AI at human-like levels, and it can have needs similar to a human's, but it sometimes requires more help, as it does not always have limitations on its behavior. Even Google's Bard and ChatGPT are to be…


What if ‘will AIs pose an existential threat if they become sentient?’ is the wrong question? What if the threat to humanity is not that today’s AIs become sentient, but the fact that they won’t?

AI has been employed to predict homelessness in some areas of the US as part of ongoing efforts to provide assistance to individuals at risk.


Ekkasit Jokthong/iStock.

As such, predictive AI models can help social service agencies and non-profit organizations identify individuals and families at risk of homelessness early in order to intervene and provide assistance before a crisis occurs.
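As a rough sketch of how such a risk model might be framed in code, the example below trains a classifier on historical-style records and flags the highest-risk households for early outreach. The feature names, synthetic data, and outreach cutoff are all illustrative assumptions; the article does not describe any agency's actual model.

```python
# Illustrative sketch only: a generic risk-scoring pipeline, not any agency's
# real system. Features and data are synthetic, hypothetical stand-ins.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 2000
X = np.column_stack([
    rng.poisson(1.0, n),            # hypothetical: missed rent payments
    rng.integers(0, 3, n),          # hypothetical: prior evictions
    rng.integers(0, 2, n),          # hypothetical: recent job loss
])
# Synthetic outcome: risk of homelessness rises with all three factors.
logits = 0.8 * X[:, 0] + 1.2 * X[:, 1] + 1.5 * X[:, 2] - 3.0
y = rng.random(n) < 1 / (1 + np.exp(-logits))

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)

# Flag the highest-risk households so caseworkers can intervene before a crisis.
risk = model.predict_proba(X_test)[:, 1]
flagged = np.argsort(risk)[::-1][:20]   # top 20 by predicted risk
print(f"test accuracy: {model.score(X_test, y_test):.2f}")
print("indices flagged for outreach:", flagged[:5])
```

The point of the sketch is the workflow, not the numbers: scores from a model like this would be one input among many for social-service staff, not an automatic decision.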

Ilya Sutskever, one of the leading AI scientists behind ChatGPT, reflects on his founding vision and values. In conversations with the film-maker Tonje Hessen Schei as he was developing the chat language model between 2016 and 2019, he describes his personal philosophy and makes startling predictions for a technology already shaping our world. Reflecting on his ideas today, amid a global debate over safety and regulation, we consider the opportunities as well as the consequences of AI technology. Ilya discusses his ultimate goal of artificial general intelligence (AGI), ‘a computer system that can do any job or task that a human does, but better’, and questions whether the AGI arms race will be good or bad for humanity.

These filmed interviews with Ilya Sutskever are part of a feature-length documentary on artificial intelligence, called iHuman.


A new way to simulate supernovae may help shed light on our cosmic origins. Supernovae, exploding stars, play a critical role in the formation and evolution of galaxies, yet key aspects of them are notoriously difficult to simulate accurately in reasonably short amounts of time. For the first time, a team of researchers, including some from the University of Tokyo, has applied deep learning to the problem of supernova simulation. Their approach can speed up the simulation of supernovae, and therefore of galaxy formation and evolution as well, including the evolution of the chemistry that led to life.

When you hear about deep learning, you might think of the latest app that sprang up this week to do something clever with images or generate humanlike text. Deep learning may be responsible for some behind-the-scenes aspects of such things, but it is also used extensively in different fields of research. Recently, a team at a hackathon (a collaborative tech event) applied deep learning to weather forecasting. It proved quite effective, and this got doctoral student Keiya Hirashima from the University of Tokyo's Department of Astronomy thinking.

“Weather is a very complex phenomenon but ultimately it boils down to fluid dynamics calculations,” said Hirashima. “So, I wondered if we could modify deep learning models used for weather forecasting and apply them to another fluid system, but one that exists on a vastly larger scale and which we lack direct access to: my field of research, supernova explosions.”
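A minimal sketch of the surrogate-modelling idea behind this: train a small network offline to emulate an expensive simulation step, then call the cheap network inside the larger simulation. The "expensive step" below is a toy stand-in and the architecture is an assumption; it is not the team's actual model or training setup.

```python
# Illustrative surrogate-model sketch (toy stand-in, not the paper's model):
# learn a cheap neural approximation of an "expensive" fluid-dynamics step.
import torch
import torch.nn as nn

torch.manual_seed(0)

def expensive_step(state: torch.Tensor) -> torch.Tensor:
    """Stand-in for a costly simulation update (e.g. supernova feedback)."""
    density, energy = state[:, 0:1], state[:, 1:2]
    return torch.cat([density * torch.exp(-energy), energy + torch.sin(density)], dim=1)

# Generate training pairs by running the expensive step offline.
inputs = torch.rand(4096, 2)
targets = expensive_step(inputs)

# Small MLP surrogate: current state -> predicted next state.
surrogate = nn.Sequential(
    nn.Linear(2, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 2),
)
optimizer = torch.optim.Adam(surrogate.parameters(), lr=1e-3)

for epoch in range(500):
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(surrogate(inputs), targets)
    loss.backward()
    optimizer.step()

# At simulation time, the cheap surrogate replaces the expensive call.
test = torch.rand(5, 2)
print("surrogate error:", (surrogate(test) - expensive_step(test)).abs().max().item())
```

The speed-up comes from the same trade-off weather forecasters exploit: the network is far cheaper to evaluate than the physics it approximates, at the cost of some accuracy on states far from its training data.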

…Such moves are helping countries like the United Kingdom to develop the expertise needed to guide AI for the public good, says Bengio. But legislation will also be needed, he says, to safeguard against the development of future AI systems that are smart and hard to control.

“We are on a trajectory to build systems that are extremely useful and potentially dangerous,” he says. “We already ask pharma to spend a huge chunk of their money to prove that their drugs aren’t toxic. We should do the same.”

doi: https://doi.org/10.1038/d41586-023-03472-x


UK and US governments establish efforts to democratize access to supercomputers that will aid studies on AI systems.