AGI (Artificial General Intelligence) is, in my view, something everyone should know about and think about.


How AI Became A Cloud ‘Workload’
Good technologies disappear.
In Nutanix’s cloud market study, almost all organizations say that security, reliability and disaster recovery are important considerations in their AI strategy. Also key is the need to manage and support AI workloads at scale. On AI data regulation, many firms think that AI data governance requirements will force them to more comprehensively understand and track data sources, data age and other key data attributes.
“AI technologies will drive the need for new backup and data protection solutions,” said Debojyoti ‘Debo’ Dutta, vice president of engineering for AI at Nutanix. “[Many companies are] planning to add mission-critical, production-level data protection and Disaster Recovery (DR) solutions to support AI data governance. Security professionals are racing to use AI-based solutions to improve threat and anomaly detection, prevention and recovery while bad actors race to use AI-based tools to create new malicious applications, improve success rates and attack surfaces, and improve detection avoidance.”
While it’s fine to ‘invent’ gen-AI, putting it into motion evidently means thinking about it as a cloud workload in and of itself. With cloud computing still misunderstood in some quarters and the cloud-native epiphany not shared by every company, the additional strains (for want of a kinder term) that gen-AI puts on the cloud should make us think more directly about AI as a cloud workload and about how we run it.
General Motors CEO Mary Barra Tries To Reassure Cruise Staff After Cofounders’ Resignations
In an all-hands video conference call on Monday afternoon, General Motors CEO Mary Barra attempted to re-energize the staff of Cruise, GM’s embattled autonomous vehicle subsidiary, after its CEO and chief product officer both resigned following several weeks of enormous setbacks for the company.



‘Hallucinate’ chosen as Cambridge dictionary’s word of the year
The original definition of the chosen word is to “seem to see, hear, feel, or smell” something that does not exist, usually because of “a health condition or because you have taken a drug”.
The psychological verb gained an extra meaning in 2023 that “gets to the heart of why people are talking about artificial intelligence”.

Sam Altman, OpenAI Board Open Talks to Negotiate His Possible Return
Sam Altman and the OpenAI board are now in talks about his possible return; specifically, he is speaking with Adam D’Angelo.
Sam Altman and members of the OpenAI board have opened negotiations aimed at a possible return of the ousted co-founder and chief executive officer to the artificial intelligence company, according to people with knowledge of the matter.
Discussions are happening between Altman and at least one board member, Adam D’Angelo, said the people, who asked not to be identified because the deliberations are private and they may not come to fruition. The talks also involve some of OpenAI’s investors, many of whom are pushing for his reinstatement, one of the people said.
In one scenario being discussed, Altman would return as a director on a transitional board, one of the people said. Former Salesforce Inc. co-CEO Bret Taylor could also serve as a director on a new board, multiple people said.

Researchers seek consensus on what constitutes Artificial General Intelligence
A team of researchers at DeepMind focusing on the next frontier of artificial intelligence—Artificial General Intelligence (AGI)—realized they needed to resolve one key issue first. What exactly, they asked, is AGI?
It is generally viewed as a type of artificial intelligence that can understand, learn and apply knowledge across a broad range of tasks, operating much like the human brain. Wikipedia broadens the scope by suggesting AGI is “a hypothetical type of intelligent agent [that] could learn to accomplish any intellectual task that human beings or animals can perform.”
OpenAI’s charter describes AGI as a set of “highly autonomous systems that outperform humans at most economically valuable work.”

New research maps 14 potential evolutionary dead ends for humanity and ways to avoid them
Humankind is on the verge of evolutionary traps, a new study finds.
For the first time, scientists have used the concept of evolutionary traps on human societies at large. They find that humankind risks getting stuck in 14 evolutionary dead ends, ranging from global climate tipping points to misaligned artificial intelligence, chemical pollution, and accelerating infectious diseases.
The evolution of humankind has been an extraordinary success story. But the Anthropocene—the proposed geological epoch shaped by us humans—is showing more and more cracks. Multiple global crises, such as the COVID-19 pandemic, climate change, food insecurity, financial crises, and conflicts, have started to occur simultaneously in what scientists refer to as a polycrisis.
“Humans are incredibly creative as a species. We are able to innovate and adapt to many circumstances and can cooperate on surprisingly large scales. But these capabilities turn out to have unintentional consequences. Simply speaking, you could say that the human species has been too successful and, in some ways, too smart for its own future good,” says Peter Søgaard Jørgensen, researcher at the Stockholm Resilience Centre at Stockholm University and at the Royal Swedish Academy of Sciences’ Global Economic Dynamics and the Biosphere program and Anthropocene laboratory.