
The “Beneficial AGI Summit & Unconference” is a new event organized by SingularityNET and TrueAGI in collaboration with others. The Millennium Project is one of the sponsors of the event, and Jerome Glenn, Executive Director and co-founder of The Millennium Project, and José Cordeiro, MP Board member and RIBER and Venezuela Nodes Chair, serve on the event’s organizing committee. The Beneficial AGI Summit will take place both online and in person at the Hilton Panama in Panama City. Streaming is free; get your ticket.

The objective of the conference is to bring together leading voices in AI to catalyze the emergence of beneficial AGI. Key themes of the event are: Constitution & Governance Framework, Global Brain Collective, Simulation/Gaming Environments, Scenario Analysis Process, and Potential Scenarios (1 through 7).

On the first two days of the BGI Summit, Feb. 27–28, top thought leaders from around the globe will engage in comprehensive, detailed discussions of a wide range of questions regarding various approaches to AGI and their ethical, economic, psychological, political, environmental and other implications. The focus will be on discussing issues, making conceptual progress, forming collaborations, and engaging in practical actions aimed at catalyzing the emergence of beneficial AGI, based on the ideas and connections set in motion by all involved.

Just after filming this video, Sam Altman, CEO of OpenAI, published a blog post about the governance of superintelligence in which he, along with Greg Brockman and Ilya Sutskever, outlines their thinking about how the world should prepare for superintelligences. And just before filming, Geoffrey Hinton quit his job at Google so that he could speak more openly about his concerns over the imminent arrival of artificial general intelligence, an AGI that could soon get beyond our control if it became superintelligent. So the basic idea is moving from sci-fi speculation to a plausible scenario, but how powerful will such systems be, and which of the concerns about super-AI are reasonably founded? In this video I explore the ideas around superintelligence with Nick Bostrom’s 2014 book, Superintelligence, as one guide and Geoffrey Hinton’s interviews as another, to try to unpick which aspects are plausible and which are more like speculative sci-fi. I explore the dangers, such as Eliezer Yudkowsky’s notion of a rapid ‘foom’ takeover of humanity, and also look briefly at the control problem and the alignment problem. At the end of the video I make a suggestion for how we could perhaps delay the arrival of superintelligence by withholding algorithms’ ability to improve themselves, withholding what you could call meta-level agency.
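
As a purely illustrative sketch of that closing suggestion, the hypothetical Python snippet below shows one way “withholding meta-level agency” could be expressed in code: the agent’s object-level actions stay available, while the method that would let it rewrite its own policy is disabled by default. All names here are made up for the example; this is not from Bostrom, Hinton, or any real framework.

# Hypothetical sketch only: gating an agent's "meta-level agency",
# i.e. its ability to modify its own decision procedure.

class GatedAgent:
    def __init__(self, allow_self_improvement: bool = False):
        self.allow_self_improvement = allow_self_improvement
        self.policy_version = 1  # stands in for the agent's decision procedure

    def act(self, observation: str) -> str:
        # Object-level agency: acting in the world is always permitted.
        return f"action for {observation!r} (policy v{self.policy_version})"

    def improve_policy(self) -> None:
        # Meta-level agency: self-modification is withheld unless enabled.
        if not self.allow_self_improvement:
            raise PermissionError("meta-level agency withheld by configuration")
        self.policy_version += 1

agent = GatedAgent()              # self-improvement withheld by default
print(agent.act("sensor input"))  # object-level action still works
# agent.improve_policy()          # would raise PermissionError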

▬▬ Chapters ▬▬

00:00 — Questing for an Infinity Gauntlet
01:38 — Just human-level AGI
02:27 — Intelligence explosion
04:10 — Sparks of AGI
04:55 — Geoffrey Hinton is concerned
06:14 — What are the dangers?
10:07 — Is ‘foom’ just sci-fi?
13:07 — Implausible capabilities
14:35 — Plausible reasons for concern
15:31 — What can we do?
16:44 — Control and alignment problems
18:32 — Currently no convincing solutions
19:16 — Delay intelligence explosion
19:56 — Regulating meta-level agency

▬▬ Other videos about AI and Society ▬▬

AI wants your job | Which jobs will AI automate? | Reports by OpenAI and Goldman Sachs.
• Which jobs will AI automate? | Report…

How ChatGPT Works (a non-technical explainer):

Businesses must also ensure they are prepared for forthcoming regulations. President Biden signed an executive order to create AI safeguards, the U.K. hosted the world’s first AI Safety Summit, and the EU brought forward their own legislation. Governments across the globe are alive to the risks. C-suite leaders must be too — and that means their generative AI systems must adhere to current and future regulatory requirements.

So how do leaders balance the risks and rewards of generative AI?

Businesses that leverage three principles are poised to succeed: human-first decision-making, robust governance over large language model (LLM) content, and a universal connected AI approach. Making good choices now will allow leaders to future-proof their business and reap the benefits of AI while boosting the bottom line.
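
As a rough illustration of the second of those principles, here is a minimal Python sketch of a governance gate over LLM content: low-risk output is released automatically, and everything else waits for an explicit human decision, in keeping with human-first decision-making. The risk score, threshold, and approver hook are assumptions made for the example, not part of any framework cited here.

# Illustrative only: human-first review gate for LLM-generated content.
from dataclasses import dataclass

@dataclass
class LlmDraft:
    text: str
    risk_score: float  # 0.0 (benign) to 1.0 (high risk), from an upstream check

def governance_gate(draft, human_approver=None, threshold=0.3):
    # Low-risk content is released automatically under the governance policy.
    if draft.risk_score <= threshold:
        return draft.text
    # Anything riskier requires an explicit human decision (human-first).
    if human_approver is not None and human_approver(draft):
        return draft.text
    raise ValueError("LLM content blocked pending human review")

# A low-risk draft passes straight through; a riskier one would need an approver.
print(governance_gate(LlmDraft("Quarterly summary...", risk_score=0.1)))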

The 323.6-meter-long ship is like a floating city.

China is set to make maritime history as the Adora Magic City, the nation’s first domestically built cruise ship, prepares for its maiden voyage from Shanghai on January 1.

Operated by CSSC Carnival Cruise Shipping, a joint venture between China State Shipbuilding Corp. and the US-based Carnival Corp., this 323.6-meter-long marvel is not just a cruise ship; it’s a floating city designed to offer Chinese travelers a taste of home while they venture overseas.

Luxury on the waves: Features of Adora Magic City

The Adora Magic City symbolizes China’s shipbuilding prowess, boasting 16 decks and accommodating up to 5,246 passengers in 2,125 guest rooms. The ship’s impressive features include 22 restaurants and bars, a mahjong lounge, a beer brewery, a hotpot outlet, duty-free shops, and theaters showcasing Chinese-themed musicals.

Juan Bernabé-Moreno is IBM’s director of research for Ireland and the United Kingdom. The Spanish computer scientist is also responsible for IBM’s climate and sustainability strategy, which is being developed by seven global laboratories using artificial intelligence (AI) and quantum computing. He believes quantum computing is better suited to understanding nature and matter than classical or traditional computers.

Question. Is artificial intelligence a threat to humanity?

Answer. Artificial intelligence can be used to cause harm, but it’s crucial to distinguish between intentional and malicious use of AI, and unintended behavior due to lack of data control or governance rigor.

“The world isn’t doing terribly well in averting global ecological collapse,” says Dr. Florian Rabitz, a chief researcher at Kaunas University of Technology (KTU), Lithuania, the author of a new monograph, “Transformative Novel Technologies and Global Environmental Governance,” recently published by Cambridge University Press.

Greenhouse gas emissions, species extinction, ecosystem degradation, chemical pollution, and more are threatening the Earth’s future. Despite decades of international agreements and countless high-level summits, success in forestalling this existential crisis has remained elusive, says Dr. Rabitz.

In his new monograph, the KTU researcher delves into the intersection of cutting-edge technological solutions and the global environmental crisis. The author explores how international institutions respond (or fail to respond) to high-impact technologies that have been the subject of extensive debate and controversy.

Good technologies disappear.

In the company’s cloud market study, almost all organizations say that security, reliability and disaster recovery are important considerations in their AI strategy. Also key is the need to manage and support AI workloads at scale. In the area of AI data rules and regulation, many firms expect that AI data governance requirements will force them to more comprehensively understand and track data sources, data age and other key data attributes.
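
As a sketch of what tracking those attributes could look like in practice, the hypothetical record below attaches a source, a collection date (from which data age is derived), and other governed metadata to each dataset. The field names are assumptions for illustration, not taken from the study.

# Hypothetical data-governance record for an AI training or inference dataset.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class DatasetRecord:
    name: str
    source: str                   # provenance: where the data came from
    collected_on: date            # basis for computing data age
    attributes: dict = field(default_factory=dict)  # other governed metadata

    def age_days(self, today: date) -> int:
        # Data age is one of the attributes governance may require tracking.
        return (today - self.collected_on).days

record = DatasetRecord(
    name="support-tickets-2023",
    source="internal CRM export",
    collected_on=date(2023, 6, 1),
    attributes={"contains_pii": True, "region": "EU"},
)
print(record.age_days(date(2024, 1, 1)))  # 214 days old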

“AI technologies will drive the need for new backup and data protection solutions,” said Debojyoti ‘Debo’ Dutta, vice president of engineering for AI at Nutanix. “[Many companies are] planning to add mission-critical, production-level data protection and Disaster Recovery (DR) solutions to support AI data governance. Security professionals are racing to use AI-based solutions to improve threat and anomaly detection, prevention and recovery while bad actors race to use AI-based tools to create new malicious applications, improve success rates and attack surfaces, and improve detection avoidance.”

While it’s fine to ‘invent’ gen-AI, putting it into operation means treating it as a cloud workload in and of itself. With cloud computing still misunderstood in some quarters and the cloud-native epiphany not shared by every company, the additional strains (for want of a kinder term) that gen-AI puts on the cloud should make us think more directly about AI as a cloud workload and consider how we run it.