
The 323.6-meter-long ship is like a floating city.


China is set to make maritime history as the Adora Magic City, the nation’s first domestically built cruise ship, prepares for its maiden voyage from Shanghai on January 1.

Operated by CSSC Carnival Cruise Shipping, a joint venture between China State Shipbuilding Corp. and US-based Carnival Corp., this 323.6-meter-long marvel is not just a cruise ship; it’s a floating city designed to offer Chinese travelers a taste of home while venturing overseas.

Luxury on the waves: Features of Adora Magic City

Juan Bernabé-Moreno is IBM’s director of research for Ireland and the United Kingdom. The Spanish computer scientist is also responsible for IBM’s climate and sustainability strategy, which is being developed by seven global laboratories using artificial intelligence (AI) and quantum computing. He believes quantum computing is better suited to understanding nature and matter than classical or traditional computers.

Question. Is artificial intelligence a threat to humanity?

Answer. Artificial intelligence can be used to cause harm, but it’s crucial to distinguish between intentional and malicious use of AI, and unintended behavior due to lack of data control or governance rigor.

“The world isn’t doing terribly well in averting global ecological collapse,” says Dr. Florian Rabitz, a chief researcher at Kaunas University of Technology (KTU), Lithuania, the author of a new monograph, “Transformative Novel Technologies and Global Environmental Governance,” recently published by Cambridge University Press.

Greenhouse gas emissions, species extinction, ecosystem degradation, chemical pollution, and more are threatening the Earth’s future. Despite decades of international agreements and countless high-level summits, success in forestalling this existential crisis has remained elusive, says Dr. Rabitz.

In his new monograph, the KTU researcher delves into the intersection of cutting-edge technological solutions and the global environmental crisis. The author explores how international institutions respond (or fail to respond) to high-impact technologies that have been the subject of extensive debate and controversy.


In Nutanix’s cloud market study, almost all organizations say that security, reliability and disaster recovery are important considerations in their AI strategy. Also key is the need to manage and support AI workloads at scale. In the area of AI data rules and regulation, many firms think that AI data governance requirements will force them to more comprehensively understand and track data sources, data age and other key data attributes.

“AI technologies will drive the need for new backup and data protection solutions,” said Debojyoti ‘Debo’ Dutta, vice president of engineering for AI at Nutanix. “[Many companies are] planning to add mission-critical, production-level data protection and Disaster Recovery (DR) solutions to support AI data governance. Security professionals are racing to use AI-based solutions to improve threat and anomaly detection, prevention and recovery while bad actors race to use AI-based tools to create new malicious applications, improve success rates and attack surfaces, and improve detection avoidance.”

While it’s one thing to ‘invent’ gen-AI, putting it into motion evidently means thinking about its existence as a cloud workload in its own right. With cloud computing still misunderstood in some quarters and the cloud-native epiphany not shared by every company, the additional strains (for want of a kinder term) that gen-AI puts on the cloud should make us think about AI as a cloud workload more directly and consider how we run it.

OpenAI is in discussions with Sam Altman to return to the company as its CEO, according to multiple people familiar with the matter. One of them said Altman, who was suddenly fired by the board on Friday with no notice, is “ambivalent” about coming back and would want significant governance changes. There have been some leaks about the real reason Sam Altman was fired. OpenAI’s largest investor, Microsoft, said in a statement shortly after Altman’s firing that the company “remains committed” to its partnership with the AI firm. However, OpenAI’s investors weren’t given advance warning or an opportunity to weigh in on the board’s decision to remove Altman. As the face of the company and the most prominent voice in AI, his removal throws the future of OpenAI into uncertainty at a time when rivals are racing to catch up with the unprecedented rise of ChatGPT.


According to OpenAI’s corporate governance, the directors’ key fiduciary duty is not to maximize shareholder value but to uphold the company’s mission of creating a safe AGI, or artificial general intelligence, “that is broadly beneficial.” Profits, the company said, were secondary to that mission. OpenAI first began posting the names of its board of directors on its website in July, following the departures of Reid Hoffman, Shivon Zilis and Will Hurd earlier this year, according to an archived version of the site on the Wayback Machine.

One AI-focused venture capitalist noted that following the departure of Hoffman, OpenAI’s non-profit board lacked much traditional governance. “These are not the business or operating leaders you would want governing the most important private company in the world,” they said.

Here’s who made the decision for Altman to be fired, and for Brockman to be removed from OpenAI’s board of directors. Update: Altman didn’t get a vote, The Information has reported. Brockman posted an account of his version of events to X indicating that the board had acted without his knowledge as well.

Summary: Researchers devised a machine learning model that gauges a country’s peace level by analyzing word frequency in its news media.

By studying over 723,000 articles from 18 countries, the team identified distinct linguistic patterns corresponding to varying peace levels.

While high-peace countries often used terms related to optimism and daily life, lower-peace nations favored words tied to governance and control.
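The lexicon-based intuition above can be sketched in a few lines of Python. This is a minimal illustration of scoring text by the relative frequency of peace-associated versus governance-and-control words; the two word lists and the scoring function are assumptions for demonstration, not the researchers’ actual lexicon or model.

```python
# Sketch: score text by relative frequency of words associated with
# high-peace vs low-peace media. Word lists are illustrative assumptions.
import re
from collections import Counter

HIGH_PEACE_WORDS = {"hope", "home", "family", "sports", "festival", "school"}
LOW_PEACE_WORDS = {"government", "security", "control", "forces", "law", "order"}

def peace_score(text: str) -> float:
    """Return a score in [-1, 1]: positive leans high-peace, negative low-peace."""
    words = Counter(re.findall(r"[a-z']+", text.lower()))
    high = sum(words[w] for w in HIGH_PEACE_WORDS)
    low = sum(words[w] for w in LOW_PEACE_WORDS)
    total = high + low
    return 0.0 if total == 0 else (high - low) / total

print(peace_score("families gathered for the school festival"))    # → 1.0
print(peace_score("security forces enforced government control"))  # → -1.0
```

The actual study would involve a trained model over hundreds of thousands of articles rather than fixed word lists, but the core signal, differing word-frequency distributions between high- and low-peace media, is the same.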

SRI President Bernard Foing and SRI CEO and founder A. V. Autino agree on the text of this newsletter, but not on its title(!). We therefore decided to issue it with two titles. The first, by A. V. Autino, establishes an ideological distance from the governance model that brought civilization to its current situation, refusing any direct co-responsibility. The title proposed by B. Foing implies that “we” (the global society) are responsible for the general failure, since we voted for the current leaders. He also suggested that, were “we” (space humanists) governing, he is not sure we would do better than the current leaders for peace and development. “Better than warmongers, for sure!” replied Autino. However, both titles are true and each has its reasons. That is why we don’t want to choose one…