
OpenAI’s Altman Clinches Deal With Kakao, Second Major Asian Alliance This Week

In today’s AI news, OpenAI said on Tuesday it will develop artificial intelligence products for South Korea with chat app operator Kakao. In a whirlwind tour through Asia, OpenAI Chief Executive Sam Altman is also scheduled to visit India on Wednesday, where he is seeking to meet Prime Minister Narendra Modi.

In other advancements, Tana is emerging from stealth, announcing $25 million in funding from an interesting list of backers to get started. Tana is part automated list builder and note taker, part application enabler, and part organizer. It can listen to conversations or voice memos directed to Tana itself, transcribing them and turning them into action items.

Then, OpenAI filed a new application to trademark products associated with its brand — “OpenAI” — with the USPTO. Normally, this wouldn’t be newsworthy. Companies file for trademarks all the time. But in the application, OpenAI hints at new product lines, both near-term and more speculative.

And, a South Korean startup called Cinamon is ramping up efforts to claim a part of this burgeoning market — it recently raised an $8.5 million Series B round to continue building its animated video generation platform “CINEV,” slated to be launched in beta in the first half of 2025.

In videos, watch World Wide Technology Co-Founder and CEO Jim Kavanaugh and NVIDIA Founder and CEO Jensen Huang talk about the evolution and future of AI. During the discussion, Jim and Jensen also provide practical tips for implementing AI at scale within the enterprise.

Then, billionaire SoftBank founder Masayoshi Son and OpenAI chief Sam Altman took to a Tokyo stage Monday to outline their 50–50 collaboration. The venture, which will operate under SoftBank’s telecoms arm, will hire 1,000 people from SoftBank to market OpenAI products to industries from carmakers to retailers.

And, Jerrod Lew is back with another demonstration of Google’s Veo 2 video generator. In this episode Jerrod demonstrates Veo 2’s awesome ability to create videos of food being cooked and served. I imagine Google’s experience with YouTube makes Veo 2 a really good choice when it comes to video creation.

The Goodness of the Universe

Outer Space, Inner Space, and the Future of Networks.
Synopsis: Does the History, Dynamics, and Structure of our Universe give any evidence that it is inherently “Good”? Does it appear to be statistically protective of adapted complexity and intelligence? Which aspects of the big history of our universe appear to be random? Which are predictable? What drives universal and societal accelerating change, and why have they both been so stable? What has developed progressively in our universe, as opposed to merely evolving randomly? Will humanity’s future be to venture to the stars (outer space) or will we increasingly escape our physical universe, into physical and virtual inner space (the transcension hypothesis)? In Earth’s big history, what can we say about what has survived and improved? Do we see any progressive improvement in humanity’s thoughts or actions? When is anthropogenic risk existential or developmental (growing pains)? In either case, how can we minimize such risk? What values do well-built networks have? What can we learn about the nature of our most adaptive complex networks, to improve our personal, team, organizational, societal, global, and universal futures? I’ll touch on each of these vital questions, which I’ve been researching and writing about since 1999, and discussing with a community of scholars at Evo-Devo Universe (join us!) since 2008.

For fun background reading, see John’s Goodness of the Universe post on Centauri Dreams, and “Evolutionary Development: A Universal Perspective”, 2019.

John writes about Foresight Development (personal, team, organizational, societal, global, and universal), Accelerating Change, Evolutionary Development (Evo-Devo), Complex Adaptive Systems, Big History, Astrobiology, Outer and Inner Space, Human-Machine Merger, the Future of AI, Neuroscience, Mind Uploading, Cryonics and Brain Preservation, Postbiological Life, and the Values of Well-Built Networks.
He is CEO of Foresight University, founder of the Acceleration Studies Foundation, and co-founder of the Evo-Devo Universe research community and the Brain Preservation Foundation. He is editor of Evolution, Development, and Complexity (Springer 2019) and Introduction to Foresight: Personal, Team, and Organizational Adaptiveness (Foresight U Press 2022). He is also author of The Transcension Hypothesis (2011), the proposal that universal development guides leading adaptive networks increasingly into physical and virtual inner space.

A talk for the ‘Stepping into the Future’ conference (April 2022).

The Goodness of the Universe: Outer Space, Inner Space, and the Future of Networks with John Smart

Many thanks for tuning in!

Have any ideas about people to interview? Want to be notified about future events? Any comments about the STF series?

Is The Singularity And The Transcendence Of Artificial Intelligence A Key Factor For A New Era Of Humanity?

However, despite these advances, human progress is never without risk. We must therefore address urgent challenges, including the lack of transparency in algorithms, potential intrinsic biases, and the possibility that AI could be used for destructive purposes.

Philosophical And Ethical Implications

The singularity and transcendence of AI could imply a radical redefinition of the relationship between humans and technology in our society. A key question that may arise in this context is, “If AI surpasses human intelligence, who—or what—should make critical decisions about the planet’s future?” Looking even further, the emergence of a transcendent AI could challenge the very concept of the soul, prompting theologians, philosophers and scientists to reconsider foundational beliefs established over centuries of human history.

Xanadu Quantum Technologies builds world’s first universal photonic quantum computer

Aurora consists of four photonically interconnected, modular and independent server racks, containing 35 photonic chips and 13 km of fiber optics. The system operates at room temperature and is fully automated, which Xanadu says makes it capable of running “for hours without any human intervention.”

The company added that in principle, Aurora could be scaled up to “thousands of server racks and millions of qubits today, realizing the ultimate goal of a quantum data center.” In a blog post detailing Aurora, Xanadu CTO Zachary Vernon said the machine represents the “very first time [Xanadu] – or anyone else for that matter – have combined all the subsystems necessary to implement universal and fault-tolerant quantum computation in a photonic architecture.”

OpenAI’s Sam Altman SHOCKINGLY Admits: “OpenAI Must Learn From DeepSeek”

Description:
Sam Altman admitted OpenAI might have been wrong about keeping its AI models private and acknowledged DeepSeek’s open-source approach is making waves in the industry. Meanwhile, DeepSeek claims to have built an AI model as powerful as OpenAI’s GPT-o1 for a fraction of the cost, raising concerns about potential data theft and U.S. chip restrictions. At the same time, Altman is pushing a $500 billion AI data center project called “Stargate” while facing a personal lawsuit, as Google quietly adjusts its AI strategy and Microsoft investigates DeepSeek’s rapid rise.

*Key Topics:*
- *Sam Altman’s shocking admission* about OpenAI’s past mistakes and DeepSeek’s rising influence.
- How *DeepSeek claims to rival OpenAI’s GPT-o1* at a fraction of the cost, raising legal concerns.
- The *AI arms race escalates* as OpenAI, DeepSeek, Microsoft, and Google battle for dominance.

*What You’ll Learn:*
- Why *OpenAI might change its stance on open-source AI* after DeepSeek’s disruptive impact.
- How *Microsoft is investigating DeepSeek* over alleged unauthorized use of OpenAI’s data.
- The *$500 billion “Stargate” project* and why experts doubt Altman’s ambitious AI infrastructure plans.

*Why It Matters:*
This video explores the *intensifying AI war*, where *DeepSeek’s bold claims* challenge industry giants, forcing OpenAI, Google, and Microsoft to rethink their strategies while massive investments reshape the future of artificial intelligence.

*DISCLAIMER:*
This video analyzes the latest AI developments, including *OpenAI’s internal struggles, DeepSeek’s rapid rise, and the shifting landscape of AI innovation and competition*.

#AI #DeepSeek #OpenAI

Unitree’s G1 Humanoid Robots Shown Running in New Video

Unitree, a Chinese robotics company competing with outfits like Boston Dynamics, Tesla, Agility Robotics and others, has unveiled a new video of its humanoid G1 and H1 robots, showing off some new moves.

The smaller, more affordable G1 robot is shown running, navigating uneven terrain and walking in a more natural way. Unitree told us that because the robots were operating in environments it hadn’t mapped with LIDAR, these demos were remote controlled.

Unitree’s taller H1 humanoid robot also showed off some new moves at a Spring Festival Gala. The robots performed a preset routine learned from data produced by human dancers. The company says “whole body AI motion control” kept the robots in sync and allowed them to respond to any unplanned changes or events.

Multilingual Computational Models Reveal Shared Brain Responses to 21 Languages

At the heart of language neuroscience lies a fundamental question: How does the human brain process the rich variety of languages? Recent developments in Natural Language Processing, particularly in multilingual neural network language models, offer a promising avenue to answer this question by providing a theory-agnostic way of representing linguistic content across languages. Our study leverages these advances to ask how the brains of native speakers of 21 languages respond to linguistic stimuli, and to what extent linguistic representations are similar across languages. We combined existing (12 languages across 4 language families; n=24 participants) and newly collected fMRI data (9 languages across 4 language families; n=27 participants) to evaluate a series of encoding models predicting brain activity in the language network based on representations from diverse multilingual language models (20 models across 8 model classes). We found evidence of cross-lingual robustness in the alignment between language representations in artificial and biological neural networks. Critically, we showed that the encoding models can be transferred zero-shot across languages, so that a model trained to predict brain activity in a set of languages can account for brain responses in a held-out language, even across language families. These results imply a shared component in the processing of different languages, plausibly related to a shared meaning space.

The authors have declared no competing interest.
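To make the encoding-model idea concrete, below is a minimal Python sketch of the zero-shot cross-lingual transfer setup the abstract describes: fit a regression from multilingual language-model features to fMRI responses in several training languages, then predict responses in a held-out language. This is not the authors’ pipeline; the simulated data, array shapes, and the use of ridge regression with voxelwise Pearson correlation are illustrative assumptions.

```python
# Minimal sketch (illustrative, not the study's actual pipeline) of a
# cross-lingual encoding model with zero-shot transfer to a held-out language.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

n_voxels, n_features = 500, 768   # language-network voxels, LM embedding size
n_stimuli = 200                   # sentences per language

# Simulate one shared feature-to-voxel mapping (a "shared meaning space")
# used by every language, plus independent noise per language.
W_shared = rng.normal(scale=0.05, size=(n_features, n_voxels))

def simulate_language(n):
    """Return (LM features X, voxel responses Y) generated from the shared mapping."""
    X = rng.normal(size=(n, n_features))
    Y = X @ W_shared + rng.normal(size=(n, n_voxels))
    return X, Y

# Train on three languages; hold one language out entirely.
train = [simulate_language(n_stimuli) for _ in range(3)]
X_train = np.vstack([X for X, _ in train])
Y_train = np.vstack([Y for _, Y in train])
X_test, Y_test = simulate_language(n_stimuli)

# Encoding model: ridge regression mapping features to all voxels at once.
model = Ridge(alpha=10.0)
model.fit(X_train, Y_train)
Y_pred = model.predict(X_test)

def voxelwise_pearson(y_true, y_pred):
    """Pearson r between observed and predicted responses, per voxel."""
    yt = y_true - y_true.mean(axis=0)
    yp = y_pred - y_pred.mean(axis=0)
    return (yt * yp).sum(axis=0) / (
        np.linalg.norm(yt, axis=0) * np.linalg.norm(yp, axis=0)
    )

# Zero-shot transfer score for the held-out language.
print("mean zero-shot r:", float(voxelwise_pearson(Y_test, Y_pred).mean()))
```

With real data, the features would come from a multilingual language model and the responses from language-network voxels of native speakers; the point of the sketch is only that a single mapping fit on some languages can predict responses in a language it never saw when the underlying representations are aligned.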
