
If you think I live in the twilight zone, you're right.


As a computational functionalist, I think the mind is a system that exists in this universe and operates according to the laws of physics, which means that, in principle, there shouldn’t be any reason why the information and dispositions that make up a mind can’t someday be recorded and copied into another substrate, such as a digital environment.

To be clear, I think this is unlikely to happen anytime soon. I’m not in the technological singularity camp that sees us all getting uploaded into the cloud in a decade or two, the infamous “rapture of the nerds”. We need to understand the brain far better than we currently do, and that seems several decades to centuries away. Of course, if it is possible to do it anytime soon, it won’t be accomplished by anyone who’s already decided it’s impossible, so I enthusiastically cheer efforts in this area, as long as it’s real science.

There have always been a number of objections to the idea of uploading. Many people just reflexively assume it’s categorically impossible. Certainly we don’t have the technology today, but short of assuming the mind is at least partially non-physical, it’s hard to see what the ultimate obstacle might be. Even with that assumption, who can say that a copied mind wouldn’t have those non-physical properties? David Chalmers, a property dualist, sees those non-physical properties as corresponding with the right functionality, so for him AI consciousness and mind copying remain a possibility.

For example, Real Entrepreneur Women, a career coaching service, has developed an AI coach called Skye. “Skye is designed to help female coaches cut through overwhelm by providing actionable strategies and personalized support to grow their businesses,” said founder Sophie Musumeci. “She’s like having a dedicated business strategist in your pocket – streamlining decision-making, creating tailored content, and helping clients stay consistent. It’s AI with heart, designed to scale human connection in industries where trust and relationships are everything.”

In the next few years, Musumeci predicted, “I see democratized AI creating new business models where the gap between big and small players closes entirely, giving more entrepreneurs the confidence and capability to thrive.”

Education is another area ripe for AI disruption, with its possibilities for hyper-personalization in learning. “The current education system in USA is designed to educate the masses,” said Andy Thurai, principal analyst with Constellation Research. “It assumes everyone is at the same skill level and same interest in areas of topic and same expertise. It tries to push the information down our throats, and forces us to learn in a certain way.”

In today’s AI news, backed by $200 million in funding, Scott Wu and his team at Cognition are building an AI tool that could potentially disrupt the whole industry, at a $2 billion valuation. Devin is an autonomous AI agent that, in theory, writes the code itself—no people involved—and can complete entire projects typically assigned to developers.

In other advancements, OpenAI is changing how it trains AI models to explicitly embrace “intellectual freedom … no matter how challenging or controversial a topic may be,” the company says in a new policy. OpenAI is releasing a significantly expanded version of its Model Spec, a document that defines how its AI models should behave — and is making it free for anyone to use or modify.

Then, xAI, the artificial intelligence company founded by Elon Musk, is set to launch Grok 3 on Monday, Feb. 17. According to xAI, this latest version of its chatbot, which Musk describes as “scary smart,” represents a major step forward, improving reasoning, computational power and adaptability. Grok 3’s development was accelerated by its Colossus supercomputer, which was built in just eight months, powered by 100,000 Nvidia H100 GPUs.

And, large language models can learn complex reasoning tasks without relying on large datasets, according to a new study by researchers at Shanghai Jiao Tong University. Their findings show that with just a small batch of well-curated examples, you can train an LLM for tasks that were thought to require tens of thousands of training instances.
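The study's actual curation pipeline isn't described here, but the core idea, picking a small, diverse, high-quality subset instead of tens of thousands of examples, can be sketched with a simple greedy heuristic. Everything below (the Jaccard similarity, the quality scores, the penalty weight) is a hypothetical illustration, not the researchers' method:

```python
# Hypothetical curation sketch: reward example quality, penalize
# similarity to examples already chosen, so near-duplicates are skipped.

def jaccard(a, b):
    """Crude token-overlap similarity between two text examples."""
    sa, sb = set(a.split()), set(b.split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def curate(pool, quality, k=3, penalty=1.0):
    """Greedily pick k examples, trading off quality against redundancy."""
    chosen, remaining = [], set(range(len(pool)))
    while remaining and len(chosen) < k:
        best = max(remaining, key=lambda i: quality[i] - penalty * max(
            (jaccard(pool[i], pool[j]) for j in chosen), default=0.0))
        chosen.append(best)
        remaining.remove(best)
    return chosen

pool = [
    "prove that the sum of two even numbers is even",
    "prove that the sum of two even integers is even",  # near-duplicate
    "derive the quadratic formula step by step",
    "count lattice paths in a grid with dynamic programming",
]
quality = [0.90, 0.95, 0.80, 0.85]  # made-up quality scores
picked = curate(pool, quality, k=3)  # the near-duplicate pair yields one pick
```

The point of the sketch is only that curation beats volume: the two near-identical proofs contribute a single slot, leaving room for more varied reasoning tasks.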

Next up is OpenAI’s o1 model, which focuses on slower, more deliberate reasoning, much like how humans think, in order to solve complex problems. Then, join Turing Award laureate Yann LeCun, Chief AI Scientist at Meta and Professor at NYU, as he discusses with Link Ventures’ John Werner the future of artificial intelligence and how open-source development is driving innovation. In this wide-ranging conversation, LeCun explains why AI systems won’t “take over” but will instead serve as empowering assistants.

Meanwhile, when confronted with a constant stream of new AI tools, it can be stressful to make the best choice, especially amid the hype of “the next big thing.” In the opening keynote from the Gartner IT Infrastructure, Operations & Cloud Strategies Conference, Gartner experts Autumn Stanish, Hassan Ennaciri and Roger Williams equip you with insights and guidance on AI, cloud and platform trends.

We close out with an expert panel at AI House. Led by moderator Mina Al-Oraibi of The National, the panel delves into how a globally integrated AI ecosystem can revolutionize smart cities by enhancing efficiency, sustainability, and citizen well-being. Panelists include: Thomas Pramotedham (Presight), Juan Lavista Ferres (Microsoft AI For Good Research Lab), Guillem Martínez Roura (International Telecommunication Union), and Anna Gawlikowska (SwissAI).

A worldwide mass ban of DeepSeek AI has just begun, and the implications are far-reaching. Governments, corporations, and AI regulators are now cracking down on one of the fastest-growing AI models, sparking intense debate about AI safety, censorship, and control. But why is DeepSeek AI being banned, and what does this mean for the future of artificial intelligence?

In this video, we break down why countries are banning DeepSeek AI, the real reasons behind this massive restriction, and what this means for the AI industry and everyday users. Is this about security risks, misinformation, or something even bigger? And how will OpenAI, Google, and other tech giants respond to this sudden AI crackdown?

With the AI revolution accelerating faster than governments can regulate, this global ban on DeepSeek could signal the beginning of tighter AI control worldwide. But is this about protecting people, or protecting power?

Why is DeepSeek AI being banned? What does this mean for the future of AI? Is this the start of global AI censorship? The video takes up all of these questions and more.


Researchers have developed a new AI algorithm, Torque Clustering, which more closely mimics natural intelligence than existing methods. This advanced approach enhances AI’s ability to learn and identify patterns in data independently, without human intervention.

Torque Clustering is designed to efficiently analyze large datasets across various fields, including biology, chemistry, astronomy, psychology, finance, and medicine. By uncovering hidden patterns, it can provide valuable insights, such as detecting disease trends, identifying fraudulent activities, and understanding human behavior.
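To make the idea concrete, here is a loose toy rendition of a torque-style clustering pass, not the published algorithm: link each point to its nearest neighbor, rank the links by a "torque" (here just squared distance, with all masses set to 1), and refuse the largest links so that well-separated groups stay apart. All data and parameters are illustrative:

```python
# Toy torque-style clustering sketch (NOT the published Torque
# Clustering algorithm): nearest-neighbor links, smallest "torque"
# links merged first via union-find, largest links cut.

def torque_clusters(points, n_clusters=2):
    n = len(points)

    def d2(i, j):
        return sum((a - b) ** 2 for a, b in zip(points[i], points[j]))

    # every point links to its nearest neighbor; with unit masses the
    # link's "torque" reduces to the squared distance
    links = []
    for i in range(n):
        j = min((k for k in range(n) if k != i), key=lambda k: d2(i, k))
        links.append((d2(i, j), i, j))
    links.sort()

    # union-find: accept small-torque links first, skip the largest
    # ones once only n_clusters components remain (the "cut")
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    comps = n
    for _, i, j in links:
        ri, rj = find(i), find(j)
        if ri != rj and comps > n_clusters:
            parent[ri] = rj
            comps -= 1
    return [find(i) for i in range(n)]

# two well-separated blobs collapse into two clusters
points = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 12), (12, 10)]
labels = torque_clusters(points, n_clusters=2)
```

The appeal of this family of methods, as the article notes, is that nothing here needs labels or a human-chosen similarity threshold; the structure falls out of the link geometry itself (though this sketch still fixes the cluster count for simplicity).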

Elon Musk has announced that xAI will launch its large language model (LLM), Grok 3, on Monday at 8:30PM Pacific Time (that’s 10:00AM India time). The billionaire also promised that Grok 3 will be the smartest AI in the world.


Confirming the launch of Grok 3 on X (formerly Twitter), Musk wrote, “Grok 3 release with live demo on Monday night at 8pm PT. Smartest AI on Earth.”

As tech billionaires around the world continue to experiment with artificial intelligence, a bizarre trend of AI ‘partners’ has emerged in which people engage in relationships with chatbots. But far from solving an epidemic of loneliness among singletons, a darker pattern has appeared in which some users indulge in emotionally abusive behaviour.

Threads on Reddit are exposing this disturbing desire which sees people use smartphone apps like Replika to create virtual partners they can verbally berate, abuse, and ‘experiment’ with.

Replika allows people to send and receive messages from a virtual companion or avatar which can be ‘set’ or trained to become a friend or mentor — though more commonly a romantic partner.

Devices that leverage quantum mechanics effects, broadly referred to as quantum technologies, could help to tackle some real-world problems faster and more efficiently. In recent years, physicists and engineers have introduced various promising quantum technologies, including so-called quantum sensors.

Networks of quantum sensors could theoretically be used to measure specific parameters with remarkable precision. These networks leverage a quantum phenomenon known as entanglement, in which particles remain correlated with one another even when separated by large distances. Crucially, entanglement does not allow information to be sent instantaneously; rather, the correlations between entangled sensors can be exploited to reach precisions that independent sensors cannot match.

While quantum sensor networks (QSNs) could have various advantageous real-world applications, their effective deployment also relies on the ability to ensure that the information shared between sensors remains private and is not accessible to malicious third parties.
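For intuition on why networking sensors helps at all, here is a purely classical baseline, not a simulation of entanglement: averaging N independent noisy readings shrinks the estimation error roughly as 1/sqrt(N) (the "standard quantum limit"), which entangled sensor networks can in principle improve on, toward 1/N. All noise levels and counts below are made up for illustration:

```python
# Classical illustration only: error of an averaged estimate vs. the
# number of independent sensors. Entanglement is NOT simulated here.
import random
import statistics

def estimate_error(n_sensors, trials=2000, true_value=1.0, noise=0.5):
    """Spread of the averaged estimate around the true value."""
    random.seed(0)  # deterministic for illustration
    errors = []
    for _ in range(trials):
        readings = [true_value + random.gauss(0, noise)
                    for _ in range(n_sensors)]
        errors.append(statistics.fmean(readings) - true_value)
    return statistics.pstdev(errors)

err_single = estimate_error(1)     # one noisy sensor
err_network = estimate_error(100)  # hundred-sensor network, ~10x tighter
```

The sqrt(100) = 10-fold improvement here is what any classical network achieves; the promise of entangled networks is to beat this scaling, which is why keeping the shared quantum information private matters.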

Despite today’s AI-driven tools for modeling a bioprocess and a host of sensors to track the progress of a bioprocess in action, an expert’s hand still plays a key role in making protein-based drugs. As Hiller put it: “The science (or art!) of preparing very concentrated feed mixtures often relies on the careful order of addition of chemicals, manipulation of pH (up and down) and temperature, and separate preparation of certain concentrated solutions before addition to the bulk feed mixture.”

Culturing cells has always included some art, with a bit of superstition thrown into the mix. When I worked in a cell-culture lab in the early 1980s, there were rumors of cells dying when an incubator was moved from one side of a room to another. So people rarely moved anything. Plus, if the media included horse serum, scientists shuddered if a batch came from a different herd. Maybe some of the superstition disappeared over the decades, but some of the art remains, as Hiller confirmed.

Still, science underlies the ongoing attempt to replicate a cell’s natural environment during a bioprocess. Instead of just putting the cells in a vat filled with medium, which is the essence of batch processing, perfusion can add nutrients and remove waste. As Hiller noted, perfusion culture “is somewhat analogous to the processes that occur for cells within an organ in the body.”
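The batch-versus-perfusion contrast can be sketched as a toy mass balance. All rate constants below are made up: in batch culture the nutrient only depletes, while perfusion continuously exchanges spent medium for fresh medium, holding the concentration near a steady state, much as blood flow does for cells in an organ:

```python
# Toy nutrient mass balance, hypothetical rates: batch culture runs the
# nutrient down, perfusion exchanges medium and sustains it.

def simulate(hours, perfusion_rate=0.0, feed_conc=10.0,
             uptake=0.05, start_conc=10.0):
    """Euler-step the nutrient concentration in the vessel.
    perfusion_rate is the fraction of vessel volume exchanged per hour."""
    c = start_conc
    for _ in range(hours):
        c -= uptake * c                        # cells consume nutrient
        c += perfusion_rate * (feed_conc - c)  # fresh medium in, spent out
        c = max(c, 0.0)
    return c

batch = simulate(100)                         # nutrient nearly exhausted
perfused = simulate(100, perfusion_rate=0.1)  # settles near a steady level
```

Under these toy rates the batch vessel ends up close to starvation while the perfused vessel settles at an intermediate steady concentration, which is the qualitative behavior Hiller describes, even though real feeds involve the careful chemistry she outlines above.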