
Scientists are developing ever-more powerful magnets to enable clean energy sources like fusion. But China’s dominance of the supply chain for rare-earth magnets threatens their global availability.

Bloomberg Primer cuts through the complex jargon to reveal the business behind technologies poised to transform global markets. This six-part, planet-spanning series offers a comprehensive look at the …

Basically, mushrooms could help treat major illnesses throughout the human body and brain. If the pharmaceutical companies went into business with traditional Chinese medicine, which has used mushrooms of all types for centuries, we could have a low-side-effect system of healing. Even the basic food pyramid has been shown to benefit humans, sometimes more than medicines. Nanotransfection could, in the future, let people who have lost limbs or other body parts regenerate them, Wolverine-like as in the Marvel comics but at a slower pace, while mushrooms keep one well and fed. Many American treatments are a stop-gap measure, whereas mushrooms can work slowly but thoroughly. Together with healthy eating and nanotransfection, one could have everything needed for regeneration in the far future: technology and food that allow healing inside the body with minimal downtime, with the foods fueling the body. It could even be paired with a smartphone, potentially saving trillions of dollars by bringing whole-body scans and treatment down to a dollar or less. It would be a first step toward Iron Man, but using the human body to heal itself and food to fuel regeneration.


The WHO has published the first list of priority fungal pathogens, which affect more than 300 million people and kill at least 1.5 million people every year. However, funding to control this scourge is less than 1.5% of that devoted to infectious diseases.

This document provides a summary of the cybersecurity threats in Japan with reference to the globally observed cyber landscape. It looks at various kinds of cyberattacks, their volume and impact, as well as the specific verticals targeted by various threat actors.

As of February 2024, an organisation in Japan faces an average of 1,003 attacks per week, with FakeUpdates being the top malware. Most malicious files are delivered via email, and Remote Code Execution is the most commonly exploited vulnerability type. Recent major Japanese incidents include a sophisticated nation-state malware campaign, attacks on Nissan and JAXA, and data breaches at the University of Tokyo and CASIO. Globally, incidents include Ukrainian media hacks, a ransomware attack on U.S. schools, and disruptions to U.S. healthcare caused by cyberattacks. The document also covers trends in malware types, attack vectors, and impacted industries over the last six months.

The details provide an overview of the threat landscape and major incidents in Japan and globally, highlighting the prevalence of attacks, common malware types, and the impact on various industries and organisations. This information should create awareness and help businesses and government organisations prepare to operate safely in a digital environment.

In today’s AI news, all eyes will be on Nvidia’s GPU Technology Conference this week, where the company is expected to unveil its next AI chips. During the company’s fourth-quarter earnings call, Nvidia CEO Jensen Huang said he will share more about the upcoming Blackwell Ultra AI chip, the Vera Rubin platform, and plans for upcoming products at the annual conference, known as GTC.

In other advancements, after decades of relying on Google’s ten blue links to find everything, consumers are quickly adapting to a completely new format: AI chatbots that do the searching for them. Adobe analyzed “more than 1 trillion visits to U.S. retail sites” through its analytics platform, and conducted a survey of “more than 5,000 U.S. respondents” to better understand how people are using AI.

Meanwhile, Barry Eggers, Co-Founder and Managing Partner at Lightspeed Venture Partners, is a luminary in the venture capital industry. As the AI landscape continues to evolve, Barry discusses the challenge of building defensible AI startups. Beyond just access to models, AI startups need differentiated data, network effects, and unique applications to maintain a competitive edge.

If you’re thinking of starting a new business and need advice on what to do, your first move should be turning to an AI chatbot tool. But can that tool answer who won the Oscars last year? IBM Fellow Martin Keen explains how RAG (Retrieval-Augmented Generation) and CAG (Cache-Augmented Generation) address knowledge gaps in AI. Discover their strengths in real-time retrieval, scalability, and efficient workflows for smarter AI systems.

Then, is Gemini 2.0 about to revolutionize image generation and editing? In this video, Tim dives deep into Google’s Gemini 2.0.

We close out with Anthropic researchers Ethan Perez, Joe Benton, and Akbir Khan discussing AI control, an approach to managing the risks of advanced AI systems. They discuss real-world evaluations showing how humans struggle to detect deceptive AI, the three major threat models researchers are working to mitigate, and the overall idea of controlling highly capable AI systems whose goals may differ from our own.
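The RAG-versus-CAG distinction discussed above can be sketched in a few lines: RAG retrieves relevant context at query time, while CAG preloads a fixed corpus once and reuses it for every query. This is a toy illustration only, with a word-overlap scorer standing in for real embeddings and no actual LLM call; the function names and documents are hypothetical.

```python
# Toy sketch of RAG vs. CAG prompt construction (illustrative names only).

DOCUMENTS = [
    "Oppenheimer won Best Picture at the 2024 Oscars.",
    "RAG retrieves documents at query time to ground the model's answer.",
    "CAG preloads a fixed corpus into the model's context cache up front.",
]

def score(query: str, doc: str) -> int:
    """Count shared lowercase words between query and document."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """RAG step: pick the k most relevant documents for this query."""
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

def build_rag_prompt(query: str) -> str:
    """RAG: context is fetched fresh for each query."""
    context = "\n".join(retrieve(query, DOCUMENTS))
    return f"Context:\n{context}\n\nQuestion: {query}"

# CAG: the whole corpus is "cached" once and reused for every query,
# trading per-query retrieval for a larger, precomputed context.
CACHE = "\n".join(DOCUMENTS)

def build_cag_prompt(query: str) -> str:
    """CAG: the same preloaded context serves every query."""
    return f"Context:\n{CACHE}\n\nQuestion: {query}"

prompt = build_rag_prompt("Who won Best Picture at the 2024 Oscars?")
```

The trade-off the video describes falls out directly: RAG keeps the prompt small and current but pays a retrieval step per query, while CAG pays once up front and then answers from the cached context.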

That’s all for today, but AI is moving fast — subscribe and follow for more Neural News.



What excites you the most about the potential of quantum computers?

💡 Future Business Tech explores AI, emerging technologies, and future technologies.


This video explores the future of quantum computing.



In today’s AI news, OpenAI and Google are pushing the US government to allow their AI models to train on copyrighted material. Both companies outlined their stances in proposals published this week, with OpenAI arguing that applying fair use protections to AI “is a matter of national security.” The proposals come in response to a request from the White House, which asked for input on Trump’s AI Action Plan.

In other advancements, one of the bigger players in automation has scooped up a startup in the space in hopes of taking a bigger piece of that market. UiPath, as part of a quarterly results report last night that spelled tougher times ahead, also delivered what it hopes might prove a silver lining: it said it had acquired a startup founded originally in Manchester, England.

And, the restrictive and inconsistent licensing of so-called “open” AI models is creating significant uncertainty, particularly for commercial adoption, Nick Vidal, head of community at the Open Source Initiative, told TechCrunch. While these models are marketed as open, the actual terms impose various legal and practical hurdles that deter businesses from integrating them into their products or services.

Kate Rooney sits down with Garry Tan, Y Combinator president and CEO, at the accelerator. Then, on Inside the Code, Ankit Kumar of Sesame and Anjney Midha of a16z discuss the future of voice AI. What goes into building a truly natural-sounding AI voice? Sesame’s cofounder and CTO, Ankit Kumar, joins a16z’s Anjney Midha for a deep dive into the research and engineering behind their voice technology.

Then, Nate B. Jones explains how AI is making intelligence cheaper, but software strategies built on user lock-in are failing. Historically, SaaS companies relied on retaining users by making it difficult to switch. However, as AI lowers the cost of building and refactoring, users move between tools more freely. The real challenge now is data interoperability—data remains siloed, making AI-generated content and workflows hard to integrate.

We close out with, AI is getting expensive… but it doesn’t have to be. NetworkChuck found a way to access all the major AI models – ChatGPT, Claude, Gemini, even Grok – without paying for multiple expensive subscriptions. Not only does he get unlimited access to the newest models, but he also gets better security, more privacy, and a ton of features… this might be the best way to use AI.
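The “one gateway, many models” idea above is usually built on the fact that many providers and self-hosted gateways expose an OpenAI-compatible chat-completions endpoint, so a single request shape can address different models. A minimal sketch, assuming such an endpoint; the base URL and model names below are placeholders, not any specific vendor’s values, and no network call is made here:

```python
# Sketch: build one OpenAI-style chat-completion request that can target
# different models by changing a single string. URL and model names are
# placeholders for whatever gateway you actually run.
import json

def chat_request(model: str, prompt: str,
                 base_url: str = "https://gateway.example/v1"):
    """Return the URL and JSON body for an OpenAI-style chat completion."""
    url = f"{base_url}/chat/completions"
    body = {
        "model": model,  # e.g. "gpt-4o", "claude-3-5-sonnet", "grok-2"
        "messages": [{"role": "user", "content": prompt}],
    }
    return url, json.dumps(body)

# The same function addresses different models by switching one string:
for m in ["gpt-4o", "claude-3-5-sonnet", "grok-2"]:
    url, body = chat_request(m, "Summarize today's AI news in one line.")
```

Pay-per-request access through one gateway like this is typically what replaces a stack of flat-rate subscriptions.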

That’s all for today, but AI is moving fast — subscribe and follow for more Neural News.

Finland is soon to become the first country in the world to attempt the burial of nuclear fuel waste in a geological tomb — where it is planned to be stored for the next 100,000 years.

The plan is to pack the spent nuclear fuel in watertight canisters and deposit them about 1,312 feet (400 meters) below ground level in a forest in southwestern Finland.

Artificial Intelligence (AI) is, without a doubt, the defining technological breakthrough of our time. It represents not only a quantum leap in our ability to solve complex problems but also a mirror reflecting our ambitions, fears, and ethical dilemmas. As we witness its exponential growth, we cannot ignore the profound impact it is having on society. But are we heading toward a bright future or a dangerous precipice?

This opinion piece aims to foster critical reflection on AI’s role in the modern world and what it means for our collective future.

AI is no longer the stuff of science fiction. It is embedded in nearly every aspect of our lives, from the virtual assistants on our smartphones to the algorithms that recommend what to watch on Netflix or determine our eligibility for a bank loan. In medicine, AI is revolutionizing diagnostics and treatments, enabling the early detection of cancer and the personalization of therapies based on a patient’s genome. In education, adaptive learning platforms are democratizing access to knowledge by tailoring instruction to each student’s pace.

These advancements are undeniably impressive. AI promises a more efficient, safer, and fairer world. But is this promise being fulfilled? Or are we inadvertently creating new forms of inequality, where the benefits of technology are concentrated among a privileged few while others are left behind?

One of AI’s most pressing challenges is its impact on employment. Automation is eliminating jobs across various sectors, including manufacturing, services, and even traditionally “safe” fields such as law and accounting. Meanwhile, workforce reskilling is not keeping pace with technological disruption. The result? A growing divide between those equipped with the skills to thrive in the AI-driven era and those displaced by machines.

Another urgent concern is privacy. AI relies on vast amounts of data, and the massive collection of personal information raises serious questions about who controls these data and how they are used. We live in an era where our habits, preferences, and even emotions are continuously monitored and analyzed. This not only threatens our privacy but also opens the door to subtle forms of manipulation and social control.

Then, there is the issue of algorithmic bias. AI is only as good as the data it is trained on. If these data reflect existing biases, AI can perpetuate and even amplify societal injustices. We have already seen examples of this, such as facial recognition systems that fail to accurately identify individuals from minority groups or hiring algorithms that inadvertently discriminate based on gender. Far from being neutral, AI can become a tool of oppression if not carefully regulated.

Who Decides What Is Right?

AI forces us to confront profound ethical questions. When a self-driving car must choose between hitting a pedestrian or colliding with another vehicle, who decides the “right” choice? When AI is used to determine parole eligibility or distribute social benefits, how do we ensure these decisions are fair and transparent?

The reality is that AI is not just a technical tool—it is also a moral one. The choices we make today about how we develop and deploy AI will shape the future of humanity. But who is making these decisions? Currently, AI’s development is largely in the hands of big tech companies and governments, often without sufficient oversight from civil society. This is concerning because AI has the potential to impact all of us, regardless of our individual consent.

A Utopia or a Dystopia?

The future of AI remains uncertain. On one hand, we have the potential to create a technological utopia, where AI frees us from mundane tasks, enhances productivity, and allows us to focus on what truly matters: creativity, human connection, and collective well-being. On the other hand, there is the risk of a dystopia where AI is used to control, manipulate, and oppress—dividing society between those who control technology and those who are controlled by it.

The key to avoiding this dark scenario lies in regulation and education. We need robust laws that protect privacy, ensure transparency, and prevent AI’s misuse. But we also need to educate the public on the risks and opportunities of AI so they can make informed decisions and demand accountability from those in power.

Artificial Intelligence is, indeed, the Holy Grail of Technology. But unlike the medieval legend, this Grail is not hidden in a distant castle—it is in our hands, here and now. It is up to us to decide how we use it. Will AI be a tool for building a more just and equitable future, or will it become a weapon that exacerbates inequalities and threatens our freedom?

The answer depends on all of us. As citizens, we must demand transparency and accountability from those developing and implementing AI. As a society, we must ensure that the benefits of this technology are shared by all, not just a technocratic elite. And above all, we must remember that technology is not an end in itself but a means to achieve human progress.

The future of AI is the future we choose to build. And at this critical moment in history, we cannot afford to get it wrong. The Holy Grail is within our reach—but its true value will only be realized if we use it for the common good.

__
Copyright © 2025, Henrique Jorge

[ This article was originally published in Portuguese in SAPO’s technology section at: https://tek.sapo.pt/opiniao/artigos/o-santo-graal-da-tecnologia ]

In early February, an Australian man in his 40s became the first person in the world to leave hospital with a virtually unbreakable heart made of metal.

‘Beating’ in his chest was a titanium pump about the size of a fist. For 105 days, the metal organ’s levitating propeller pushed blood to the man’s lungs and kept him alive as he went about his usual business.

On March 6, when a human donor heart became available, the man’s titanium heart was swapped out for the real thing. Doctors say without the metal stop-gap, the patient’s real heart would have failed before a donor became available.

In today’s AI news, believe it or not, AI is alive and well, and it’s clearly going to change a lot of things forever. My personal epiphany happened just the other day, while I was “vibe coding” a personal software project. Those of us who have never written a line of code in our lives, but create software programs and applications using AI tools like Bolt or Lovable, are called vibe coders.

Then, Anthropic’s CEO Dario Amodei is worried that spies, likely from China, are getting their hands on costly “algorithmic secrets” from the top U.S. AI companies — and he wants the U.S. government to step in. Speaking at a Council on Foreign Relations event on Monday, Amodei said that China is known for its “large-scale industrial espionage” and that AI companies like Anthropic are almost certainly being targeted.

Meanwhile, despite all the hype, very few people have had a chance to use Manus. Currently, under 1% of the users on the wait list have received an invite code. It’s unclear how many people are on this list, but for a sense of how much interest there is, Manus’s Discord channel has more than 186,000 members. MIT Technology Review was able to obtain access to Manus, and they gave it a test-drive.

In videos, join Palantir CEO Alexander Karp with New York Times DealBook creator Andrew Ross Sorkin on the promises and peril of Silicon Valley, tech’s changing relationship with Washington, and what it means for our future — and his new book, The Technological Republic. Named “Best CEO of 2024” by The Economist, Alexander Karp is a vital player in Silicon Valley as the CEO of Palantir.

Then, Piers Linney, Co-founder of Implement AI, discusses how artificial intelligence and automation can be maximized across businesses on CNBC International Live. Linney says AI poses a threat to the highest income knowledge workers around the world.

Meanwhile, Nate B. Jones is back with some commentary on how OpenAI has launched a new API aimed at helping developers build AI agents, but its strategic impact remains unclear. While enterprises with strong LLM expertise are already using tools like LangChain effectively, smaller teams struggle with agent complexity. Nate says, despite being a high-quality API, it lacks a distinct differentiator beyond OpenAI’s own ecosystem.

We close out with, Celestial AI CEO Dave Lazovsky outlining how their “Photonic Fabric” technology helps scale AI, as the company raises $250 million in its latest funding round, valuing the company at $2.5 billion. That’s all for today, but AI is moving fast — subscribe.