
For the first time, a team of researchers at Lawrence Livermore National Laboratory (LLNL) quantified and rigorously studied the effect of metal strength on accurately modeling coupled metal/high explosive (HE) experiments, shedding light on an elusive variable in an important model for national security and defense applications.

The team used a Bayesian approach to quantify metal strength uncertainty with tantalum and two common explosive materials and integrated it into a coupled metal/HE simulation. Their findings could lead to more accurate models for equation-of-state studies, which assess the state of matter a material exists in under different conditions. Their paper, featured as an editor's pick in the Journal of Applied Physics, also suggested that metal strength uncertainty may have an insignificant effect on results.

“There has been a long-standing field lore that HE model calibrations are sensitive to the metal strength,” said Matt Nelms, the paper’s first author and a group leader in LLNL’s Computational Engineering Division (CED). “By using a rigorous Bayesian approach, we found that this is not the case, at least when using tantalum.”
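To make the Bayesian workflow concrete, here is a minimal sketch of the kind of inference loop involved: calibrating a single strength-like parameter of a toy forward model against synthetic data with a random-walk Metropolis sampler. This is not LLNL's code; the forward model, prior bounds, and noise level are all hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy forward model: predicted observable as a function of a single
# "strength" parameter. A stand-in for a coupled metal/HE simulation.
def forward_model(strength, times):
    return 1.0 / (1.0 + strength * times)

# Synthetic "experiment": true strength = 0.8, plus measurement noise.
times = np.linspace(0.1, 2.0, 20)
noise_sigma = 0.02
observed = forward_model(0.8, times) + rng.normal(0.0, noise_sigma, times.size)

def log_posterior(strength):
    if strength <= 0.0 or strength > 5.0:              # flat prior on (0, 5]
        return -np.inf
    resid = observed - forward_model(strength, times)
    return -0.5 * np.sum((resid / noise_sigma) ** 2)   # Gaussian likelihood

# Random-walk Metropolis sampler over the strength parameter.
samples, current = [], 1.0
current_lp = log_posterior(current)
for _ in range(20000):
    proposal = current + rng.normal(0.0, 0.05)
    proposal_lp = log_posterior(proposal)
    if np.log(rng.uniform()) < proposal_lp - current_lp:
        current, current_lp = proposal, proposal_lp
    samples.append(current)

posterior = np.array(samples[5000:])                   # discard burn-in
print(f"strength ~ {posterior.mean():.3f} +/- {posterior.std():.3f}")
```

In the actual study the forward model is a coupled metal/HE simulation and the resulting posterior feeds into the HE model calibration; the sketch only shows the shape of the inference loop.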

However, despite these advances, human progress is never without risks. Therefore, we must address urgent challenges, including the lack of transparency in algorithms, potential intrinsic biases and the possibility of AI usage for destructive purposes.

Philosophical And Ethical Implications

The singularity and transcendence of AI could imply a radical redefinition of the relationship between humans and technology in our society. A key question that may arise in this context is, “If AI surpasses human intelligence, who—or what—should make critical decisions about the planet’s future?” Looking even further, the concretization of transcendent AI could challenge the very concept of the soul, prompting theologians, philosophers and scientists to reconsider the basic foundations of beliefs established over centuries of human history.

Recent research demonstrates that brain organoids can indeed “learn” and perform tasks, thanks to AI-driven training techniques inspired by neuroscience and machine learning. AI technologies are essential here, as they decode complex neural data from the organoids, allowing scientists to observe how they adjust their cellular networks in response to stimuli. These AI algorithms also control the feedback signals, creating a biofeedback loop that allows the organoids to adapt and even demonstrate short-term memory (Bai et al. 2024).

One technique central to AI-integrated organoid computing is reservoir computing, a model traditionally used in silicon-based computing. In an open-loop setup, AI algorithms interact with organoids that serve as the “reservoir,” processing input signals and dynamically adjusting their responses. By interpreting these responses, researchers can classify, predict, and understand how organoids adapt to specific inputs, suggesting the potential for simple computational processing within a biological substrate (Kagan et al. 2023; Aaser et al. n.d.).
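Reservoir computing itself is easy to demonstrate in software. Below is a minimal sketch of a conventional echo state network, the silicon-based form the paragraph mentions: a fixed random recurrent "reservoir" transforms an input signal, and only a linear readout is trained (here by ridge regression) for one-step-ahead prediction. The signal, reservoir size, and task are illustrative choices, not the organoid experiments in the cited work.

```python
import numpy as np

rng = np.random.default_rng(42)

# Input signal: a sine wave; the task is one-step-ahead prediction.
t = np.linspace(0, 20 * np.pi, 2000)
u = np.sin(t)

n_reservoir = 200
W_in = rng.uniform(-0.5, 0.5, (n_reservoir, 1))         # input weights (fixed)
W = rng.uniform(-0.5, 0.5, (n_reservoir, n_reservoir))   # recurrent weights (fixed)
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))          # keep spectral radius < 1

# Drive the reservoir and collect its internal states.
states = np.zeros((len(u), n_reservoir))
x = np.zeros(n_reservoir)
for i, val in enumerate(u):
    x = np.tanh(W_in[:, 0] * val + W @ x)
    states[i] = x

# Train only the linear readout (ridge regression) to predict u[i+1] from state i.
washout = 100
X = states[washout:-1]
y = u[washout + 1:]
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_reservoir), X.T @ y)

pred = X @ W_out
print(f"readout RMSE: {np.sqrt(np.mean((pred - y) ** 2)):.4f}")
```

In the organoid setups described above, the living tissue plays the role of the fixed random network, and the trained readout is replaced by AI-driven analysis of the recorded neural activity.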

Simulation Metaphysics extends beyond the conventional Simulation Theory, framing reality not merely as an arbitrary digital construct but as an ontological stratification. In this self-simulating, cybernetic manifold, the fundamental fabric of existence is computational, governed by algorithmic processes that generate physical laws and emergent minds. Under such a novel paradigm, the universe is conceived as an experiential matrix, an evolutionary substrate where the evolution of consciousness unfolds through nested layers of intelligence, progressively refining its self-awareness.


The research team, led by Professor Tobin Filleter, has engineered nanomaterials that offer an unprecedented combination of strength, light weight, and customizability. These materials are composed of tiny building blocks, or repeating units, measuring just a few hundred nanometers – so small that over 100 lined up would barely match the thickness of a human hair.

The researchers used a multi-objective Bayesian optimization machine learning algorithm to predict optimal geometries for enhancing stress distribution and improving the strength-to-weight ratio of nano-architected designs. The algorithm only needed 400 data points, whereas others might need 20,000 or more, allowing the researchers to work with a smaller, high-quality data set. The Canadian team collaborated with Professor Seunghwa Ryu and PhD student Jinwook Yeo at the Korea Advanced Institute of Science & Technology (KAIST) for this step of the process.

This was the first time scientists had applied machine learning to optimize nano-architected materials. According to Peter Serles, the lead author of the project’s paper published in Advanced Materials, the team was shocked by the improvements. The algorithm didn’t just replicate successful geometries from the training data; it learned from which changes to the shapes worked and which didn’t, enabling it to predict entirely new lattice geometries.
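The sample-efficiency claim can be illustrated with a single-objective stand-in: a Gaussian process surrogate plus an expected-improvement acquisition over a made-up two-parameter "geometry score." This is a sketch of the general Bayesian optimization recipe, not the team's multi-objective code, and every function and parameter below is hypothetical.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(1)

# Hypothetical objective: a stand-in "strength-to-weight score" over two
# normalized geometry parameters (e.g. strut angle and thickness ratio).
def score(x):
    return -((x[0] - 0.6) ** 2 + (x[1] - 0.3) ** 2) + 0.05 * np.sin(8 * x[0])

# Start from a handful of random designs, mimicking a small initial data set.
X = rng.uniform(0, 1, (8, 2))
y = np.array([score(x) for x in X])

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)

def expected_improvement(candidates, y_best):
    mu, sigma = gp.predict(candidates, return_std=True)
    sigma = np.maximum(sigma, 1e-9)
    z = (mu - y_best) / sigma
    return (mu - y_best) * norm.cdf(z) + sigma * norm.pdf(z)

# Bayesian optimization loop: fit the surrogate, pick the candidate that
# maximizes expected improvement, evaluate it, and repeat.
for _ in range(30):
    gp.fit(X, y)
    candidates = rng.uniform(0, 1, (2000, 2))
    ei = expected_improvement(candidates, y.max())
    x_next = candidates[np.argmax(ei)]
    X = np.vstack([X, x_next])
    y = np.append(y, score(x_next))

print(f"best design found: {X[np.argmax(y)]}, score {y.max():.4f}")
```

In the published work each evaluation is a simulation of a candidate lattice and the optimizer balances multiple objectives, which is exactly where getting by with roughly 400 evaluations instead of tens of thousands matters.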

In today’s AI news, OpenAI is announcing a new AI Agent designed to help people who do intensive knowledge work in areas like finance, science, policy, and engineering and need thorough, precise, and reliable research. It could also be useful for anyone making major purchases.

In what most would consider a halcyon time for AI, an anachronistic source has just added its two cents to the discourse around the AI revolution. The Vatican released a significant broadside addressing the potential and risks of AI in a new high-tech world. It’s a very interesting look at these new technologies, with a focus on human worth and human dignity.

In other advances, the one-person micro-enterprise is far from a novel concept. Cheap on-demand AI compute, remote collaboration, payment processing APIs, social media, and e-commerce marketplaces have all made it easier to “go it alone” as an entrepreneur. But what about scaling that business into something meatier: a one-person unicorn?

And, this morning, Brussels announced plans to develop an open source AI model of its own, with $56 million in funding to do it. The investment will fund top researchers from a handful of companies and universities across EU countries as they develop a large language model that can work with the trading bloc’s 30 languages.

In videos, Lex Fridman speaks with Dylan Patel, Founder of SemiAnalysis, a semiconductor research and analysis company, and Nathan Lambert, a research scientist at Allen Institute for AI (Ai2) and author of an AI blog called Interconnects. They all discuss DeepSeek, China, OpenAI, NVIDIA, xAI, TSMC, Stargate, and AI Megaclusters.

Can a machine feel love, hate or grief? How do we write our laws around A.I.? Would you let an algorithm run the government? Watch my newest video, where I attempt to answer these questions while introducing the concept of artificial intelligence philosophy: an area of study that could take on these and other mind-boggling questions in the future.


The idea of creating machines that can think and act like humans is steadily moving from fiction to reality. Humanoid robots, digital humans, ChatGPT, and driverless cars: there are already many AI-driven applications that surpass humans in speed, accuracy, efficiency and tirelessness, though so far only in narrow areas.
And yet, this gives us hope that we will see a real miracle in the near future: artificial intelligence equal to or surpassing human intelligence in every respect!
Can AI compare with us? Surpass us? Replace us? Deceive us and pursue its own goals? Today we will look at how a miracle of nature such as the human brain differs from the defining technology of the 21st century, artificial intelligence, and what prospects we have with AI in the future.

The journey of artificial intelligence (AI) is a captivating saga, dating back to 1956, when John McCarthy coined the term at the Dartmouth conference. Through the ensuing decades, AI witnessed three significant booms. Between the 1950s and 1970s, pioneers introduced groundbreaking perceptron networks and chat software. Though they foresaw AI surpassing human capabilities within a decade, this dream remained unfulfilled. By the 1980s, the second wave took shape, propelled by new machine learning techniques and neural networks, which promised innovations like speech recognition. Yet many of these promises fell short.

But the tide turned in 2006. Deep learning emerged, and by 2016, AI systems like AlphaGo were defeating world champions. The third boom began, reinforced by large language models like ChatGPT, igniting discussions about integrating AI with humanoid robots. Discover more about this fascinating trend in our linked issue.

Our progress in cognitive psychology, neuroscience, quantum physics, and brain research has heavily influenced AI’s trajectory. Especially significant is our understanding of the human brain, pushing the boundaries of neural network development. Can AI truly emulate human cognition?

Artificial consciousness is the next frontier in AI. While artificial intelligence has advanced tremendously, creating machines that can surpass human capabilities in certain areas, true artificial consciousness represents a paradigm shift—moving beyond computation into subjective experience, self-awareness, and sentience.

In this video, we explore the profound implications of artificial consciousness, the defining characteristics that set it apart from traditional AI, and the groundbreaking work being done by McGinty AI in this field. McGinty AI is pioneering new frameworks, such as the McGinty Equation (MEQ) and Cognispheric Space (C-space), to measure and understand consciousness levels in artificial and biological entities. These advancements provide a foundation for building truly conscious AI systems.

The discussion also highlights real-world applications, including QuantumGuard+, an advanced cybersecurity system utilizing artificial consciousness to neutralize cyber threats, and HarmoniQ HyperBand, an AI-powered healthcare system that personalizes patient monitoring and diagnostics.

However, as we venture into artificial consciousness, we must navigate significant technical challenges and ethical considerations. Questions about autonomy, moral status, and responsible development are at the forefront of this revolutionary field. McGinty AI integrates ethical frameworks such as the Rotary Four-Way Test to ensure that artificial consciousness aligns with human values and benefits society.