What if our understanding of time as a linear sequence of events is merely an illusion created by the brain’s processing of reality? Could time itself be an emergent phenomenon, arising from the complex interplay of quantum mechanics, relativity, and consciousness? How might the brain’s multidimensional computations, reflecting patterns found in the universe, reveal a deeper connection between mind and cosmos? Is it possible that advancements in our understanding of temporal mechanics could one day make time travel a practical reality rather than a theoretical concept? Could Quantum AI and Reversible Quantum Computing provide the tools to simulate, manipulate, and even reshape the flow of time, offering practical applications of D-Theory that bridge the gap between theoretical physics and transformative technologies? These profound questions lie at the heart of Temporal Mechanics: D-Theory as a Critical Upgrade to Our Understanding of the Nature of Time, my 2025 paper and book. D-Theory, also referred to as Quantum Temporal Mechanics, Digital Presentism, and D-Series, challenges conventional views of time as a fixed, universal backdrop to reality and instead redefines it as a dynamic interplay between the mind and the cosmos.
Basic machine learning and its application in solid-state physics: an approach to identifying the crystalline structure of solids
Crystalline solids have a crystal structure that describes the spatial arrangement of atoms, ions, or molecules in the lattice. These structures are commonly determined by X-ray diffraction (XRD).
These crystal structures play an important role in determining many physical properties, such as the electronic band structure and cleavage, and they explain much of a solid's physical and chemical behavior.
This article discusses an approach to identifying these structures with machine learning and deep learning methods, demonstrating how supervised machine learning and deep learning approaches can help determine the crystal structures of solids.
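As a rough illustration of what such a supervised pipeline might look like, the sketch below trains a random-forest classifier on placeholder XRD-derived features (peak positions and relative intensities would be typical choices). The dataset, feature set, and crystal-system labels are stand-ins for illustration, not the method of any particular study.

```python
# Minimal sketch: supervised classification of crystal systems from
# XRD-derived features. All data here is synthetic placeholder data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)

# Placeholder dataset: each row would hold features extracted from an XRD
# pattern, e.g. positions and relative intensities of the strongest peaks.
n_samples, n_features = 600, 10
X = rng.normal(size=(n_samples, n_features))
# Hypothetical labels for three common crystal systems.
y = rng.integers(0, 3, size=n_samples)  # 0=cubic, 1=tetragonal, 2=hexagonal

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)

print(classification_report(
    y_test, clf.predict(X_test),
    target_names=["cubic", "tetragonal", "hexagonal"]
))
```

With real XRD features in place of the random placeholders, the same structure (feature extraction, train/test split, classifier, evaluation report) carries over directly; deep learning variants typically swap the random forest for a convolutional network applied to the raw diffraction pattern.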
Tohoku University scientists created lab-grown neural networks using microfluidic devices, mimicking natural brain activity and enabling advanced studies of learning and memory.
The phrase “Neurons that fire together, wire together” encapsulates the principle of neural plasticity in the human brain. However, neurons grown in a laboratory dish do not typically adhere to these rules. Instead, cultured neurons often form random, unstructured networks where all cells fire simultaneously, failing to mimic the organized and meaningful connections seen in a real brain. As a result, these in-vitro models provide only limited insights into how learning occurs in living systems.
What if, however, we could create in-vitro neurons that more closely replicate natural brain behavior?
Princeton engineers have developed a scalable 3D printing technique to produce soft plastics with customizable stretchiness and flexibility, while also being recyclable and cost-effective—qualities rarely combined in commercially available materials.
In a study published in Advanced Functional Materials, a team led by Emily Davidson detailed how they used thermoplastic elastomers—a class of widely available polymers—to create 3D-printed structures with adjustable stiffness. By designing the 3D printer’s print path, the engineers could program the plastic’s physical properties, allowing devices to stretch and flex in one direction while remaining rigid in another.
Davidson, an assistant professor of chemical and biological engineering, highlighted the potential applications of this technique in fields such as soft robotics, medical devices, prosthetics, lightweight helmets, and custom high-performance shoe soles.
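To make the print-path idea concrete, here is a minimal sketch of how a raster path with a fixed line direction might be generated as simplified G-code. The function, dimensions, and G-code dialect are illustrative assumptions, not the team's actual toolchain; the point is only that path geometry, not material chemistry, sets the directional stiffness.

```python
# Minimal sketch (illustrative only): emitting simplified G-code for a
# raster print path with a chosen line direction. Keeping every layer's
# raster aligned with one axis makes the printed part stiff along the
# lines and stretchier across them. All numbers are placeholders.
def raster_layer(size_mm: float, spacing_mm: float, z_mm: float,
                 along_x: bool) -> list[str]:
    """One back-and-forth raster layer as G1 extrusion moves."""
    moves = [f"G1 Z{z_mm:.2f}"]  # lift to the layer height
    n_lines = int(size_mm / spacing_mm) + 1
    for i in range(n_lines):
        offset = i * spacing_mm
        # Alternate travel direction so the nozzle sweeps back and forth.
        ends = (0.0, size_mm) if i % 2 == 0 else (size_mm, 0.0)
        for end in ends:
            x, y = (end, offset) if along_x else (offset, end)
            moves.append(f"G1 X{x:.2f} Y{y:.2f} E1")
    return moves

# A 20 mm square patch, four layers, all rastered along X: the patch
# should flex most easily in Y, across the deposited lines.
gcode: list[str] = []
for layer in range(4):
    gcode += raster_layer(20.0, 1.0, z_mm=0.2 * (layer + 1), along_x=True)
print("\n".join(gcode[:6]))
```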
As tech companies release a slew of generative AI updates, there's a growing risk that educational practices and policies will fail to keep up with the new capabilities.
2024: A year when AI, quantum computing, and cybersecurity converged to redefine our digital landscape. For those navigating these complex technological frontiers, clarity became the most critical currency.
Inside Cyber: key moments that resonated with our community:
• Cybersecurity Trends for 2025. Diving deep into the evolving threat landscape and strategic priorities.
• AI, 5G, and Quantum: Innovation and Cybersecurity Risks. Exploring the intersection of emerging technologies and security challenges: https://lnkd.in/ex3ktwuF
• PCI DSS v4.0 Compliance Strategies. Practical guidance for adapting to critical security standards: https://lnkd.in/eK_mviZd
Two robots, Levita’s Mars and Da Vinci SP, combined for a groundbreaking prostate removal surgery, advancing precision in minimally invasive care.
Scientists have found that future robots might be able to gauge how you are feeling just by touching your skin. According to a new study published in the journal IEEE Access, researchers used skin conductance to infer an individual's emotional state. Skin conductance is a measure of how well skin conducts electricity; it typically changes with sweat secretion and nerve activity, signaling different emotional states.
Traditional emotion-detection technologies, such as facial recognition and speech analysis, are often prone to error, especially in suboptimal audio-visual conditions. Scientists believe, however, that skin conductance offers a potential workaround, providing a non-invasive way to capture emotion in real time.
For the study, the emotional responses of 33 participants were measured by showing them emotionally evocative videos and measuring their skin conductance. The findings revealed distinct patterns for different emotions: fear responses were the longest-lasting, suggesting an evolutionary alert mechanism; family bonding emotions, a blend of happiness and sadness, showed slower responses; and humour triggered quick but fleeting reactions.
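As a rough sketch of how such responses might be quantified, the code below extracts two simple features, response duration and peak amplitude, from a sampled skin-conductance signal. The threshold, sampling rate, and synthetic signals are assumptions for illustration, not the study's published analysis.

```python
# Minimal sketch (placeholder thresholds, not the study's method):
# extracting duration and amplitude of a skin-conductance response.
import numpy as np

def scr_features(signal: np.ndarray, fs_hz: float) -> dict:
    """Duration and amplitude of the response above baseline."""
    baseline = np.median(signal)
    above = signal > baseline + 0.05  # placeholder threshold (microsiemens)
    if not above.any():
        return {"duration_s": 0.0, "amplitude_uS": 0.0}
    idx = np.flatnonzero(above)
    duration = (idx[-1] - idx[0]) / fs_hz
    amplitude = float(signal.max() - baseline)
    return {"duration_s": duration, "amplitude_uS": amplitude}

# Synthetic example: a broad, long-lasting response (as the study reports
# for fear) versus a brief spike (as reported for humour).
fs = 32.0
t = np.arange(0, 20, 1 / fs)
slow = 2.0 + 0.4 * np.exp(-((t - 8) / 4.0) ** 2)   # broad bump
quick = 2.0 + 0.4 * np.exp(-((t - 8) / 0.5) ** 2)  # narrow spike
print(scr_features(slow, fs))   # longer duration_s, as for fear
print(scr_features(quick, fs))  # shorter duration_s, as for humour
```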
Chinese AI startup DeepSeek has released what appears to be one of the most powerful open-source language models to date, trained at a cost of just $5.5 million using restricted Nvidia H800 GPUs.
Meta, Aitomatic, and other members of the AI Alliance have released the world’s first large language model specifically trained on the needs of the semiconductor industry.