A successful collaboration involving a trio of research institutions has yielded a roadmap toward an economically viable process for using enzymes to recycle plastics.
The researchers, from the National Renewable Energy Laboratory (NREL), the University of Massachusetts Lowell, and the University of Portsmouth in England, previously partnered on the biological engineering of improved PETase enzymes that can break down polyethylene terephthalate (PET). With its low manufacturing cost and excellent material properties, PET is used extensively in single-use packaging, soda bottles, and textiles.
The new study, published in Nature Chemical Engineering, combines previous fundamental research with advanced chemical engineering, process development, and techno-economic analysis to lay the blueprints for enzyme-based PET recycling at an industrial scale.
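The study's techno-economic analysis is easier to appreciate with a toy version of the calculation such analyses perform: estimating the cost of recovered monomer per kilogram from enzyme loading, enzyme price, feedstock price, and conversion yield. The sketch below is illustrative only; every number is a placeholder assumption, not a figure from the paper.

```python
def monomer_cost_usd_per_kg(
    enzyme_loading_mg_per_g_pet=2.0,   # hypothetical enzyme dose per gram of PET
    enzyme_price_usd_per_kg=25.0,      # hypothetical bulk enzyme price
    pet_feedstock_usd_per_kg=0.30,     # hypothetical price of baled post-consumer PET
    other_opex_usd_per_kg_pet=0.40,    # hypothetical energy, chemicals, and labor
    conversion_yield=0.90,             # hypothetical fraction of PET depolymerized
):
    """Toy cost of recovered monomer per kg. Ignores capital costs and the
    small mass change from hydrolysis; both are deliberate simplifications."""
    enzyme_cost = (enzyme_loading_mg_per_g_pet / 1000.0) * enzyme_price_usd_per_kg
    inputs_per_kg_pet = pet_feedstock_usd_per_kg + enzyme_cost + other_opex_usd_per_kg_pet
    return inputs_per_kg_pet / conversion_yield

print(f"${monomer_cost_usd_per_kg():.2f}/kg")  # ~$0.83/kg under these assumptions
```

A real techno-economic analysis layers capital expenditure, pretreatment, and product separations on top of a model like this, which is where most of the process-engineering work in the study lives.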
Greg Egan’s Diaspora is one of the most ambitious and mind-bending science fiction novels ever published. It came out in 1997 and began as a short story called “Wang’s Carpets,” which ended up as a chapter in the novel. Diaspora is dense, smart, and way ahead of its time. This is hard science fiction to the core: Egan invents entire new branches of physics and reimagines life, consciousness, time, space, even what it means to be human. The book doesn’t ease you in. There’s a glossary, invented physics theories like Kozuch Theory, and characters who don’t even have genders. But if you stick with it, what you get isn’t just a story; it’s a look at what the future might actually become.

By the year 2975, humanity isn’t one species anymore. It has split into three groups:

Fleshers: The biological humans, including the “statics” (unchanged baseline humans) and all sorts of heavily modified versions: underwater people, gene-hacked thinkers, even “dream apes” who gave up speech to live closer to nature.

Gleisners: AIs in robotic bodies that live in space. They care about the physical world and experience time like regular humans. They’re kind of old-school, still sending ships to the stars and trying to build things in real space.

Citizens: Digital minds that live entirely in simulated worlds called polises.
A better understanding of how the human brain represents objects that exist in nature, such as rocks, plants, animals, and so on, could have interesting implications for research in various fields, including psychology, neuroscience and computer science. Specifically, it could help shed new light on how humans interpret sensory information and complete different real-world tasks, which could also inform the development of artificial intelligence (AI) techniques that closely emulate biological and mental processes.
Multimodal large language models (LLMs), such as the latest models underpinning the popular conversational platform ChatGPT, have proven highly effective at analyzing and generating text in many human languages, as well as images and even short videos.
As the texts and images generated by these models are often very convincing, to the point that they could appear to be human-created content, multimodal LLMs could be interesting experimental tools for studying the underpinnings of object representations.
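The article does not describe a specific protocol, but one common way studies in this vein probe an LLM's object representations is a triplet odd-one-out task: present three objects, ask which is least like the other two, and aggregate the choices into a similarity structure that can be compared with human judgments. A minimal, provider-agnostic sketch, where `ask_llm` is a hypothetical stand-in for a real chat-API call:

```python
import itertools
from collections import Counter

def ask_llm(prompt: str) -> str:
    # Hypothetical stand-in for a call to any multimodal/chat LLM API.
    # A canned answer keeps the sketch runnable offline; replace this
    # with a real provider call to run an actual probe.
    return "sparrow"

def odd_one_out(objects):
    """Ask the model which of three objects is least like the other two."""
    a, b, c = objects
    prompt = (f"Which of these is least like the other two: "
              f"{a}, {b}, or {c}? Answer with one word.")
    return ask_llm(prompt).strip().lower()

# Aggregating choices over many triplets yields a similarity structure
# that can be compared against human judgments on the same triplets.
objects = ["rock", "fern", "sparrow", "granite"]
votes = Counter(odd_one_out(t) for t in itertools.combinations(objects, 3))
print(votes)
```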
Deep learning models, such as convolutional neural networks (CNNs) and recurrent neural networks (RNNs), are designed to partly emulate the functioning and structure of biological neural networks. As a result, in addition to tackling various real-world computational problems, they could help neuroscientists and psychologists better understand the underpinnings of specific sensory or cognitive processes.
Researchers at Osnabrück University, Freie Universität Berlin and other institutes recently developed a new class of artificial neural networks (ANNs) that could mimic the human visual system better than CNNs and other existing deep learning algorithms. Their newly proposed, visual system-inspired computational techniques, dubbed all-topographic neural networks (All-TNNs), are introduced in a paper published in Nature Human Behaviour.
“Previously, the most powerful models for understanding how the brain processes visual information were derived from AI vision models,” Dr. Tim Kietzmann, senior author of the paper, told Tech Xplore.
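The paper's full architecture is beyond this summary, but one ingredient reported for All-TNNs can be sketched: a convolution-like layer without weight sharing, so that every spatial position learns its own filters, trained with a smoothness penalty that pushes neighboring units toward similar weights and thereby produces cortex-like topographic maps. The PyTorch sketch below is a simplified illustration under those assumptions, not the authors' exact model:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LocallyConnected2d(nn.Module):
    """Convolution-like layer WITHOUT weight sharing: every output
    position learns its own filter bank (simplified illustration)."""
    def __init__(self, in_ch, out_ch, in_size, kernel, stride=1):
        super().__init__()
        self.kernel, self.stride = kernel, stride
        self.out_size = (in_size - kernel) // stride + 1
        n_locations = self.out_size * self.out_size
        self.weight = nn.Parameter(
            0.01 * torch.randn(n_locations, out_ch, in_ch * kernel * kernel))

    def forward(self, x):
        # Patches: (batch, in_ch * kernel * kernel, n_locations)
        patches = F.unfold(x, self.kernel, stride=self.stride)
        # Each location l applies its own weights: no sharing across space.
        out = torch.einsum('bpl,lop->blo', patches, self.weight)
        return out.permute(0, 2, 1).reshape(
            x.shape[0], -1, self.out_size, self.out_size)

def smoothness_loss(weight, out_size):
    """Penalty pushing spatially adjacent units toward similar weights,
    which encourages smooth, cortex-like feature maps (assumed form)."""
    w = weight.view(out_size, out_size, -1)
    return ((w[1:] - w[:-1]).pow(2).mean()
            + (w[:, 1:] - w[:, :-1]).pow(2).mean())
```

A training objective would then look like `task_loss + lam * smoothness_loss(layer.weight, layer.out_size)`; the published All-TNNs' layer sizes, loss weighting, and other details differ.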
Does the use of computer models in physics change the way we see the universe? How far-reaching are the implications of computational irreducibility? Are observer limitations key to the way we conceive the laws of physics?

In this episode we take on the difficult yet beautiful topic of modelling complex systems like nature and the universe computationally, and how, beyond a low level of complexity, all systems seem to become equally unpredictable. We have a whole episode in this series on complexity theory in biology and nature, but today we take a more physics- and computation-oriented slant. Another key element of this episode is observer theory: we have to take into account the perceptual limitations of our species’ context and perspective if we want to understand why the laws of physics we have worked out from our environment are not, and cannot be, fixed and universal, but will always be perspective-bound, within a multitude of alternative branches of possible reality with alternative possible computational rules. We’ll then connect this multicomputational approach to a reinterpretation of entropy and the second law of thermodynamics.

The fact that my guest has been building on these ideas for over 40 years, creating computer languages and AI solutions to map his deep theories of computational physics, makes him the ideal guest to help us unpack this topic. He is physicist, computer scientist, and tech entrepreneur Stephen Wolfram. In 1987 he left academia at Caltech and Princeton behind and devoted himself to his computer science intuitions at his company, Wolfram Research. He has published many blog articles about his ideas and written many influential books, including “A New Kind of Science,” more recently “A Project to Find the Fundamental Theory of Physics” and “Computer Modelling and Simulation of Dynamic Systems,” and, just out in 2023, “The Second Law,” about the mystery of entropy. One of the most wonderful things about Stephen Wolfram is that, despite his visionary insight into reality, he loves to be ‘in the moment’ with his thinking, engaging in Socratic dialogue, staying open to perspectives other than his own, and allowing his old ideas to be updated if something comes up that contradicts them. Given how quickly the fields of physics and computer science are evolving, his humility and conceptual flexibility give us a fine example of how we should update how we do science as we go.
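Wolfram's canonical example of computational irreducibility is the Rule 30 cellular automaton: a three-cell update rule simple enough to state in one line, whose long-run pattern appears to admit no shortcut; you have to run every step to know the outcome. A minimal Python sketch:

```python
# Rule 30 elementary cellular automaton. The update rule fits in one line,
# yet the only known way to obtain row N is to compute all N-1 rows first:
# a concrete instance of computational irreducibility.
RULE = 30
WIDTH, STEPS = 79, 40

row = [0] * WIDTH
row[WIDTH // 2] = 1  # start from a single live cell in the middle

for _ in range(STEPS):
    print("".join("#" if c else " " for c in row))
    # Each new cell is a bit of RULE indexed by its 3-cell neighborhood.
    row = [(RULE >> ((row[(i - 1) % WIDTH] << 2)
                     | (row[i] << 1)
                     | row[(i + 1) % WIDTH])) & 1
           for i in range(WIDTH)]
```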
What we discuss:
00:00 Intro.
07:45 The history of scientific models of reality: structural, mathematical and computational.
14:40 Late 2010s: a shift to computational models of systems.
20:20 The Principle of Computational Equivalence (PCE).
24:45 Computational irreducibility: the process that means you can’t predict the outcome in advance.
27:50 The importance of the passage of time to consciousness.
28:45 Irreducibility and the limits of science.
33:30 Gödel’s incompleteness theorem meets computational irreducibility.
42:20 Observer theory and the Wolfram Physics Project.
45:30 Modelling the relations between discrete units of space: hypergraphs.
47:30 The progress of time is the computational process that is updating the network of relations.
50:30 We ‘make’ space.
51:30 Branchial space: different quantum histories of the world, branching and merging.
54:30 We perceive space and matter to be continuous because we’re very big compared to the discrete elements.
56:30 Branchial space vs. the Many Worlds interpretation.
58:50 Rulial space: all possible rules of all possible interconnected branches.
01:07:30 Wolfram Language bridges human thinking about their perspective with what is computationally possible.
01:11:00 Computational intelligence is everywhere in the universe, e.g. the weather.
01:19:30 The measurement problem of QM meets computational irreducibility and observer theory.
01:20:30 Entanglement explained: common ancestors in branchial space.
01:32:40 Inviting Stephen back for a separate episode on AI safety, safety solutions and applications for science, as we didn’t have time.
01:37:30 At the molecular level the laws of physics are reversible.
01:40:30 What looks random to us in entropy is actually full of the data.
01:45:30 Entropy defined in computational terms.
01:50:30 If we ever overcame our finite minds, there would be no coherent concept of existence.
01:51:30 Parallels between modern physics and ancient Eastern mysticism and cosmology.
01:55:30 Reductionism in an irreducible world: saying a lot from very little input.
References: “The Second Law: Resolving the Mystery of the Second Law of Thermodynamics”, Stephen Wolfram.
Early brain development is a biological black box. While scientists have devised multiple ways to record electrical signals in adult brains, these techniques don’t work for embryos.
A team at Harvard has now managed to peek into the box—at least when it comes to amphibians and rodents. They developed an electrical array using a flexible, tofu-like material that seamlessly embeds into the early developing brain. As the brain grows, the implant stretches and shifts, continuously recording individual neurons without harming the embryo.
“There is just no ability currently to measure neural activity during early neural development. Our technology will really enable an uncharted area,” said study author Jia Liu in a press release.
A research team led by Professor Huang Xingjiu at the Hefei Institutes of Physical Science of the Chinese Academy of Sciences has developed a highly stable adaptive integrated interface for ion sensing. The study was published as an inside front cover article in Advanced Materials.
The all-solid-state ion-selective electrode is a fundamental component in the ion sensing of intelligent biological and chemical sensors. While the researchers had previously developed several transducer materials with a sandwich-type interface to detect common ions, the performance of such sensors was often limited by the interface material and structure.
To overcome these challenges, the team introduced a novel interface using lipophilic molybdenum disulfide (MoS₂) regulated by cetyltrimethylammonium (CTA⁺). This structure enables spatiotemporal adaptive integration: assembling a single-piece sensing layer atop an efficient transduction layer.
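For context on what any ion-selective electrode, including this one, ultimately measures: its potential ideally follows the Nernst equation, so calibration converts a measured voltage into ion activity. A minimal sketch of that textbook relationship in Python (the constants are standard; the E0 value and example reading are hypothetical):

```python
import math

R, F = 8.314, 96485.0  # gas constant (J/(mol*K)) and Faraday constant (C/mol)

def nernst_slope_v_per_decade(z, temp_c=25.0):
    """Ideal electrode slope, 2.303*R*T/(z*F): about 59.2 mV per decade
    of ion activity for a monovalent ion (z=1) at 25 degrees C."""
    return math.log(10) * R * (temp_c + 273.15) / (z * F)

def activity_from_potential(e_meas, e0, z, temp_c=25.0):
    """Invert E = E0 + slope * log10(a) to recover ion activity a.
    e0 is the electrode's calibrated standard potential (hypothetical)."""
    return 10 ** ((e_meas - e0) / nernst_slope_v_per_decade(z, temp_c))

# Hypothetical example: a monovalent-cation electrode reading 0.0817 V
# against a calibrated E0 of 0.2000 V implies an activity near 1e-2.
print(activity_from_potential(0.0817, 0.2000, z=1))
```

A stable interface matters precisely because drift in E0 at the solid contact is a classic failure mode of all-solid-state designs, which is what the adaptive MoS₂/CTA⁺ layer is meant to address.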
A comprehensive analysis of the cell-specific molecular regulatory mechanisms underlying post-traumatic stress disorder in the human prefrontal cortex.