As director of the MIT BioMicro Center (BMC), Stuart Levine ’97 wholeheartedly embraces the variety of challenges he tackles each day. One of over 50 core facilities providing shared resources across the Institute, the BMC supplies integrated high-throughput genomics, single-cell and spatial transcriptomic analysis, bioinformatics support, and data management to researchers across MIT. The BioMicro Center is part of the Integrated Genomics and Bioinformatics core facility at the Robert A. Swanson (1969) Biotechnology Center.
“Every day is a different day,” Levine says. “There are always new problems, new challenges, and the technology is continuing to move at an incredible pace.” After more than 15 years in the role, Levine is grateful that the breadth of his work allows him to seek solutions for so many scientific problems.
By combining bioinformatics expertise with biotech relationships and a focus on maximizing the impact of the center’s work, Levine brings the broad range of skills required to match the diversity of questions asked by investigators in MIT’s Department of Biology and Koch Institute for Integrative Cancer Research, as well as researchers across MIT’s campus.
While early language models could only process text, contemporary large language models now perform highly diverse tasks on different types of data. For instance, LLMs can understand many languages, generate computer code, solve math problems, or answer questions about images and audio.
MIT researchers probed the inner workings of LLMs to better understand how they process such assorted data, and found evidence that they share some similarities with the human brain.
Neuroscientists believe the human brain has a “semantic hub” in the anterior temporal lobe that integrates semantic information from various modalities, like visual data and tactile inputs. This semantic hub is connected to modality-specific “spokes” that route information to the hub. The MIT researchers found that LLMs use a similar mechanism by abstractly processing data from diverse modalities in a central, generalized way. For instance, a model that has English as its dominant language would rely on English as a central medium to process inputs in Japanese or reason about arithmetic, computer code, etc. Furthermore, the researchers demonstrate that they can intervene in a model’s semantic hub by using text in the model’s dominant language to change its outputs, even when the model is processing data in other languages.
MIT Associate Professor Bin Zhang takes a computational approach to studying the 3D structure of the genome: He uses computer simulations and generative AI to understand how a 2-meter-long string of DNA manages to fit inside a cell’s nucleus.
What if, on the condition that machine superintelligence is possible, the first one to come into existence sends out von Neumann machines that convert solar systems into computers of comparable power and intelligence? Such machines would be factories miles long, and they in turn would do the same, until the entire galaxy became an artificially intelligent entity procreating Matrioshka brains.
Adi Newton’s track from the compilation “The Neuromancers. Music inspired by William Gibson’s universe,” published by Unexplained Sounds Group: https://unexplainedsoundsgroup.bandca… (dl, cd, book).

Music by: Adi Newton, NYORAI, Oubys (Wannes Kolf), Mario Lino Stancati, Joel Gilardini, Tescon Pol, phoanøgramma, Dead Voices On Air, SIGILLUM S, Richard Bégin, André Uhl.

Stories by: Andrew Coulthard, Chris McAuley, Glynn Owen Barrass, J. Edwin Buja, Michael F. Housel, Paolo L. Bandera, Rusell Smeaton, Scott J. Couturier.

THE SOUNDTRACK OF A FUTURE IN FLUX

As the father of cyberpunk, William Gibson imagined a world where technology and society collide, blurring the boundaries between human and machine, individual and system. His novels, particularly Neuromancer, painted a dystopian future where sprawling megacities pulse with neon, corporations rule from the shadows, and cyberspace serves as both playground and battlefield. In his vision, technology is a tool of empowerment and control, a paradox that resonates deeply in our contemporary world.

Gibson’s work has long since transcended literature, becoming a blueprint for how we understand technology’s role in shaping our lives. The term cyberspace, which he coined, feels more real than ever in today’s internet-driven world. We live in a time where virtual spaces are as important as physical ones, where our identities shift between digital avatars and flesh-and-blood selves. The rapid rise of AI, neural interfaces, and virtual reality feels like a prophecy fulfilled, as though we’ve stepped into the pages of a Gibson novel.

A SONIC LANDSCAPE OF THE FUTURE

The influence of cyberpunk on contemporary music is undeniable. The genre’s aesthetic, with its dark, neon-lit streets and synth-driven soundscapes, has found its way into countless genres, from techno and industrial to synthwave and ambient.
Electronic music, in particular, feels like the natural soundtrack of the cyberpunk world: synthetic, futuristic, and often eerie, it evokes the idea of a humanity at the edge of a technological abyss. The cyberpunk universe forces us to confront uncomfortable truths about the way we live today: the increasing corporatization of our world, the erosion of privacy, and the creeping sense that technology is evolving faster than we can control it. Though cyberpunk as a literary genre originated in the 1980s, its influence has only grown in the decades since. In music, the cyberpunk ethos is more relevant than ever. Artists today are embracing the tools of technology not just to create new sounds, but to challenge the very definition of music itself.

THE FUTURE OF MUSIC IN A CYBERPUNK WORLD

Much like Gibson’s writing, the music in this compilation embraces technology not only as a tool but as a medium of expression. It’s no coincidence that many of the artists featured here draw from the electronic, industrial, and experimental music scenes, genres that have consistently pushed the boundaries of sound and technology. The contributions of Adi Newton, a pioneering figure in cyberpunk music, along with artists such as Dead Voices On Air, Sigillum S, Tescon Pol, Oubys, Joel Gilardini, phoanøgramma, Richard Bégin, Mario Lino Stancati, Nyorai, Wahn, and André Uhl, each capture unique facets of the cyberpunk universe. Their work spans from the gritty, rebellious underworlds of hackers, to the cold, calculated precision of AI, to the vast, sprawling virtual landscapes where anything is possible and everything is controlled. These tracks serve as a sonic exploration of Gibson’s vision, translating the technological, dystopian landscapes of his novels into sound. They are both a tribute and a challenge, asking us to reflect on what it means to be human in a world where technology has permeated every corner of our existence.
Just as Gibson envisioned a future where humanity and machines converge, the artists in this compilation fuse organic and synthetic sounds, analog and digital techniques, to evoke the tensions of the world he foretold.

Curated and mastered by Raffaele Pezzella (Sonologyst). Layout by Matteo Mariano. Cat. Num. USG105.

Unexplained Sounds Network labels:
https://unexplainedsoundsgroup.bandca…
https://eighthtowerrecords.bandcamp.com
https://sonologyst.bandcamp.com
https://therecognitiontest.bandcamp.com
https://zerok.bandcamp.com
https://reversealignment.bandcamp.com

Magazine and radio (Music, Fiction, Modern Mythologies): / eighthtower
Cortical and subcortical activity can be parsimoniously understood as resulting from excitations of fundamental, resonant modes of the brain’s geometry rather than from modes of complex interregional connectivity.
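The idea that activity is better explained by resonant modes of geometry than by connectivity can be illustrated with a toy computation (my own sketch, not the study's actual method): compute the eigenmodes of a discrete Laplacian on a simple 1-D "cortical strip," then reconstruct a smooth activity pattern from just a handful of low-frequency modes.

```python
import numpy as np

# Toy 1-D domain: the graph Laplacian of a 200-point strip stands in for
# the Laplace operator on cortical geometry. Its eigenvectors are the
# resonant (standing-wave) modes of the domain.
n = 200
L = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

evals, evecs = np.linalg.eigh(L)   # eigenvalues ascending: low modes first

# A smooth synthetic "activity pattern" on the strip.
x = np.linspace(0, 1, n)
activity = np.sin(np.pi * x) + 0.3 * np.sin(3 * np.pi * x)

# Project onto the mode basis and reconstruct from only the 10 lowest modes.
coeffs = evecs.T @ activity
recon = evecs[:, :10] @ coeffs[:10]

err = np.linalg.norm(activity - recon) / np.linalg.norm(activity)
print(f"relative error using 10 of {n} modes: {err:.4f}")
```

The point of the sketch is that a smooth pattern is captured almost perfectly by a few geometric eigenmodes, which is the parsimony argument in miniature; real analyses use eigenmodes of the actual cortical surface, not a 1-D strip.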
Tesla is preparing to launch an affordable vehicle and a robotaxi service, highlighted by the upcoming Project Alicorn software update and the new long-range Model Y, aimed at enhancing the user experience and meeting market demand.

Questions to inspire discussion

Tesla’s New Affordable Vehicle.
🚗 Q: What are the key features of Tesla’s upcoming affordable vehicle? A: Expected to launch in the first half of 2024, it will be a lower, more compact version of the Model Y, possibly a hatchback, with a starting price of $44,990 in the US.
🏎️ Q: How does the new rear-wheel-drive Model Y compare to previous models? A: It offers 20 more miles of range, a faster 0–60 time, and all-new features such as improved speakers and sound system, making it a bargain at $44,990.

Robotaxi Functionality.
🤖 Q: What is Tesla’s robotaxi project called and what features will it have? A: Called Project Alicorn, it will let users confirm a pickup, enter a destination, fasten seatbelts, pull over, cancel a pickup, and access emergency help.
📱 Q: What additional features are coming to the robotaxi app? A: Upcoming features include smart summon without a continuous press, live activities, a trip summary screen, and the ability to close the trunk, rate the ride, and get help outside the service area.
🚕 Q: How might Tesla expand its robotaxi service to non-driverless markets? A: The app includes a “call driver” button, potentially allowing non-driverless markets to join the ride-share network, though this strategy is unclear.

CyberCab Production.
Such questions quickly run into the limits of knowledge for both biology and computer science. To answer them, we need to figure out what exactly we mean by “information” and how that’s related to what’s happening inside cells. In attempting that, I will lead you through a frantic tour of information theory and molecular biology. We’ll meet some strange characters, including genomic compression algorithms based on deep learning, retrotransposons, and Kolmogorov complexity.
Ultimately, I’ll argue that the intuitive idea of information in a genome is best captured by a new definition of a “bit” — one that’s unknowable with our current level of scientific knowledge.
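One of the ideas the tour leans on can be previewed concretely: Kolmogorov complexity is uncomputable, but a general-purpose compressor gives a crude upper bound on it, and that bound already separates repetitive from random sequence. A minimal sketch (my own illustration, assuming only the standard library; real genomic compressors are far more sophisticated):

```python
import random
import zlib

def compressed_bits(seq: str) -> int:
    """Crude upper bound on information content: size in bits after zlib."""
    return 8 * len(zlib.compress(seq.encode(), 9))

random.seed(0)
# A highly repetitive toy "genome" vs. a uniformly random one, same length.
repetitive = "ACGT" * 25_000                                    # 100 kb
random_seq = "".join(random.choice("ACGT") for _ in range(100_000))

rep_bits = compressed_bits(repetitive)
rnd_bits = compressed_bits(random_seq)
print(f"repetitive: {rep_bits} bits, random: {rnd_bits} bits")
```

The repetitive sequence compresses to a tiny fraction of the random one, even though both are the same length and use the same four-letter alphabet; the gap between raw length and compressed size is one (incomplete) way to cash out "information in a genome."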
Scientists have discovered a way to use live tissue as a computational reservoir to solve problems and potentially predict chaotic systems like the weather.