Shaking hands with a character from the Fortnite video game. Visualizing a patient’s heart in 3D—and “feeling” it beat. Touching the walls of the Roman Coliseum—from your sofa in Los Angeles. What if we could touch and interact with things that aren’t physically in front of us? This reality might be closer than we think, thanks to an emerging technology: the holodeck.
The name might sound familiar. In Star Trek: The Next Generation, the holodeck was an advanced 3D virtual reality world that created the illusion of solid objects. Now, immersive technology researchers at USC and beyond are taking us one step closer to making this science fiction concept a science fact.
On Dec. 15, USC hosted the first International Conference on Holodecks. Organized by Shahram Ghandeharizadeh, a USC associate professor of computer science, the conference featured keynotes, papers and presentations from researchers at USC, Brown University, UCLA, University of Colorado, Stanford University, New Jersey Institute of Technology, UC-Riverside, and haptic technology company UltraLeap.
Wetware computing and organoid intelligence form an emerging research field at the intersection of electrophysiology and artificial intelligence. The core concept involves using living neurons to perform computations, similar to how Artificial Neural Networks (ANNs) are used today. However, unlike ANNs, where updating digital tensors (weights) can instantly modify network responses, entirely new methods must be developed for neural networks built from biological neurons. Discovering these methods is challenging and requires a system capable of conducting numerous experiments, ideally accessible to researchers worldwide.

For this reason, we developed a hardware and software system that allows electrophysiological experiments at an unmatched scale. The Neuroplatform enables researchers to run experiments on neural organoids with lifetimes exceeding 100 days. To do so, we streamlined the experimental process to quickly produce new organoids, monitor action potentials 24/7, and provide electrical stimulation. We also designed a microfluidic system that allows fully automated medium flow and exchange, reducing disruptions from physical interventions in the incubator and ensuring stable environmental conditions. Over the past three years, the Neuroplatform has been used with over 1,000 brain organoids, enabling the collection of more than 18 terabytes of data.

A dedicated Application Programming Interface (API) has been developed so that remote research can be conducted directly through our Python library or through interactive computing environments such as Jupyter Notebooks. In addition to electrophysiological operations, the API also controls pumps, digital cameras, and UV lights for molecule uncaging. This allows for the execution of complex 24/7 experiments, including closed-loop strategies and processing with the latest deep learning or reinforcement learning libraries. Furthermore, the infrastructure supports entirely remote use. As of 2024, the system is freely available for research purposes, and numerous research groups have begun using it for their experiments. This article outlines the system’s architecture and provides specific examples of experiments and results.
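To make the closed-loop idea concrete, here is a minimal Python sketch of the kind of experiment loop the abstract describes: read recent spiking activity, decide whether to stimulate, and periodically trigger a medium-exchange pump. All names here (NeuroplatformClient, read_spikes, stimulate, run_pump) are hypothetical stand-ins, since the actual library’s API is not given in the text.

    import random
    import time

    class NeuroplatformClient:
        """Hypothetical stand-in for the remote API client described above."""

        def read_spikes(self, electrode: int, window_s: float) -> int:
            # Placeholder: the real system streams action potentials 24/7.
            return random.randint(0, 50)

        def stimulate(self, electrode: int, amplitude_ua: float) -> None:
            print(f"stimulating electrode {electrode} at {amplitude_ua} uA")

        def run_pump(self, pump_id: int, volume_ul: float) -> None:
            print(f"pump {pump_id}: exchanging {volume_ul} uL of medium")

    client = NeuroplatformClient()
    for minute in range(3):                      # shortened loop for illustration
        spikes = client.read_spikes(electrode=7, window_s=60.0)
        if spikes < 10:                          # closed-loop rule: stimulate quiet electrodes
            client.stimulate(electrode=7, amplitude_ua=2.0)
        if minute % 2 == 0:                      # periodic, automated medium exchange
            client.run_pump(pump_id=1, volume_ul=50.0)
        time.sleep(0.1)                          # stand-in for waiting out a real 60 s window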
The recent rise of wetware computing and, consequently, of artificial biological neural networks (BNNs) comes at a time when Artificial Neural Networks (ANNs) are more sophisticated than ever.
The latest generation of Large Language Models (LLMs), such as Meta’s Llama 2 or OpenAI’s GPT-4, fundamentally rely on ANNs.
Adeno-associated virus (AAV) is a well-known gene delivery tool with a wide range of applications, including as a vector for gene therapies. However, the molecular mechanism of its cell entry remains unknown. Here, we performed coarse-grained molecular dynamics simulations of the AAV serotype 2 (AAV2) capsid and the universal AAV receptor (AAVR) in a model plasma membrane environment. Our simulations show that binding of the AAV2 capsid to the membrane induces membrane curvature, along with the recruitment and clustering of GM3 lipids around the AAV2 capsid. We also found that the AAVR binds to the AAV2 capsid at the VR-I loops using its PKD2 and PKD3 domains, whose binding poses differ from those reported in previous structural studies. These first molecular-level insights into AAV2 membrane interactions suggest a complex process during the initial phase of AAV2 capsid internalization.
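As an illustration of the kind of post-processing such coarse-grained trajectories invite, the sketch below uses the open-source MDAnalysis library to count GM3 beads near the capsid in each frame, a crude proxy for the lipid recruitment and clustering reported above. The file names, selection strings, and cutoff are assumptions; this is a generic analysis recipe, not the authors’ workflow.

    import MDAnalysis as mda
    from MDAnalysis.lib.distances import distance_array

    # Assumed input files for a coarse-grained capsid-membrane system.
    u = mda.Universe("system.tpr", "traj.xtc")
    capsid = u.select_atoms("protein")        # AAV2 capsid beads (assumed selection)
    gm3 = u.select_atoms("resname GM3")       # GM3 lipid beads (assumed residue name)

    CUTOFF = 6.0  # Angstrom; assumed contact distance for coarse-grained beads
    for ts in u.trajectory[::10]:             # analyze every 10th frame
        d = distance_array(gm3.positions, capsid.positions, box=ts.dimensions)
        n_contact = int((d.min(axis=1) < CUTOFF).sum())   # GM3 beads touching the capsid
        print(f"frame {ts.frame}: {n_contact} GM3 beads within {CUTOFF} A of the capsid")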
Disney is adding another layer to its AI and extended reality strategies. As first reported by Reuters, the company recently formed a dedicated emerging technologies unit. Dubbed the Office of Technology Enablement, the group will coordinate the company’s exploration, adoption and use of artificial intelligence, AR and VR tech.
It has tapped Jamie Voris, previously the CTO of its Studios Technology division, to oversee the effort. Before joining Disney in 2010, Voris was the chief technology officer at the National Football League. More recently, he led the development of the company’s Apple Vision Pro app. Voris will report to Alan Bergman, the co-chairman of Disney Entertainment. Reuters reports the company eventually plans to grow the group to about 100 employees.
“The pace and scope of advances in AI and XR are profound and will continue to impact consumer experiences, creative endeavors, and our business for years to come — making it critical that Disney explore the exciting opportunities and navigate the potential risks,” Bergman wrote in an email Disney shared with Engadget. “The creation of this new group underscores our dedication to doing that and to being a positive force in shaping responsible use and best practices.”
Researchers have developed a new type of bifocal lens that offers a simple way to achieve two foci (or spots) with intensities that can be adjusted by applying external voltage. The lenses, which use two layers of liquid crystal structures, could be useful for various applications such as optical interconnections, biological imaging, augmented/virtual reality devices and optical computing.
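The report does not give the underlying relation, but in many voltage-tunable liquid-crystal bifocal designs the split between the two foci is set by the voltage-controlled retardance δ(V) of an LC layer, with complementary fractions of the incident intensity I₀ going to each focus:

    I₁/I₀ ≈ cos²(δ(V)/2),    I₂/I₀ ≈ sin²(δ(V)/2)

so that tuning the voltage moves light continuously from one focus to the other while the total is conserved. This is a schematic relation under that assumption, not a detail taken from the study.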
A virtual haptic implementation technology that allows all users to experience the same tactile sensation has been developed. A research team led by Professor Park Jang-Ung from the Center for Nanomedicine within the Institute for Basic Science (IBS) and Professor Jung Hyun Ho from Severance Hospital’s Department of Neurosurgery has developed a technology that provides consistent tactile sensations on displays.
This research was conducted in collaboration with colleagues from Yonsei University Severance Hospital. It was published in Nature Communications on August 21, 2024.
Virtual haptic implementation technology, also known as tactile rendering technology, refers to the methods and systems that simulate the sense of touch in a virtual environment. This technology aims to create the sensation of physical contact with virtual objects, enabling users to feel textures, shapes, and forces as if they were interacting with real-world items, even though the objects are digital.
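As a generic illustration of how a tactile-rendering loop can work (a textbook penalty/spring model, not the IBS team’s specific method), the sketch below computes the force a haptic actuator would apply when the user’s fingertip penetrates a virtual surface, which is what makes a digital object “feel” solid.

    from dataclasses import dataclass

    @dataclass
    class VirtualSurface:
        height: float        # z-coordinate of the virtual wall (meters)
        stiffness: float     # N/m; tuned to how "hard" the surface should feel

    def render_force(finger_z: float, surface: VirtualSurface) -> float:
        """Return the normal force (N) to send to the haptic actuator."""
        penetration = surface.height - finger_z
        if penetration <= 0.0:                    # fingertip above the surface: no contact
            return 0.0
        return surface.stiffness * penetration    # Hooke-like penalty force

    wall = VirtualSurface(height=0.0, stiffness=800.0)
    for z in (0.005, 0.0, -0.002, -0.004):        # simulated fingertip depths (m)
        print(f"z = {z:+.3f} m -> force = {render_force(z, wall):.2f} N")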
Researchers at the University of Toronto have found that using virtual and augmented reality (VR and AR) can temporarily change the way people perceive and interact with the real world—with potential implications for the growing number of industries that use these technologies for training purposes.
The study, published recently in the journal Scientific Reports, found not only that people moved differently in VR and AR, but also that these changes led to temporary errors in movement in the real world. In particular, participants who used VR tended to undershoot their targets by not reaching far enough, while those who used AR tended to overshoot their targets by reaching too far.
This effect was noticeable immediately after using VR or AR, but gradually disappeared as participants readjusted to real-world conditions.
Optical fiber, the physical medium of information transmission, is the “highway” of modern economic and social development. However, as high-speed, high-capacity communication scenarios such as virtual reality, 5G, intelligent driving, and the Internet of Things (IoT) continue to emerge, traditional single-mode fiber-optic communication systems are approaching the upper limit of their communication capacity, the “traffic flow” that highway can carry.
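To make that notion of an upper limit concrete, the short calculation below applies the Shannon-Hartley law, C = B·log₂(1 + SNR), which bounds the capacity of any single channel; the bandwidth and signal-to-noise figures are illustrative assumptions, not values from the article.

    import math

    def shannon_capacity_bps(bandwidth_hz: float, snr_linear: float) -> float:
        # Shannon-Hartley: capacity ceiling of a single channel, in bits per second.
        return bandwidth_hz * math.log2(1.0 + snr_linear)

    bandwidth = 5e12            # ~5 THz of usable C-band bandwidth (assumed)
    snr = 10 ** (20 / 10)       # 20 dB signal-to-noise ratio (assumed), in linear units
    capacity = shannon_capacity_bps(bandwidth, snr)
    print(f"~{capacity / 1e12:.0f} Tbit/s per mode and polarization")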
What does the future hold? What will become of this planet and its inhabitants in the centuries to come? We are living in a historical period that sometimes feels like the prelude to something truly remarkable or terribly dire about to unfold. This video seeks to decipher the signs and construct plausible scenarios from the little we hold in our hands today. As always, it will be scientific discoveries leading the dance of change, while philosophers, writers, politicians, and all the others will have the seemingly trivial task of containing, describing, and guiding. Before embarking on our journey through time, let me state the obvious: no one knows the future! Numerous micro and macro factors could alter this trajectory—world wars, pandemics, unimaginable social shifts, or climate disasters. Nevertheless, we’re setting off. And we’re doing so by discussing the remaining decades of the century we’re experiencing right now.
The hippocampus geometrically represents both physical location and abstract variables on a neural manifold in mice performing a decision-making task in virtual reality.