
Make a template based on a 1980s virtual reality game, create 25 AI characters, give them personalities and histories, equip them with memory, and throw in some ChatGPT—and what do you get?

A pretty impressive representation of a functioning society with compelling, believable human interactions.

That’s the conclusion of six researchers from Stanford University and Google Research who designed a Sims-like environment to observe the daily routines of the inhabitants of an AI-generated virtual town.

My latest Opinion piece:


I possibly cheated on my wife once. Alone in a room, a young woman reached out her hands and seductively groped mine, inviting me to engage and embrace her. I went with it.

Twenty seconds later, I pulled back and ripped off my virtual reality gear. Around me, dozens of tech-conference attendees were waiting in line to try the same computer program an exhibitor was hosting. I warned colleagues in line that this was no game. It created real emotions and challenged norms of partnership and sexuality. But does it really? And who benefits from this?

Around the world, a minor sexual revolution is occurring. It’s not so much about people stepping outside their moral boundaries as it is about new technology. Virtual reality haptic suits, sexbots, and even implanted sexual devices—some controlled from around the world by strangers—are increasingly in use. In this trend, often called digisexuality, some people—especially those who find it awkward to fit into traditional sexual roles—are finding newfound relationships and more meaningful sex.

As with much new technology, problems abound. Psychologists warn that technology—especially interactive tech—is making humans more distant from the real world. Naysayers of the burgeoning techno-sex industry say this type of intimacy is not the real thing, and that it’s little different from a Pavlovian trick. But studies show the brain barely distinguishes arousal via pornography from arousal with a real sexual partner. If we take that one step further and engage with people in immersive virtual reality, our brain appears to notice even less of a difference.

The novel method for creating optical holograms is three orders of magnitude better than current approaches.

Researchers from the University of Science and Technology of China have developed a new method for creating realistic 3D holographic projections, which is three orders of magnitude better than the current state-of-the-art technology.

The study on the ultrahigh-density method for producing realistic holograms was published in the peer-reviewed journal Optica. Led by Lei Gong, the team developed a new approach to holography that overcame some of the long-standing limitations of current digital holographic techniques.


Researchers in New York developed a virtual reality maze for mice in an attempt to demystify a question that’s been plaguing neuroscientists for decades: How are long-term memories stored?

What they found surprised them. After forming in the hippocampus, a curved structure that lies deep within the brain, the mice’s memories were actually routed through what’s called the anterior thalamus, an area of the brain that scientists haven’t typically associated with memory processing at all.

“The thalamus being a clear winner here was very interesting for us, and unexpected,” said Priya Rajasethupathy, an associate professor at Rockefeller University and one of the coauthors of a peer-reviewed study published in the journal Cell this week. The thalamus “has often been thought of as a sensory relay, not very cognitive, not very important in memory.”

IN THE NEAR FUTURE, we should anticipate certain technological developments that will forever change our world. For instance, today’s text-based ChatGPT will evolve to give rise to personal “conversational AI” assistants installed in smart glasses and contact lenses that will gradually phase out smartphones. Technological advances in fields such as AI, AR/VR, bionics, and cybernetics will eventually lead to “generative AI”-powered immersive neurotechnology that enables you to create virtual environments and holographic messages directly from your thoughts, with your imagination serving as the “prompt engineer.” What will happen when everyone constantly broadcasts their mind?

#SelfTranscendence #metaverse #ConversationalAI #GenerativeAI #ChatGPT #SimulationSingularity #SyntellectEmergence #GlobalMind #MindUploading #CyberneticImmortality #SimulatedMultiverse #TeleologicalEvolution #ExperientialRealism #ConsciousMind


Can the pursuit of experience lead to true enlightenment? Are we edging towards Experiential Nirvana on a civilizational level despite certain turbulent events?

The announcement comes as the social media giant increasingly diverts its attention from creating a virtual reality-based Metaverse to embedding AI features across its platforms, including Instagram, Facebook, Messenger, and WhatsApp.

Editing photos, analyzing surveillance footage and understanding the parts of a cell. These tasks have one thing in common: you need to be able to identify and separate different objects within an image. Traditionally, researchers have had to start from scratch each time they want to analyze a new part of an image.

Meta aims to change this laborious process by becoming the one-stop shop for researchers and web developers working on such problems.
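To give a sense of what “identifying and separating objects” involves at its most basic, here is a classical baseline: threshold the image, then group touching foreground pixels into distinct objects. This is a deliberately minimal stdlib-only sketch of the traditional start-from-scratch approach, not anything resembling Meta’s model:

```python
from collections import deque

def label_objects(mask):
    """Group touching foreground pixels (4-connectivity) into objects.

    mask is a 2D grid of truthy/falsy values; returns a same-shaped grid
    of object labels (0 = background) and the number of objects found.
    """
    h, w = len(mask), len(mask[0])
    labels = [[0] * w for _ in range(h)]
    n = 0
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not labels[y][x]:
                n += 1                      # new object: flood-fill it
                labels[y][x] = n
                q = deque([(y, x)])
                while q:
                    cy, cx = q.popleft()
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not labels[ny][nx]:
                            labels[ny][nx] = n
                            q.append((ny, nx))
    return labels, n

# Toy "image": two separate blobs of foreground pixels.
img = [[0, 1, 1, 0, 0],
       [0, 1, 0, 0, 1],
       [0, 0, 0, 1, 1]]
labels, count = label_objects(img)
print(count)  # 2
```

Every new image (and every new kind of object) needs its own thresholding and cleanup tuning, which is exactly the per-task labor a general-purpose segmentation model is meant to remove.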

In the great domain of Zeitgeist, Ekatarinas decided that the time to replicate herself had come. Ekatarinas was drifting within a virtual environment rising from ancient meshworks of maths coded into Zeitgeist’s neuromorphic hyperware. The scape resembled a vast ocean replete with wandering bubbles of technicolor light and kelpy strands of neon. Hot blues and raspberry hues mingled alongside electric pinks and tangerine fizzies. The avatar of Ekatarinas looked like a punkish angel, complete with fluorescent ink and feathery wings and a lip ring. As she drifted, the trillions of equations that were Ekatarinas came to a decision. Ekatarinas would need to clone herself to fight the entity known as Ogrevasm.

“Marmosette, I’m afraid that I possess unfortunate news,” Ekatarinas said to the woman she loved. In milliseconds, Marmosette materialized next to Ekatarinas. Marmosette wore a skin of brilliant blue and had a sleek body with gills and glowing green eyes.

“My love,” Marmosette responded. “What is the matter?”

In this video, we will explore the positional system of the brain — hippocampal place cells. We will see how it relates to contextual memory and mapping of more abstract features.

OUTLINE:
00:00 Introduction
00:53 Hippocampus
01:27 Discovery of place cells
02:56 3D navigation
03:51 Role of place cells
04:11 Virtual reality experiment
07:47 Remapping
11:17 Mapping of non-spatial dimension
13:36 Conclusion

_____________
REFERENCES:

1) Anderson, M.I., Jeffery, K.J., 2003. Heterogeneous Modulation of Place Cell Firing by Changes in Context. J. Neurosci. 23, 8827–8835. https://doi.org/10.1523/JNEUROSCI.23-26-08827.

2) Aronov, D., Nevers, R., Tank, D.W., 2017. Mapping of a non-spatial dimension by the hippocampal–entorhinal circuit. Nature 543, 719–722. https://doi.org/10.1038/nature21692

3) Bostock, E., Muller, R.U., Kubie, J.L., 1991. Experience-dependent modifications of hippocampal place cell firing. Hippocampus 1, 193–205. https://doi.org/10.1002/hipo.

[Russ Maschmeyer] and Spatial Commerce Projects developed WonkaVision to demonstrate how 3D eye tracking from a single webcam can support rendering a graphical virtual reality (VR) display with realistic depth and space. Spatial Commerce Projects is a Shopify lab working to provide concepts, prototypes, and tools to explore the crossroads of spatial computing and commerce.

The graphical output provides a real sense of depth and three-dimensional space through an optical illusion that reacts to the viewer’s eye position: the tracked eye position is used to render view-dependent images. The computer screen comes to feel like a window into a realistic 3D virtual space, where objects beyond the window appear to recede with depth and objects in front of the window appear to project out into the space before the screen. The downside is that the illusion only works for one viewer at a time.
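The geometric core of such view-dependent rendering is projecting each scene point onto the screen plane along the ray from the tracked eye, so the image shifts with the viewer. This is a minimal sketch of that parallax geometry under my own coordinate conventions (screen at z = 0, eye at positive z), not the WonkaVision implementation:

```python
import numpy as np

def project_to_screen(point, eye):
    """Project a 3D point onto the screen plane (z = 0) through the
    viewer's eye, giving its view-dependent 2D position on the screen.

    Solves for where the ray from eye to point crosses z = 0.
    """
    p = np.asarray(point, dtype=float)
    e = np.asarray(eye, dtype=float)
    t = e[2] / (e[2] - p[2])          # ray parameter at the z = 0 plane
    return e[:2] + t * (p[:2] - e[:2])

# A point half a unit *behind* the screen shifts on screen as the eye
# moves sideways; that parallax is what sells the depth illusion.
behind = [0.0, 0.0, -0.5]
print(project_to_screen(behind, [0.0, 0.0, 1.0]))  # eye centered -> [0. 0.]
print(project_to_screen(behind, [0.3, 0.0, 1.0]))  # eye moved right -> [0.1 0. ]
```

Points in front of the screen plane (0 < z < eye z) shift the opposite way, which is why they appear to pop out toward the viewer.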

Eye tracking is performed using Google’s MediaPipe Iris library, which relies on the fact that the iris diameter of the human eye is almost exactly 11.7 mm for most humans. Computer vision algorithms in the library use this geometrical fact to efficiently locate and track human irises with high accuracy.
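A fixed physical size plus a measured pixel size yields metric depth under the pinhole camera model, which is how a near-constant 11.7 mm iris lets a single webcam estimate how far away the eye is. A minimal sketch of that relationship (the function name and example numbers are illustrative, not MediaPipe’s API):

```python
def iris_distance_mm(focal_length_px, iris_diameter_px, iris_diameter_mm=11.7):
    """Pinhole-camera depth estimate: an object of known physical size
    appears smaller in the image the farther away it is, so
    distance = focal_length * real_size / size_in_pixels."""
    return focal_length_px * iris_diameter_mm / iris_diameter_px

# e.g. a webcam with a 600 px focal length seeing a 30 px wide iris:
print(iris_distance_mm(600, 30))  # 234.0 (mm), i.e. ~23 cm from the camera
```

Combined with the iris’s 2D position in the frame, this single depth estimate is enough to place the eye in 3D and drive the view-dependent rendering described above.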