An interview with GridRaster COO Dijam Panigrahi.


GridRaster describes its platform as “a unified and shared software infrastructure to empower enterprise customers to build and run scalable, high-quality eXtended Reality (XR) – Augmented Reality (AR), Virtual Reality (VR), and Mixed Reality (MR) – applications in public, private, and hybrid clouds.”

What does that all mean?

Simply put, GridRaster creates spatial, high-fidelity maps of three-dimensional physical objects. If you are building an automobile or aircraft, for example, the software can capture imagery of the object and generate a detailed mesh-model overlay that can be viewed through a VR headset. The mesh model can also be shared with robots and other devices.
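As a rough illustration of that capture-to-mesh idea, here is a minimal sketch using the open-source Open3D library. This is not GridRaster's actual pipeline, and the file names are hypothetical; it only shows the general technique of turning a captured point cloud into a shareable triangle mesh.

```python
# Hypothetical sketch: turning a 3D scan into a shareable mesh model.
# Not GridRaster's pipeline; Open3D illustrates the general idea.
import open3d as o3d

# Load a captured point cloud (e.g. from a depth camera or LiDAR scan).
pcd = o3d.io.read_point_cloud("aircraft_scan.ply")  # hypothetical file

# Estimate surface normals, which Poisson reconstruction requires.
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.1, max_nn=30)
)

# Reconstruct a watertight triangle mesh from the points.
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    pcd, depth=9
)

# Export the mesh so a VR headset, robot, or other device can consume it.
o3d.io.write_triangle_mesh("aircraft_mesh.obj", mesh)
```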

The state of VR.


Today we dive into a video that has been on my mind and in the works for a LONG time. For the first time, I put together all of the world’s craziest VR hardware that anyone can purchase and recreated the setup found in Ready Player One’s book and movie.

Super-high-resolution VR headset: Varjo XR-3

The talk is provided on a free/donation basis. If you would like to support my work, you can PayPal me at this link:
https://paypal.me/wai69
Or, to support me longer term, back me on Patreon at: https://www.patreon.com/waihtsang.

Unfortunately, my internet link went down in the second Q&A session at the end and the recording cut off. A shame, as loads of great information came out about FPGA/ASIC implementations, AI for VR/AR, C/C++, and a whole load of other riveting and most interesting techie stuff. But thankfully the main part of the talk was recorded.

TALK OVERVIEW
This talk is about the realization of the ideas behind the Fractal Brain theory and the unifying theory of life and intelligence discussed in the last Zoom talk, in the form of useful technology. The Startup at the End of Time will be the vehicle for the development and commercialization of a new generation of artificial intelligence (AI) and machine learning (ML) algorithms.

We will show in detail how the theoretical fractal brain/genome ideas lead to a whole new way of doing AI and ML that overcomes most of the central limitations of, and problems associated with, existing approaches. A compelling feature of this approach is that it is based on how neurons and brains actually work, unlike existing artificial neural networks, which, though they make sensational headlines, are impeded by severe limitations and are based on an out-of-date understanding of neurons from about 70 years ago. We hope to convince you that this new approach really is the path to true AI.

High Dynamic Range. Zuckerberg said that of the four key challenges he and Abrash overviewed, “the most important of these all is HDR.” To prove out the impact of HDR on the VR experience, the Display Systems Research team built another prototype, appropriately called Starburst. According to Meta, it’s the first VR headset prototype …

In the coming years, everyone will get to observe how the Metaverse evolves toward immersive experiences in hyperreal virtual environments filled with avatars that look and sound exactly like us. Neal Stephenson’s Snow Crash describes a vast world full of amusement parks, houses, entertainment complexes, and worlds within themselves, all connected by a virtual street tens of thousands of miles long. For those who are still not familiar with the metaverse, it is a virtual world in which users can put on virtual reality goggles and navigate a stylized version of themselves, known as an avatar, through virtual workplaces, entertainment venues, and other activities. The metaverse will be an immersive version of the internet with interactive features built on technologies such as virtual reality (VR), augmented reality (AR), 3D graphics, 5G, holograms, NFTs, blockchain, haptic sensors, and artificial intelligence (AI). To scale personalized content experiences to billions of people, one potential answer is generative AI: the process of using AI algorithms on existing data to create new content.
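As a toy illustration of that last definition, the sketch below learns character patterns from existing text and samples new text from them. It is a deliberately simple Markov-chain stand-in for the large neural generators a metaverse would actually need, and the corpus file name is a placeholder.

```python
# Toy illustration of generative AI's core idea: learn patterns from
# existing data, then sample new content from them. Real systems use
# large neural networks; this Markov chain is only a minimal stand-in.
import random
from collections import defaultdict

def train(text, order=3):
    """Map each context of `order` characters to the characters that follow it."""
    model = defaultdict(list)
    for i in range(len(text) - order):
        model[text[i:i + order]].append(text[i + order])
    return model

def generate(model, length=200, order=3):
    """Sample new text one character at a time from the learned contexts."""
    out = random.choice(list(model.keys()))
    for _ in range(length):
        choices = model.get(out[-order:])
        if not choices:  # dead end: restart from a random context
            out += random.choice(list(model.keys()))
            continue
        out += random.choice(choices)
    return out

corpus = open("corpus.txt").read()  # placeholder: any existing text data
print(generate(train(corpus)))
```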

In computing, procedural generation is a method of creating data algorithmically as opposed to manually, typically through a combination of human-generated assets and algorithms coupled with computer-generated randomness and processing power. In computer graphics, it is commonly used to create textures and 3D models.
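A minimal sketch of that idea in Python (assuming only NumPy): a few octaves of smoothed random noise are combined into a grayscale texture, the basic recipe behind many procedurally generated materials and heightmaps. The resolution and octave weights here are arbitrary choices for illustration.

```python
# Minimal procedural texture generation: octaves of smoothed random
# noise, coarse layers weighted more heavily than fine ones.
import numpy as np

def value_noise(size=256, octaves=4, seed=0):
    rng = np.random.default_rng(seed)
    texture = np.zeros((size, size))
    for octave in range(octaves):
        cells = 2 ** (octave + 2)              # coarse random lattice
        lattice = rng.random((cells, cells))
        # Upsample the lattice to full resolution, bilinearly.
        xs = np.linspace(0, cells - 1, size)
        x0 = xs.astype(int).clip(0, cells - 2)
        t = xs - x0
        rows = lattice[x0] * (1 - t)[:, None] + lattice[x0 + 1] * t[:, None]
        cols = rows[:, x0] * (1 - t) + rows[:, x0 + 1] * t
        texture += cols / 2 ** octave          # finer octaves weigh less
    return texture / texture.max()

tex = value_noise()
print(tex.shape)  # (256, 256) grayscale texture, values in [0, 1]
```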

This kind of algorithmic generation is typically seen in Diablo-style RPGs and some roguelikes, which use instancing of in-game entities to create randomized items (see the sketch below). Less frequently, it can be used to determine the relative difficulty of hand-designed content that is subsequently placed procedurally, as seen with the monster design in Unangband: the designer can rapidly create content but leaves it up to the game to determine how challenging that content is to overcome, and consequently where in the procedurally generated environment it will appear. Notably, the Touhou series of bullet-hell shooters uses algorithmic difficulty. Though players are only allowed to choose certain difficulty values, several community mods enable ramping the difficulty beyond the offered values.
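For a flavor of the item-instancing side, here is a hedged sketch in Python. The item names, drop rates, and difficulty scaling are invented for illustration and not taken from any particular game.

```python
# Hedged sketch of Diablo-style item instancing: each drop is assembled
# from randomized parts, with a difficulty parameter scaling the stats.
import random

PREFIXES = ["Rusty", "Sturdy", "Gleaming", "Cursed", "Ancient"]
BASES = ["Sword", "Axe", "Bow", "Staff"]
SUFFIXES = ["of the Fox", "of Embers", "of the Colossus"]

def roll_item(difficulty: int) -> dict:
    """Instance a randomized item whose power scales with difficulty."""
    item = {
        "name": f"{random.choice(PREFIXES)} {random.choice(BASES)}",
        "damage": random.randint(1, 6) * difficulty,
    }
    # Suffix affixes become more likely at higher difficulty (capped).
    if random.random() < min(0.1 * difficulty, 0.9):
        item["name"] += " " + random.choice(SUFFIXES)
        item["bonus"] = random.randint(1, 3) * difficulty
    return item

for level in (1, 5, 10):
    print(level, roll_item(level))
```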

There’s a lot going on when it comes to Apple’s rumored mixed reality headset, which is expected to combine both AR and VR technologies into a single device. However, at the same time, the company has also been working on new AR glasses. According to Haitong Intl Tech Research analyst Jeff Pu, Apple’s AR glasses will be announced in late 2024.

In a note seen by 9to5Mac, Pu mentions that Luxshare will remain one of Apple’s main suppliers for devices coming between late 2022 and 2024. Among these devices, the analyst highlights products such as the Apple Watch Series 8, iPhone 14, and Apple’s AR/VR headset. More than that, Pu believes that Apple plans to introduce new AR glasses in the second half of 2024.

At this point, details about Apple’s AR glasses are scarce. What we do know so far is that, unlike Apple’s AR/VR headset, the new AR glasses are expected to be highly dependent on the iPhone due to design limitations. Analyst Ming-Chi Kuo said in 2019 that the rumored “Apple Glasses” will act more like a display for the iPhone, similar to the first-generation Apple Watch.

Researchers at Meta Reality Labs report that their work on Codec Avatars 2.0 has reached a level where the avatars are approaching complete realism. The researchers created a prototype virtual reality headset with a custom-built accelerator chip specifically designed to handle the AI processing needed to render Meta’s photorealistic Codec Avatars on standalone virtual reality headsets.

The prototype virtual reality avatars are driven by very advanced machine learning techniques.

Meta first showcased its work on the sophisticated Codec Avatars as far back as March 2019. The avatars are powered by multiple neural networks and are generated via a special capture rig containing 171 cameras. Once generated, the avatars are driven in real time by a prototype virtual reality headset with five cameras: two internal cameras viewing each eye, and three external cameras viewing the lower face. It is thought that such advanced, photoreal avatars may one day replace video conferencing.
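To make that driving pipeline concrete, here is a drastically simplified sketch (assuming PyTorch). It is not Meta’s actual architecture; it only illustrates the shape of the idea: encode the five headset camera views into compact features, then decode them into avatar parameters such as expression weights.

```python
# Simplified sketch of the headset-driving idea (NOT Meta's actual
# Codec Avatars architecture): encode two eye-camera and three
# face-camera crops, then decode fused features into avatar parameters.
import torch
import torch.nn as nn

class CameraEncoder(nn.Module):
    def __init__(self, out_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, out_dim),
        )

    def forward(self, x):
        return self.net(x)

class AvatarDriver(nn.Module):
    def __init__(self, n_cameras=5, latent=64, n_expression=128):
        super().__init__()
        # One small encoder per headset camera.
        self.encoders = nn.ModuleList(
            [CameraEncoder(latent) for _ in range(n_cameras)]
        )
        # Fuse all camera features and decode expression parameters.
        self.decoder = nn.Sequential(
            nn.Linear(n_cameras * latent, 256), nn.ReLU(),
            nn.Linear(256, n_expression),
        )

    def forward(self, crops):  # crops: list of (batch, 1, H, W) tensors
        feats = [enc(c) for enc, c in zip(self.encoders, crops)]
        return self.decoder(torch.cat(feats, dim=1))

# Five grayscale camera crops: two eye views, three lower-face views.
crops = [torch.randn(1, 1, 64, 64) for _ in range(5)]
params = AvatarDriver()(crops)
print(params.shape)  # torch.Size([1, 128])
```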

Remotely operated, non-lethal drones key in long-term plan to detect and stop mass shootings in less than 60 seconds

SCOTTSDALE, Ariz., June 2, 2022 /PRNewswire/ — Axon (NASDAQ: AXON), the global leader in connected public safety technologies, today announced it has formally begun development of a non-lethal, remotely operated TASER drone system as part of a long-term plan to stop mass shootings, and reaffirmed its commitment to public engagement and dialogue during the development process. The plan includes accelerating detection and improving real-time situational awareness of active-shooter events, enhancing first-responder effectiveness through VR training, and deploying remotely operated non-lethal drones capable of incapacitating an active shooter in less than 60 seconds.

Predicting it now: in the 2030s there will be tons of this, and not just chatbots of dead people, but versions made to seem alive, 24/7, in VR metaverse worlds. There will probably be shops that cater to this and try to make it as close to realistic as possible, though it will probably mostly be underground.


The recent case of a man making a chatbot based on his deceased fiancée raises ethical questions: Is this something we want?

In the not-too-distant future, many of us may routinely use 3D headsets to interact in the metaverse with virtual iterations of companies, friends, and lifelike company assistants. These may include Lily from AT&T, Flo from Progressive, Jake from State Farm, and the Swami from CarShield. We’ll also be interacting with new friends like Nestlé‘s Cookie Coach Ruth, the World Health Organization’s digital health worker Florence, and many others.

Creating digital characters for virtual reality apps and ecommerce is a fast-rising segment of IT. San Francisco-based Soul Machines, a company rooted in both the animation and artificial intelligence (AI) sectors, is jumping at the opportunity to create animated digital avatars to bolster interactions in the metaverse. Customers are much more likely to buy something when a familiar face — digital or human — is involved.

Investors, understandably, are hot on the idea. This week, the six-year-old company revealed a $70 million Series B financing round led by new investor SoftBank Vision Fund 2, bringing the company’s total funding to $135 million to date.