
Many of us are guilty of giving up on a game because of a long grind or seemingly insurmountable challenge, but Sony’s latest patent shows the company wants to change that.

Some masochists enjoy the pain of being repeatedly beaten in games like Dark Souls – it’s understandable, overcoming those challenges is a great feeling – but most of us have a breaking point where the power button gets hit and the game just ends up collecting dust on a shelf.

Carmack, known for his work in VR and on classic games like Doom and Quake, is stepping down from his consulting CTO role at Meta.

John Carmack is a titan of the technology industry, known for his work on virtual reality as well as classic games like Doom.

Carmack originally joined Oculus as CTO in 2013, after helping to promote the original Oculus Rift prototypes that he received from Palmer Luckey, and got pulled into Meta when the company (then Facebook) acquired Oculus in 2014.


“We built something pretty close to the right thing,” Carmack wrote about the Quest 2. He also said that he “wearied of the fight” with Meta, which is burning billions in its Reality Labs division to build things like VR headsets and software for its vision of the metaverse. Carmack would also write internal posts criticizing CEO Mark Zuckerberg and CTO Andrew Bosworth’s decision making while at Meta, The New York Times reported.

Bosworth, in a tweet thanking Carmack on Friday evening, said that it is “impossible to overstate the impact you’ve had on our work and the industry as a whole. Your technical prowess is widely known, but it is your relentless focus on creating value for people that we will remember most.”

This isn’t the first time Carmack has been unhappy with Meta’s priorities for VR. The company also killed off his mobile efforts with the Samsung Gear VR — “we missed an opportunity,” he said at the time — and the low-cost Oculus Go, both of which were his projects.

Human-like articulated neural avatars have many uses in telepresence, animation, and visual content production. To be widely adopted, these neural avatars must be simple to create, simple to animate in new poses and viewpoints, capable of photorealistic rendering, and simple to relight in novel environments. Existing techniques frequently train these neural avatars on monocular videos. While this approach permits animation and photorealistic image quality, the synthesized images are always tied to the lighting conditions of the training video. Other studies specifically address the relighting of human avatars, but they do not give the user control over body pose. Additionally, these methods frequently require multiview images captured in a Light Stage for training, which restricts them to controlled environments.

Some contemporary techniques seek to relight dynamic humans in RGB videos, but they lack control over body pose. To produce an avatar, the method needs only a brief monocular video clip of the person in a natural environment, clothing, and pose; at inference time, only the target novel body pose and illumination are required. Learning relightable neural avatars of dynamic people from monocular RGB videos captured in unknown environments is difficult. Here, the researchers introduce the Relightable Articulated Neural Avatar (RANA) technique, which enables photorealistic human animation in any new body pose, viewpoint, and lighting condition. It first needs to model the intricate articulations and geometry of the human body.

Texture, geometry, and illumination must be disentangled to enable relighting in new environments, which is a difficult problem to solve from RGB footage alone. To overcome these difficulties, they first use a statistical human shape model called SMPL+D to extract canonical coarse geometry and texture from the training frames. They then propose a convolutional neural network, trained on synthetic data, to remove the shading information from the coarse texture. Learnable latent features are added to the coarse geometry and texture and passed to their proposed neural avatar architecture, which uses two convolutional networks to produce fine normal and albedo maps of the person under the target body pose.
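To see why separating albedo from shading enables relighting, consider the classic Lambertian model: a pixel’s color is its albedo multiplied by a shading term computed from the surface normal and the light. The sketch below is an illustrative closed-form shader, not RANA’s actual renderer (which the paper describes as learned networks); the function name and toy inputs are assumptions for demonstration.

```python
import numpy as np

def relight_lambertian(albedo, normals, light_dir, light_color=(1.0, 1.0, 1.0)):
    """Shade an albedo map with per-pixel normals under one directional light.

    albedo:    (H, W, 3) reflectance values in [0, 1]
    normals:   (H, W, 3) unit surface normals
    light_dir: (3,) direction pointing towards the light

    Illustrative only -- RANA's renderer is learned, not this closed form.
    """
    l = np.asarray(light_dir, dtype=float)
    l /= np.linalg.norm(l)
    # Lambert's cosine law: intensity = max(n . l, 0) per pixel
    ndotl = np.clip(normals @ l, 0.0, None)               # (H, W)
    shading = ndotl[..., None] * np.asarray(light_color)  # (H, W, 3)
    return np.clip(albedo * shading, 0.0, 1.0)

# Toy example: a flat mid-grey patch facing the camera, lit head-on.
albedo = np.full((4, 4, 3), 0.5)
normals = np.zeros((4, 4, 3))
normals[..., 2] = 1.0  # every normal points along +z
image = relight_lambertian(albedo, normals, light_dir=(0, 0, 1))
```

Because the albedo map is lighting-free, swapping in any new `light_dir` relights the same person without retraining; a shadowed or tinted training video would instead bake its lighting into the texture.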



A Japanese company sold a fully functional robot suit that doesn’t just resemble a combat mech from a video game; it could actually fight like one too. With video games like “Armored Core,” mech fans already have a good idea of what giant fighting robots would be like in real life. One such fan is Kogoro Kurata – an artist who wasn’t satisfied with leaving that notion to the imagination. Kurata explained that driving these giant robots was his dream, saying it was something the “Japanese had to do” (via Reuters). Unlike some of the coolest modern robots available today, Kurata envisioned a full-sized mech he could actually enter and operate: something he accomplished in 2012.

Christopher Nolan revealed to Total Film magazine that he recreated the first nuclear weapon detonation without CGI effects as part of the production for his new movie “Oppenheimer.” The film stars longtime Nolan collaborator Cillian Murphy as J. Robert Oppenheimer, a leading figure of the Manhattan Project and the creation of the atomic bomb during World War II. Nolan has always favored practical effects over VFX (he even blew up a real Boeing 747 for “Tenet”), so it’s no surprise he went the practical route when it came time to film a nuclear weapon explosion.

“I think recreating the Trinity test [the first nuclear weapon detonation, in New Mexico] without the use of computer graphics was a huge challenge to take on,” Nolan said. “Andrew Jackson — my visual effects supervisor, I got him on board early on — was looking at how we could do a lot of the visual elements of the film practically, from representing quantum dynamics and quantum physics to the Trinity test itself, to recreating, with my team, Los Alamos up on a mesa in New Mexico in extraordinary weather, a lot of which was needed for the film, in terms of the very harsh conditions out there — there were huge practical challenges.”

Could video streaming be as bad for the climate as driving a car? Calculating the Internet’s hidden carbon footprint.

We are used to thinking that going digital means going green. While that is true for some activities — for example, making a video call to the other side of the ocean is better than flying there — the situation is subtler in many other cases. For example, driving a small car to the movie theatre with a friend may have lower carbon emissions than streaming the same movie alone at home.

How do we reach this conclusion? Surprisingly, making these estimates is fairly complicated.
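The shape of such an estimate can still be sketched as back-of-envelope arithmetic. Every number below is an illustrative assumption, not a figure from the article: published estimates of streaming energy use alone vary by an order of magnitude, which is precisely why the comparison is so contested.

```python
# Back-of-envelope comparison: streaming a 2-hour movie alone at home
# vs. sharing a short car trip to the cinema.
# ALL constants are assumed, illustrative values -- not measurements.

STREAM_KWH_PER_HOUR = 0.8  # assumed device + network + data-centre energy (a high-end estimate)
GRID_G_CO2_PER_KWH = 475   # assumed average grid carbon intensity
CAR_G_CO2_PER_KM = 150     # assumed small petrol car
TRIP_KM = 10               # assumed round trip to the cinema
PASSENGERS = 2             # emissions shared between two people

movie_hours = 2
streaming_g = movie_hours * STREAM_KWH_PER_HOUR * GRID_G_CO2_PER_KWH
driving_g_per_person = TRIP_KM * CAR_G_CO2_PER_KM / PASSENGERS

print(f"Streaming alone:  ~{streaming_g:.0f} g CO2")
print(f"Driving, shared:  ~{driving_g_per_person:.0f} g CO2 per person")
```

Under these assumptions the two options land within a few grams of each other, and halving any single constant flips the answer. That sensitivity to the inputs, not the arithmetic, is what makes the estimates genuinely complicated.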



This is remarkable, given that we have been able to estimate far more complex phenomena quite accurately. In this case, we would only need quantitative information – the electrical energy and the amount of data used – that can be determined with great accuracy. The current situation is not acceptable and should be addressed by policymakers soon.

Deep Learning AI Specialization: https://imp.i384100.net/GET-STARTED
ChatGPT from OpenAI has shocked many users: it is able to complete programming tasks from natural language descriptions, create legal contracts, automate tasks, translate languages, write articles, answer questions, make video games, carry out customer service tasks, and much more, with most of its outputs approaching a human level. Separately, PAL Robotics has taught its humanoid AI robots to use objects in the environment to avoid falling when losing balance.

AI News Timestamps:
0:00 Why OpenAI’s ChatGPT Has People Panicking.
3:29 New Humanoid AI Robots Technology.
8:20 Coursera Deep Learning AI

Twitter / Reddit Credits:
ChatGPT3 AR (Stijn Spanhove) https://bit.ly/3HmxPYm
Roblox game made with ChatGPT3 (codegodzilla) https://bit.ly/3HkdXoY
ChatGPT3 making text to image prompts (Manu. Vision | Futuriste) https://bit.ly/3UyyKrG
ChatGPT3 for video game creation (u/apinanaivot) https://bit.ly/3VI17oI
ChatGPT3 making video game land (Lucas Ferreira da Silva) https://bit.ly/3iMdotO
ChatGPT3 deleting blender default cube (Blender Renaissance) https://bit.ly/3FcM3rZ
ChatGPT3 responding about Matrix (Mario Reder) https://bit.ly/3UIsX2K
ChatGPT3 writing acquisition rationale for the board of directors (The Secret CFO) https://bit.ly/3BhmmW5
ChatGPT3 to get job offers (Leon Noel) https://bit.ly/3UFl3qT
Automated RPA with ChatGPT3 (Sahar Mor) https://bit.ly/3W1ZkKK
ChatGPT3 making 3D web designs (Avalon•4) https://bit.ly/3UzGXf7
ChatGPT3 making a legal contract (Atri) https://bit.ly/3BljuYn
ChatGPT3 making signup program (Chris Raroque) https://bit.ly/3Hrachc

#technology #tech #ai