
Christopher Nolan Recreated a Nuclear Weapon Explosion Without CGI, Developed New IMAX Film for ‘Oppenheimer’: ‘A Huge Challenge’

Christopher Nolan revealed to Total Film magazine that he recreated the first nuclear weapon detonation without CGI effects as part of the production for his new movie “Oppenheimer.” The film stars longtime Nolan collaborator Cillian Murphy as J. Robert Oppenheimer, a leading figure of the Manhattan Project and the creation of the atomic bomb during World War II. Nolan has always favored practical effects over VFX (he even blew up a real Boeing 747 for “Tenet”), so it’s no surprise he went the practical route when it came time to film a nuclear weapon explosion.

“I think recreating the Trinity test [the first nuclear weapon detonation, in New Mexico] without the use of computer graphics was a huge challenge to take on,” Nolan said. “Andrew Jackson — my visual effects supervisor, I got him on board early on — was looking at how we could do a lot of the visual elements of the film practically, from representing quantum dynamics and quantum physics to the Trinity test itself, to recreating, with my team, Los Alamos up on a mesa in New Mexico in extraordinary weather, a lot of which was needed for the film, in terms of the very harsh conditions out there — there were huge practical challenges.”

Video streaming as polluting as driving? See the new calculations

Could video streaming be as bad for the climate as driving a car? Calculating the Internet’s hidden carbon footprint.

We are used to thinking that going digital means going green. While that is true for some activities — for example, making a video call to the other side of the ocean is better than flying there — the situation is subtler in many other cases. For example, driving a small car to the movie theatre with a friend may have lower carbon emissions than streaming the same movie alone at home.

How do we reach this conclusion? Surprisingly, making these estimates is fairly complicated.



This is remarkable given that we have been able to estimate far more complex phenomena quite accurately. In this case, only two quantities are needed – the electrical energy consumed and the amount of data transferred – and both can be determined with great accuracy. The current situation is not acceptable and should be addressed soon by policymakers.
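
To see how such an estimate works in practice, here is a minimal back-of-the-envelope sketch in Python. Every constant in it is an assumption chosen for illustration (published energy-intensity figures for streaming vary widely, which is precisely the problem); the arithmetic itself is simple once the energy and emission factors are pinned down.

```python
# Back-of-the-envelope comparison of streaming vs. driving emissions.
# All constants are illustrative assumptions, not measured values:
# real energy-intensity estimates for streaming differ by an order of
# magnitude across studies.

STREAMING_KWH_PER_HOUR = 0.4   # assumed: network + data centre + device
GRID_G_CO2_PER_KWH = 450       # assumed grid carbon intensity (gCO2e/kWh)
CAR_G_CO2_PER_KM = 120         # assumed small petrol car (gCO2e/km)

def streaming_emissions(hours: float) -> float:
    """Estimated gCO2e for streaming a movie of the given length."""
    return hours * STREAMING_KWH_PER_HOUR * GRID_G_CO2_PER_KWH

def driving_emissions(km: float, passengers: int = 1) -> float:
    """Estimated gCO2e per person for a car trip shared by passengers."""
    return km * CAR_G_CO2_PER_KM / passengers

# A two-hour movie streamed alone vs. a 5 km round trip shared by two:
print(f"streaming: {streaming_emissions(2.0):.0f} gCO2e per person")   # 360
print(f"driving:   {driving_emissions(5.0, 2):.0f} gCO2e per person")  # 300
```

Under these assumed numbers the shared drive comes out slightly lower, but shifting any single constant within its published range can flip the conclusion, which is why careful, standardized measurements matter.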

Why OpenAI’s New ChatGPT Has People Panicking | New Humanoid AI Robots Technology

Deep Learning AI Specialization: https://imp.i384100.net/GET-STARTED
ChatGPT from OpenAI has shocked many users: it can complete programming tasks from natural-language descriptions, create legal contracts, automate tasks, translate languages, write articles, answer questions, make video games, carry out customer-service tasks, and much more, all at the level of human intelligence in 99 percent of its outputs. PAL Robotics has taught its humanoid AI robots to use objects in the environment to avoid falling when losing balance.

AI News Timestamps:
0:00 Why OpenAI’s ChatGPT Has People Panicking
3:29 New Humanoid AI Robots Technology
8:20 Coursera Deep Learning AI

Twitter / Reddit Credits:
ChatGPT3 AR (Stijn Spanhove) https://bit.ly/3HmxPYm
Roblox game made with ChatGPT3 (codegodzilla) https://bit.ly/3HkdXoY
ChatGPT3 making text to image prompts (Manu. Vision | Futuriste) https://bit.ly/3UyyKrG
ChatGPT3 for video game creation (u/apinanaivot) https://bit.ly/3VI17oI
ChatGPT3 making video game land (Lucas Ferreira da Silva) https://bit.ly/3iMdotO
ChatGPT3 deleting blender default cube (Blender Renaissance) https://bit.ly/3FcM3rZ
ChatGPT3 responding about Matrix (Mario Reder) https://bit.ly/3UIsX2K
ChatGPT3 to write acquisition rationale for the board of directors (The Secret CFO) https://bit.ly/3BhmmW5
ChatGPT3 to get job offers (Leon Noel) https://bit.ly/3UFl3qT
Automated RPA with ChatGPT3 (Sahar Mor) https://bit.ly/3W1ZkKK
ChatGPT3 making 3D web designs (Avalon‱4) https://bit.ly/3UzGXf7
ChatGPT3 making a legal contract (Atri) https://bit.ly/3BljuYn
ChatGPT3 making signup program (Chris Raroque) https://bit.ly/3Hrachc

#technology #tech #ai

The Amazing Visuals of The Orville: New Horizons (Part 1 of 2)

There are many components that make The Orville: New Horizons a great show, not the least of which are the beautiful visuals. Please enjoy this compilation of amazing shots from the first six episodes of the third season of The Orville. (Stay tuned for episodes 7–10 in Part 2.)

#RenewTheOrville

If you’d like to help support the channel:
https://buymeacoffee.com/JohnDiMarco
https://www.paypal.me/DarwinDiMarco
Thank you!

0:00 — Intro/Electric Sheep
2:37 — Shadow Realms
3:27 — Mortality Paradox
4:14 — Gently Falling Rain
6:02 — A Tale of Two Topas
6:55 — Twice in a Lifetime
8:56 — Conclusion

Thanks to Jitse Lemmens for my amazing avatar:
https://www.youtube.com/user/Mansemat156
Check out his portfolio: https://jitselemmens.com/

DeepMind’s new AI app plays Stratego at expert level

A team of researchers at DeepMind Technologies Ltd. has created an AI application called “DeepNash” that is able to play the game Stratego at an expert level. In their paper published in the journal Science, the group describes the unique approach they took to improve the app’s level of play.

Stratego is a two-player board game that is considered difficult to master. The goal for each player is to capture their opponent’s flag, which is hidden among their initial 40 game pieces. Each piece is marked with a power ranking: higher-ranked pieces defeat lower-ranked pieces in face-offs. Making the game more difficult, neither player can see the markings on the opponent’s pieces until they meet face-to-face.
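
To make those rules concrete, here is a tiny sketch of a face-off in Python. The class and ranks are simplified assumptions for illustration; the real game adds special cases such as spies, bombs, and the immovable flag.

```python
from dataclasses import dataclass

@dataclass
class Piece:
    owner: str
    rank: int               # higher rank wins a face-off
    revealed: bool = False  # hidden from the opponent until a face-off

def face_off(attacker: Piece, defender: Piece) -> Piece | None:
    """Resolve an attack: both pieces are revealed, the higher rank survives."""
    attacker.revealed = defender.revealed = True
    if attacker.rank > defender.rank:
        return attacker
    if defender.rank > attacker.rank:
        return defender
    return None  # equal ranks: both pieces are removed
```

Until `face_off` is called, each player sees only the opponent’s piece positions, never their ranks, which is what makes Stratego a game of imperfect information.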

Prior research has shown that the complexity of the game is higher than that of chess or Go, with 10^535 possible scenarios. This level of complexity makes it extremely challenging to create Stratego-playing AI systems. In this new effort, the researchers took a different approach, creating an app capable of beating most human players and other AI systems.

Mastering Stratego, the classic game of imperfect information

Game-playing artificial intelligence (AI) systems have advanced to a new frontier. Stratego, the classic board game that’s more complex than chess and Go, and craftier than poker, has now been mastered. Published in Science, we present DeepNash, an AI agent that learned the game from scratch to a human expert level by playing against itself.

DeepNash uses a novel approach, based on game theory and model-free deep reinforcement learning. Its play style converges to a Nash equilibrium, which means its play is very hard for an opponent to exploit. So hard, in fact, that DeepNash has reached an all-time top-three ranking among human experts on the world’s biggest online Stratego platform, Gravon.
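
DeepNash’s actual training method, which the paper calls Regularized Nash Dynamics (R-NaD), combines game theory with deep networks and is well beyond a short snippet. As a toy illustration of the fixed point involved, the sketch below runs classic fictitious play on rock-paper-scissors: the empirical strategy converges to the uniform Nash equilibrium, the unexploitable play style described above. This is an illustrative stand-in, not DeepNash’s algorithm.

```python
from collections import Counter

# Fictitious play: repeatedly best-respond to the opponent's empirical
# move distribution. In zero-sum games the time-averaged strategy
# converges to a Nash equilibrium; for rock-paper-scissors that is the
# uniform mixture (1/3, 1/3, 1/3).

BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}
WINS_AGAINST = {beaten: winner for winner, beaten in BEATS.items()}

history = Counter({"rock": 1, "paper": 1, "scissors": 1})
for _ in range(100_000):
    predicted = max(history, key=history.get)  # opponent's most common move
    history[WINS_AGAINST[predicted]] += 1      # best response to that move

total = sum(history.values())
for move in BEATS:
    print(f"{move:9s} {history[move] / total:.3f}")  # each approaches 0.333
```

Once play sits at this equilibrium, no counter-strategy gains an edge: whatever an opponent tries, it cannot expect to do better than break even, which is what “very hard to exploit” means here.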

Board games have historically been a measure of progress in the field of AI, allowing us to study how humans and machines develop and execute strategies in a controlled environment. Unlike chess and Go, Stratego is a game of imperfect information: players cannot directly observe the identities of their opponent’s pieces.

In reinforcement learning, slower networks can learn faster

We then tested the new algorithms, called DQN with Proximal updates (or DQN Pro) and Rainbow Pro, on a standard set of 55 Atari games. The results show that the Pro agents outperform their counterparts: the basic DQN agent obtains human-level performance after 120 million interactions with the environment (frames), and Rainbow Pro achieves a 40% relative improvement over the original Rainbow agent.

Further, to ensure that proximal updates do in fact result in smoother and slower parameter changes, we measure the norm of the difference between consecutive DQN solutions, expecting the magnitude of the updates to be smaller when using proximal updates. Measurements on the four different Atari games tested confirm this expectation.

Overall, our empirical and theoretical results support the claim that when optimizing for a new solution in deep RL, it is beneficial for the optimizer to gravitate toward the previous solution. More importantly, we see that simple improvements in deep-RL optimization can lead to significant positive gains in the agent’s performance. We take this as evidence that further exploration of optimization algorithms in deep RL would be fruitful.
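
As a rough sketch of the idea, the snippet below adds a proximal term to a standard DQN loss: a penalty on the squared distance between the online network’s parameters and those of the previous solution (here, the target network). The penalty weight, the plain MSE TD loss, and all names are illustrative assumptions, not the exact DQN Pro implementation.

```python
import torch
import torch.nn.functional as F

PROXIMAL_WEIGHT = 0.05  # assumed coefficient for the proximity term

def dqn_pro_loss(online_net, target_net, batch, gamma=0.99):
    """Standard one-step TD loss plus a proximal penalty that keeps the
    new solution close to the previous one (sketch, not the paper's code)."""
    s, a, r, s_next, done = batch
    q = online_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        q_next = target_net(s_next).max(dim=1).values
        td_target = r + gamma * (1.0 - done) * q_next
    td_loss = F.mse_loss(q, td_target)

    # Proximal term: squared distance to the previous solution's parameters.
    prox = sum((p - p_prev).pow(2).sum()
               for p, p_prev in zip(online_net.parameters(),
                                    target_net.parameters()))
    return td_loss + PROXIMAL_WEIGHT * prox
```

Minimizing this objective lets the optimizer improve the TD error while gravitating toward the previous solution, producing the smoother, slower parameter changes measured above.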

DeepMind’s new video game AIs learn from humans

DeepMind introduces a new research framework for AI agents that can interact more flexibly and naturally with humans in simulated environments such as video games.

AI systems have achieved great success in video games such as Dota or StarCraft, defeating human professional players. This is made possible by precise reward functions that are tuned to optimize game outcomes: agents are trained on unambiguous wins and losses computed by program code. Where such reward functions are possible, AI agents can sometimes achieve superhuman performance.

But often – especially for everyday human behaviors with open-ended outcomes – there is no such precise reward function.
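
To make the contrast concrete, here is a minimal sketch; both functions are hypothetical illustrations, not part of DeepMind’s framework.

```python
def game_reward(winner: str) -> float:
    """Precise, programmatic reward: a match in Dota or StarCraft ends in
    an unambiguous outcome that code can score directly."""
    if winner == "agent":
        return 1.0
    if winner == "opponent":
        return -1.0
    return 0.0  # draw

def open_ended_reward(observation, instruction: str) -> float:
    """For an instruction like 'tidy up the room' there is no formula to
    fill in here; the signal has to come from human feedback or learned
    models of human judgment instead."""
    raise NotImplementedError("no precise reward function exists")
```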