
Deep Learning AI Specialization: https://imp.i384100.net/GET-STARTED
ChatGPT from OpenAI has shocked many users: from natural language descriptions it can complete programming tasks, draft legal contracts, automate workflows, translate languages, write articles, answer questions, make video games, handle customer service tasks, and much more, with many of its outputs reading as if written by a human. PAL Robotics has taught its humanoid robots to use objects in the environment to avoid falling when they lose balance.

AI News Timestamps:
0:00 Why OpenAI’s ChatGPT Has People Panicking
3:29 New Humanoid AI Robots Technology
8:20 Coursera Deep Learning AI

Twitter / Reddit Credits:
ChatGPT3 AR (Stijn Spanhove) https://bit.ly/3HmxPYm
Roblox game made with ChatGPT3 (codegodzilla) https://bit.ly/3HkdXoY
ChatGPT3 making text to image prompts (Manu. Vision | Futuriste) https://bit.ly/3UyyKrG
ChatGPT3 for video game creation (u/apinanaivot) https://bit.ly/3VI17oI
ChatGPT3 making video game land (Lucas Ferreira da Silva) https://bit.ly/3iMdotO
ChatGPT3 deleting blender default cube (Blender Renaissance) https://bit.ly/3FcM3rZ
ChatGPT3 responding about Matrix (Mario Reder) https://bit.ly/3UIsX2K
ChatGPT3 to write acquisition rational for the board of directors (The Secret CFO) https://bit.ly/3BhmmW5
ChatGPT3 to get job offers (Leon Noel) https://bit.ly/3UFl3qT
Automated rpa with ChatGPT3 (Sahar Mor) https://bit.ly/3W1ZkKK
ChatGPT3 making 3D web designs (Avalon‱4) https://bit.ly/3UzGXf7
ChatGPT3 making a legal contract (Atri) https://bit.ly/3BljuYn
ChatGPT3 making signup program (Chris Raroque) https://bit.ly/3Hrachc

#technology #tech #ai

There are many components that make The Orville: New Horizons a great show, and not the least of which are the beautiful visuals. Please enjoy this compilation of amazing shots from the first six episodes of the third season of The Orville. (Stay tuned for episodes 7–10 in Part 2.)

#RenewTheOrville

If you’d like to help support the channel:
https://buymeacoffee.com/JohnDiMarco
https://www.paypal.me/DarwinDiMarco
Thank you!

0:00 — Intro/Electric Sheep

A team of researchers at DeepMind Technologies Ltd. has created an AI application called “DeepNash” that is able to play the game Stratego at an expert level. In their paper published in the journal Science, the group describes the unique approach they took to improve the app’s level of play.

Stratego is a two-player board game and is considered to be difficult to master. The goal for each player is to capture their opponent’s flag, which is hidden among the opponent’s initial 40 game pieces. Each of the game pieces is marked with a power ranking—higher-ranked pieces defeat lower-ranked pieces in face-offs. Making the game more difficult is that neither player can see the markings on the opponent’s game pieces until they meet face-to-face.
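The basic face-off rule described above can be sketched in a few lines of Python. This is a hypothetical helper for illustration only; it deliberately ignores Stratego's special cases (the Spy capturing the Marshal, Miners defusing Bombs):

```python
def resolve_attack(attacker_rank: int, defender_rank: int) -> str:
    """Basic Stratego face-off: ranks are hidden until the moment of
    contact, then the higher-ranked piece wins the exchange."""
    if attacker_rank > defender_rank:
        return "attacker wins"
    if attacker_rank < defender_rank:
        return "defender wins"
    return "both removed"  # equal ranks eliminate each other


print(resolve_attack(10, 3))  # attacker wins
print(resolve_attack(4, 4))   # both removed
```

The fact that ranks are only revealed at the moment of contact is exactly what makes Stratego a game of imperfect information.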

Prior research has shown that the complexity of the game is higher than that of chess or Go, with roughly 10^535 possible game states. This level of complexity makes it extremely challenging for researchers attempting to create Stratego-playing AI systems. In this new effort, the researchers took a different approach, creating an agent capable of beating most human players and other AI systems.

Game-playing artificial intelligence (AI) systems have advanced to a new frontier. Stratego, the classic board game that’s more complex than chess and Go, and craftier than poker, has now been mastered. Published in Science, we present DeepNash, an AI agent that learned the game from scratch to a human expert level by playing against itself.

DeepNash uses a novel approach, based on game theory and model-free deep reinforcement learning. Its play style converges to a Nash equilibrium, which means its play is very hard for an opponent to exploit. So hard, in fact, that DeepNash has reached an all-time top-three ranking among human experts on the world’s biggest online Stratego platform, Gravon.
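DeepNash's actual algorithm (model-free deep reinforcement learning at Stratego scale) is far more involved, but what "converging to a Nash equilibrium" means can be illustrated on a toy zero-sum game. The sketch below uses regret-matching self-play — a standard equilibrium-finding method, not the one DeepNash uses — on rock-paper-scissors, whose unique equilibrium is the uniform mix; at equilibrium, no opponent strategy can exploit the player:

```python
import numpy as np

# Row player's payoffs for rock-paper-scissors: a zero-sum game whose
# unique Nash equilibrium is the uniform mix (1/3, 1/3, 1/3).
A = np.array([[ 0, -1,  1],
              [ 1,  0, -1],
              [-1,  1,  0]], dtype=float)

def rm_strategy(regrets):
    """Regret matching: play actions in proportion to positive regret."""
    pos = np.maximum(regrets, 0.0)
    return pos / pos.sum() if pos.sum() > 0 else np.full(3, 1 / 3)

# Asymmetric initial regrets so the self-play dynamics are non-trivial.
r_row = np.array([1.0, 0.0, 0.0])
r_col = np.array([0.0, 1.0, 0.0])
sum_row = np.zeros(3)
for _ in range(100_000):
    x, y = rm_strategy(r_row), rm_strategy(r_col)
    sum_row += x
    u_row = A @ y       # row's payoff for each action vs. col's mix
    u_col = -(x @ A)    # col's payoff for each action vs. row's mix
    r_row += u_row - x @ u_row
    r_col += u_col - y @ u_col

avg_row = sum_row / 100_000  # time-averaged strategy converges to the NE
print(avg_row)  # ≈ [1/3, 1/3, 1/3]: unexploitable play
```

The time-averaged strategy, not the last iterate, is what converges to equilibrium here; an opponent watching the averaged play finds nothing to exploit, which is the property the passage above attributes to DeepNash.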

Board games have historically been a measure of progress in the field of AI, allowing us to study how humans and machines develop and execute strategies in a controlled environment. Unlike chess and Go, Stratego is a game of imperfect information: players cannot directly observe the identities of their opponent’s pieces.

We then tested the new algorithms, called DQN with Proximal updates (DQN Pro) and Rainbow Pro, on a standard set of 55 Atari games. The graph of the results shows that the Pro agents outperform their counterparts: the basic DQN agent obtains human-level performance after 120 million interactions with the environment (frames), and Rainbow Pro achieves a 40% relative improvement over the original Rainbow agent.

Further, to ensure that proximal updates do in fact result in smoother and slower parameter changes, we measure the norm differences between consecutive DQN solutions. We expect the magnitude of our updates to be smaller when using proximal updates. In the graphs below, we confirm this expectation on the four different Atari games tested.

Overall, our empirical and theoretical results support the claim that when optimizing for a new solution in deep RL, it is beneficial for the optimizer to gravitate toward the previous solution. More importantly, we see that simple improvements in deep-RL optimization can lead to significant positive gains in the agent’s performance. We take this as evidence that further exploration of optimization algorithms in deep RL would be fruitful.
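As a toy illustration of the proximal idea (not the authors' actual DQN Pro code), the sketch below adds a term (c/2)·||w − w_prev||² to a simple quadratic regression loss, standing in for the TD loss, and compares the size of the change between consecutive solutions with and without it:

```python
import numpy as np

rng = np.random.default_rng(0)

def grad(w, target):
    return w - target  # gradient of the toy loss 0.5 * ||w - target||^2

w_plain, w_prox = np.zeros(5), np.zeros(5)
lr, c = 0.3, 1.0
plain_changes, prox_changes = [], []

for _ in range(20):                 # each round mimics one target-network period
    target = rng.normal(size=5)     # fresh noisy regression target
    plain_prev, prox_prev = w_plain.copy(), w_prox.copy()
    for _ in range(25):             # inner optimization steps toward this target
        w_plain -= lr * grad(w_plain, target)
        # proximal update: the extra c * (w - w_prev) gradient term pulls
        # the new solution back toward the previous one
        w_prox -= lr * (grad(w_prox, target) + c * (w_prox - prox_prev))
    plain_changes.append(np.linalg.norm(w_plain - plain_prev))
    prox_changes.append(np.linalg.norm(w_prox - prox_prev))

# Proximal solutions change less between rounds, i.e. smoother updates.
print(np.mean(prox_changes) < np.mean(plain_changes))  # True
```

With c = 1, each proximal solution lands at the midpoint between the new target and the previous solution, halving the step size — the same "gravitate toward the previous solution" effect the paragraph above describes.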

Deepmind introduces a new research framework for AI agents in simulated environments such as video games that can interact more flexibly and naturally with humans.

AI systems have achieved great success in video games such as Dota or StarCraft, defeating human professional players. This is made possible by precise reward functions that are tuned to optimize game outcomes: agents were trained on unambiguous wins and losses computed by code. Where such reward functions are possible, AI agents can sometimes achieve superhuman performance.
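A "precise reward function" of this kind can be as simple as a few lines of code. The sketch below uses hypothetical names, not code from any system mentioned here; it returns +1 for a win, -1 for a loss, and 0 otherwise:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class GameState:
    winner: Optional[str] = None  # None while the game is still in progress

def game_reward(state: GameState) -> float:
    """Sparse, code-defined reward: +1 for a win, -1 for a loss, 0 otherwise."""
    if state.winner == "agent":
        return 1.0
    if state.winner == "opponent":
        return -1.0
    return 0.0

print(game_reward(GameState(winner="agent")))  # 1.0
print(game_reward(GameState()))                # 0.0
```

For open-ended human behaviors, no function like this exists: there is no line of code that decides whether "help me tidy the room" was done well, which is the gap the framework below addresses.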

But often – especially for everyday human behaviors with open-ended outcomes – there is no such precise reward function.

A study in a virtual reality environment found that action video game players have better implicit temporal skills than non-gamers. They are better at preparing to time their reactions in tasks that require quick reactions and they do it automatically, without consciously working on it. The paper was published in Communications Biology.

Many research studies have shown that playing video games enhances cognition, including an increased ability to learn on the fly and improved control of attention. The extent of these improvements is unclear, and it also depends on the kind of gameplay.

Success in action video games depends on the players’ skill in making precise responses at just the right time. Players benefit from practice during which they refine their time-related expectations of in-game developments, even when they are unaware of it. This largely unconscious process of tracking time and preparing to react promptly, based on expectations of how the current situation will develop, is called incidental temporal processing.

An artificial intelligence (AI) agent named CICERO has mastered the online board game of Diplomacy. This is according to a new study by the Meta Fundamental AI Research Diplomacy Team (FAIR) that will be published today (November 22) in the journal Science.

AI has already been successful at playing competitive games like chess and Go, which can be learned using only self-play training. However, games like Diplomacy, which require natural language negotiation, cooperation, and competition between multiple players, have been challenging.

The new agent developed by FAIR is not only capable of imitating natural language, but more importantly, it also analyzes some of the goals, beliefs, and intentions of its human partners in the game. It uses that information to figure out a plan of action that accounts for aligned and competing interests, and to communicate that plan in natural language, the researchers say.

Nvidia has unveiled a new AI 3D model maker for game design: it takes text or photo input, outputs a 3D mesh, and can also edit and adjust 3D models from text descriptions. A new video style transfer tool from Nvidia uses CLIP to convert the style of 3D models and photos. A new differential-equation-based neural network machine learning AI from MIT solves brain dynamics.

AI News Timestamps:
0:00 Nvidia AI Turns Text To 3D Model Better Than Google
2:03 Nvidia 3D Object Style Transfer AI
4:56 New Machine Learning AI From MIT

#nvidia #ai #3D