
Ok, that was an unexpected turn on my feed. Just had to share. Cool, portable robot that fits in a backpack.


Conquer the Wild | LimX Dynamics’ Biped Robot P1 ventured into Tanglang Mountain Based on Reinforcement Learning ⛰️

⛳️ Using zero-shot learning under unprotected, fully open testing conditions, P1 successfully navigated completely unfamiliar forest wilderness, demonstrating the exceptional control and stability gained through reinforcement learning as it dynamically locomoted over various complex terrains.

The use of artificial intelligence in the development of video games has been met with both excitement and dread.

According to a recent industry report by game engine developer Unity, studios are already using AI to save time and boost productivity by whipping up assets and code.

But given enough time, the video games of the future could be created entirely with AI, maybe even within just ten years, according to Nvidia CEO Jensen Huang, whose company has greatly benefited from selling thousands of graphics processing units (GPUs) to some of the biggest players in the AI industry.

Using artificial intelligence, researchers have discovered mysterious “fairy circles” in hundreds of locations across the globe.

These unusual round vegetation patterns have long puzzled experts, dotting the landscapes in the Namib Desert and the Australian outback.

But according to a new study published in the journal Proceedings of the National Academy of Sciences, the unusual phenomenon could be far more widespread than previously thought, cracking the case wide open while raising far more questions than it answers.

😗😁😘 agi yay 😀 😍


The pursuit of artificial intelligence that can navigate and comprehend the intricacies of three-dimensional environments with the ease and adaptability of humans has long been a frontier in technology. At the heart of this exploration is the ambition to create AI agents that not only perceive their surroundings but also follow complex instructions articulated in the language of their human creators. Researchers are pushing the boundaries of what AI can achieve by bridging the gap between abstract verbal commands and concrete actions within digital worlds.

Researchers from Google DeepMind and the University of British Columbia focus on a groundbreaking AI framework, the Scalable, Instructable, Multiworld Agent (SIMA). This framework is not just another AI tool but a unique system designed to train AI agents in diverse simulated 3D environments, from meticulously designed research labs to the expansive realms of commercial video games. Its universal applicability sets SIMA apart, enabling it to understand and act upon instructions in any virtual setting, a feature that could revolutionize how everyone interacts with AI.

Creating a versatile AI that can interpret and act on instructions in natural language is no small feat. Earlier AI systems were trained in specific environments, which limited their usefulness in new situations. This is where SIMA steps in with its innovative approach. Training in various virtual settings allows SIMA to understand and execute multiple tasks, linking linguistic instructions with appropriate actions. This enhances its adaptability and deepens its understanding of language in the context of different 3D spaces, a significant step forward in AI development.
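The core idea described above is a single agent that receives only screen pixels and a text instruction, and emits the same generic keyboard-and-mouse actions in every environment. A minimal toy sketch of that interface in Python, using a trivial keyword lookup in place of a learned policy (all class, field, and action names here are hypothetical illustrations, not DeepMind's actual code):

```python
# Toy sketch of a SIMA-style language-conditioned agent interface:
# (observation, instruction) -> low-level keyboard/mouse action.
from dataclasses import dataclass


@dataclass
class Observation:
    """A single video frame plus an environment name. SIMA-style agents see
    only pixels, with no privileged access to internal game state."""
    pixels: bytes
    environment: str


class InstructableAgent:
    """Rule-based stand-in for a learned instruction-following policy."""

    # Hypothetical mapping from instruction keywords to generic actions.
    # The 'multiworld' idea: one shared keyboard-and-mouse action space
    # that works in any game or research environment.
    KEYWORD_ACTIONS = {
        "forward": "press_W",
        "left": "press_A",
        "collect": "click_left",
        "jump": "press_SPACE",
    }

    def act(self, obs: Observation, instruction: str) -> str:
        """Return one low-level action for the current frame and instruction."""
        for keyword, action in self.KEYWORD_ACTIONS.items():
            if keyword in instruction.lower():
                return action
        return "noop"  # no keyword recognized


agent = InstructableAgent()
frame = Observation(pixels=b"...", environment="research_lab")
print(agent.act(frame, "Walk forward to the door"))  # press_W
print(agent.act(frame, "Collect the wood"))          # click_left
```

Because both the observation format and the action vocabulary are environment-agnostic, the same `act` method can in principle be pointed at any 3D world; the real system replaces the keyword table with a model trained on many games at once.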

Ever since Archimedes suggested that all phenomena observable to us might be understandable through fundamental principles, humans have imagined the possibility of a theory of everything. Over the past century, physicists have edged nearer to unraveling this mystery. Albert Einstein’s theory of general relativity provides a solid basis for comprehending the cosmos at a large scale, while quantum mechanics allows us to grasp its workings at the subatomic level. The trouble is that the two systems don’t agree on how gravity works.

Today, artificial intelligence offers new hope for scientists addressing the massive computational challenges involved in unraveling the mysteries of something as complex as the universe and everything in it. Kent Yagi, an associate professor with the University of Virginia’s College and Graduate School of Arts & Sciences, is leading a research partnership between theoretical physicists and computational physicists at UVA that could offer new insight into the possibility of a theory of everything or, at least, a better understanding of gravity, one of the universe’s fundamental forces. The work has earned him a CAREER grant from the National Science Foundation, one of the most prestigious awards available to the nation’s most promising young researchers and educators.

A hot potato: ChatGPT, the chatbot that turned machine learning algorithms into a new gold rush for Wall Street speculators and Big Tech companies, is merely a “storefront” for large language models within the Generative Pre-trained Transformer (GPT) series. Developer OpenAI is now readying yet another upgrade for the technology.

OpenAI is busily working on GPT-5, the next generation of the company’s multimodal large language model that will replace the currently available GPT-4 model. Anonymous sources familiar with the matter told Business Insider that GPT-5 will launch by mid-2024, likely during summer.

OpenAI is developing GPT-5 with third-party organizations and recently showed a live demo of the technology geared to use cases and data sets specific to a particular company. The CEO of the unnamed firm was impressed by the demonstration, stating that GPT-5 is exceptionally good, even “materially better” than previous chatbot tech.

Because we live in a dystopian healthcare hell, AI chip manufacturer Nvidia has announced a partnership with an AI venture called Hippocratic AI to replace nurses with freaky AI “agents.”

These phony nursing robots cost hospitals and other health providers $9 an hour, a fee that falls barely above the US minimum hourly wage and far below the average hourly wage for registered nurses (RNs).

In a press release, Hippocratic AI described the disturbingly cheap nurses as part of an effort to mitigate staffing issues. The company also claims that the agents won’t be doing any diagnostic work, and will instead be doing “low-risk,” “patient-facing” tasks that can take place via video call.

This video explores the future of Mars colonization and terraforming from 2030 to 3000. Watch this next video about the 10 stages of AI: The 10 Stages of Artificial Intelligence.
🎁 5 Free ChatGPT Prompts To Become a Superhuman: https://bit.ly/3Oka9FM
🤖 AI for Business Leaders (Udacity Program): https://bit.ly/3Qjxkmu.
☕ My Patreon: / futurebusinesstech.
➡️ Official Discord Server: / discord.



💡 On this channel, I explain the following concepts: