Year 2021
It only took the robotic arm 90 minutes of both virtual and physical training to learn to play the game.
"It's not worse than a regular human player," Zelle said. "It's already on par with me."
Advances in AI mean robots could be doing your weekly shop by the 2030s, according to a new study, and this could help close the gender gap. Here's how.
Additionally, with the boom in artificial intelligence (AI) and large language models, Astro's capabilities will only continue to improve, enabling it to handle increasingly challenging queries and requests. Amazon is investing billions of dollars into its SageMaker platform as a means to "Build, train, and deploy machine learning (ML) models for any use case with fully managed infrastructure, tools, and workflows." Furthermore, the company's Bedrock platform enables the "development of generative AI applications using [foundational models] through an API, without managing infrastructure." Undoubtedly, Amazon has the resources and technical prowess to make significant strides in generative AI and machine learning, and it will increasingly do so in the coming years.
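To make the "through an API, without managing infrastructure" point concrete, here is a minimal sketch of invoking a hosted foundation model with the AWS SDK for Python (boto3). The region, model ID, and request payload shown are illustrative assumptions, not details from Amazon's announcement; the exact payload schema varies by model family.

```python
# Minimal sketch of calling a hosted foundation model via Amazon Bedrock.
# Assumptions: boto3 is installed, AWS credentials are configured, and the
# account has access to the (illustrative) Titan text model named below.
import json

import boto3

# Bedrock exposes model inference through the "bedrock-runtime" client.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

# The request body schema differs per model family; this payload follows the
# general shape used by Amazon's Titan text models and is only an example.
body = json.dumps({
    "inputText": "Summarize how home robots might help with healthcare at home.",
    "textGenerationConfig": {"maxTokenCount": 256, "temperature": 0.5},
})

response = client.invoke_model(
    modelId="amazon.titan-text-express-v1",  # illustrative model ID
    contentType="application/json",
    accept="application/json",
    body=body,
)

# The response body is a stream; decode it to get the generated output.
result = json.loads(response["body"].read())
print(result)
```

The point of the sketch is simply that the model is consumed as a managed service call; there is no infrastructure for the caller to provision or scale.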
However, it is important to note that Astro is not the only gladiator in the arena. AI enthusiast and Tesla CEO Elon Musk announced last year that Tesla is actively working on developing a humanoid robot named "Optimus." The stated goal of the project is to "Create a general purpose, bi-pedal, autonomous humanoid robot capable of performing unsafe, repetitive or boring tasks. Achieving that end goal requires building the software stacks that enable balance, navigation, perception and interaction with the physical world." Musk has also said that the bot will be powered by Tesla's advanced AI technology, meaning that it will be an intelligent, self-teaching bot that can respond to second-order queries and commands. Again, with enough time and testing, this technology could be leveraged positively for healthcare-at-home needs and many other potential uses.
This is certainly an exciting and unprecedented time across multiple industries, including artificial intelligence, advanced robotics, and healthcare. The coming years will assuredly push the boundaries of this technology and its applications. That advancement will undoubtedly bring certain challenges; handled correctly, however, it may also provide the means to benefit millions of people globally.
In case anyone is wondering how advances like ChatGPT are possible while Moore's Law is dramatically slowing down, here's what is happening:
Nvidia's latest chip, the H100, can do 34 teraFLOPS of FP64, the standard 64-bit floating-point format that supercomputers are ranked on. But the same chip can do 3,958 teraFLOPS of FP8 Tensor Core math. FP8 packs each value into 8 bits instead of 64, trading precision for throughput, and Tensor Cores accelerate matrix operations, particularly matrix multiplication and accumulation, which are used extensively in deep learning calculations.
So by specializing in the operations AI cares about, the chip delivers more than 100 times the throughput!
A massive leap in accelerated compute.
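A quick back-of-the-envelope check of that ratio, using only the figures quoted above (peak numbers for different precisions, not sustained throughput), as a minimal Python sketch:

```python
# Rough speedup arithmetic for the H100 figures quoted above.
fp64_tflops = 34          # standard FP64, the format TOP500 rankings use
fp8_tensor_tflops = 3958  # FP8 with Tensor Cores (matrix multiply-accumulate)

speedup = fp8_tensor_tflops / fp64_tflops
print(f"FP8 Tensor Core vs FP64: ~{speedup:.0f}x")  # ~116x

# The trade-off: FP8 stores each value in 8 bits instead of 64, covering a far
# narrower range with far less precision -- acceptable for deep-learning matrix
# math, not for classic double-precision scientific computing.
bits_fp64, bits_fp8 = 64, 8
print(f"Bits per value: {bits_fp64} vs {bits_fp8} ({bits_fp64 // bits_fp8}x fewer)")
```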
Spotify ramps up policing after complaints of "artificial streaming."
Spotify, the world's most popular music streaming subscription service, has reportedly pulled down tens of thousands of songs that were uploaded by the AI music company Boomy, which came under suspicion of "artificial streaming."
Spotify took down around 7% of the AI-generated tracks created by Boomy, whose users have, to date, created a total of 14,591,095 songs, which the company claims amounts to 13.95% of the world's recorded music.
On Wednesday, Google unveiled the second generation of its Pathways Language Model (PaLM), called PaLM 2. The new large language model (LLM) will power the latest version of the company's ChatGPT-rivalling artificial intelligence (AI) chatbot, Bard, and Google has claimed to have significantly improved the capabilities of its latest AI model over its predecessor. The list of upgrades to PaLM is similar to the changes that OpenAI announced with the release of its latest LLM, Generative Pre-trained Transformer (GPT)-4, but with a few key differences.
What is Google PaLM 2?
In a blog post announcing the rollout, Zoubin Ghahramani, vice-president at Google's AI research division DeepMind, said that PaLM 2 is a "state-of-the-art language model with improved multilingual, reasoning and coding capabilities."
Artificial intelligence could potentially replace 80% of jobs "in the next few years," according to AI expert Ben Goertzel.
Goertzel, the founder and chief executive officer of SingularityNET, told France's AFP news agency at a summit in Brazil last week that a future like that could come to fruition with the introduction of systems like OpenAI's ChatGPT.
"I don't think it's a threat. I think it's a benefit. People can find better things to do with their life than work for a living… Pretty much every job involving paperwork should be automatable," he said.
And imagine if every penny sunk into this was available for AI research right now.
Meta sank tens of billions into its CEO's virtual reality dream, but what will he do next?
Researchers with the Department of Energy (DOE) and the University of Chicago have developed an algorithm that allows more precise forecasts of the positions and velocities of the particles in a beam as it passes through an accelerator.
The linear accelerator at the DOE's SLAC National Accelerator Laboratory fires bursts of close to one billion electrons, traveling at nearly the speed of light, through long metallic pipes to generate its particle beam. Located in Menlo Park, California, the facility, originally called the Stanford Linear Accelerator Center, has used its 3.2-kilometer accelerator since its construction in 1962 to propel electrons to energies as high as 50 gigaelectronvolts (GeV).
The powerful particle beam generated by SLAC's linear accelerator is used to study everything from innovative materials to the behavior of molecules at the atomic scale, even though the beam itself remains somewhat mysterious: researchers have a hard time gauging what it looks like as it passes through the accelerator.
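For context on what "positions and velocities of the particles in a beam" refers to: accelerator physicists describe each particle by phase-space coordinates such as transverse position x and angle x' (a proxy for transverse velocity), and textbook beam optics propagates those coordinates through simple elements with transfer matrices. The toy sketch below is not the DOE/SLAC team's algorithm; under assumed beam sizes and an assumed drift length, it only illustrates the kind of quantities such an algorithm tries to predict.

```python
# Toy illustration of a beam's phase-space distribution (position x, angle x'),
# propagated through a simple drift section with the textbook transfer matrix
# [[1, L], [0, 1]]. This is NOT the SLAC reconstruction algorithm -- just a
# sketch of the quantities such an algorithm is trying to forecast.
import numpy as np

rng = np.random.default_rng(0)

n_particles = 100_000
sigma_x = 1e-4   # assumed RMS beam size in metres
sigma_xp = 1e-5  # assumed RMS angular spread in radians

# Sample a Gaussian bunch in (x, x') phase space: shape (2, n_particles).
bunch = np.vstack([
    rng.normal(0.0, sigma_x, n_particles),
    rng.normal(0.0, sigma_xp, n_particles),
])

L = 10.0  # assumed drift length in metres
drift = np.array([[1.0, L],
                  [0.0, 1.0]])

bunch_out = drift @ bunch  # each particle: x -> x + L * x', x' unchanged

print("RMS beam size before drift:", bunch[0].std())
print("RMS beam size after drift: ", bunch_out[0].std())
```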