Elon Musk praises Tesla’s Full Self-Driving system for its role in aiding a Tesla owner during a mild heart attack.
Accuracy of gene regulatory network inference is increased by combining multiome single-cell and atlas-scale bulk data.
Amazon.com Inc. is rapidly advancing its use of robotics, deploying over 750,000 robots to work alongside its employees.
The world’s second-largest private employer has 1.5 million employees, down more than 100,000 from the 1.6 million workers it had in 2021. Its robot fleet, by contrast, has grown from 200,000 in 2019 to 520,000 in 2022. While Amazon is bringing on hundreds of thousands of robots per year, it is slowly shrinking its human workforce.
The robots, including new models like Sequoia and Digit, are designed to perform repetitive tasks, thereby improving efficiency, safety and delivery speed for Amazon’s customers. Sequoia, for example, speeds up inventory management and order processing in fulfillment centers, while Digit, a bipedal robot developed in collaboration with Agility Robotics, handles tasks like moving empty tote boxes.
The unstoppable advance of AI is changing the rules, creating new winners and losers in the increasingly important semiconductor sector. Here is a review of the battles being waged in the supply chain and the main players fighting for dominance.
After months of leaks, OpenAI has reportedly fired two researchers said to be linked to the disclosure of company secrets.
As The Information reports based on insider knowledge, the firm has fired researchers Leopold Aschenbrenner — said to be a close ally of embattled cofounder Ilya Sutskever — and Pavel Izmailov.
Both men had worked on OpenAI’s safety team, per the report, though most recently, Aschenbrenner had been working on its so-called “superalignment” efforts while Izmailov was tasked with researching AI reasoning. We’ve reached out to OpenAI and Microsoft, its major investor, for further context.
How Do Machines ‘Grok’ Data?
Researchers have seen neural networks discover novel solutions to problems, apparently by overtraining them.
Tesla has revealed some of its humanoid robot technology through filings for several new patents related to its Optimus robot program.
A few months ago, Tesla unveiled “Optimus Gen 2”, a new generation of its humanoid robot that should be able to take over repetitive tasks from humans.
The new prototype showed a lot of improvements compared to previously underwhelming versions of the robot, and it gave some credibility to the project.
A new study suggests that advanced AI models are pretty good at acting dumber than they are — which might have massive implications as they continue to get smarter.
In a study published in the journal PLOS One, researchers from Berlin’s Humboldt University tested large language models (LLMs) against so-called “theory of mind” criteria and found that the models can not only mimic the language-learning stages exhibited in children, but also seem to express something akin to the mental capabilities associated with those stages.
In an interview with PsyPost, Humboldt University research assistant and main study author Anna Maklová, who also happens to be a psycholinguistics expert, explained how her field of study relates to the fascinating finding.
A UC San Diego team has uncovered a method to decipher neural networks’ learning process, using a statistical formula to clarify how features are learned, a breakthrough that promises more understandable and efficient AI systems.
Neural networks have been powering breakthroughs in artificial intelligence, including the large language models that are now being used in a wide range of applications, from finance to human resources to healthcare. But these networks remain a black box whose inner workings engineers and scientists struggle to understand. Now, a team led by data and computer scientists at the University of California San Diego has given neural networks the equivalent of an X-ray to uncover how they actually learn.
The researchers found that a formula used in statistical analysis provides a streamlined mathematical description of how neural networks, such as GPT-2, a precursor to ChatGPT, learn relevant patterns in data, known as features. This formula also explains how neural networks use these relevant patterns to make predictions.
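The article does not name the formula, but the statistical object widely reported in coverage of this UCSD work is the average gradient outer product (AGOP): the input-gradient of the network's output, averaged as an outer product over the data, whose dominant directions indicate which input features the network has learned to use. The sketch below is illustrative only; the toy network and data are assumptions, not taken from the study.

```python
import numpy as np

# Hedged sketch of the average gradient outer product (AGOP):
#   AGOP = (1/n) * sum_i grad f(x_i) @ grad f(x_i).T
# Large diagonal entries of the AGOP mark input coordinates the
# model actually relies on. Toy setup, not the study's own code.

rng = np.random.default_rng(0)

# Tiny one-layer network f(x) = sum(tanh(W @ x)) that, by
# construction, uses only the first two of five input coordinates.
d, h, n = 5, 8, 200
W = np.zeros((h, d))
W[:, :2] = rng.normal(size=(h, 2))  # weights touch features 0 and 1 only

def grad_f(x):
    # Analytic gradient of f(x) = sum(tanh(W @ x)) with respect to x.
    return W.T @ (1.0 - np.tanh(W @ x) ** 2)

X = rng.normal(size=(n, d))
agop = sum(np.outer(grad_f(x), grad_f(x)) for x in X) / n

# The AGOP diagonal ranks input coordinates by importance: the two
# coordinates the network uses dominate, the unused ones are zero.
importance = np.diag(agop)
print(importance.round(3))
```

On this toy model the last three diagonal entries are exactly zero, because the weight columns for those coordinates are zero, so the "X-ray" cleanly recovers which features were learned.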