One employee uses it to automate his weekly parking requests at OpenAI’s San Francisco office.

## Questions to Inspire Discussion
🤝 Q: What are the potential issues with the Uber-Lucid-Nuro robotaxi partnership? A: The partnership is a “cluster f waiting to happen” because it involves three independent entities, and such arrangements typically end in a “messy divorce,” making it potentially uncompetitive against fully integrated solutions like Tesla’s.
🗺️ Q: How does Tesla’s robotaxi service area expansion compare to Waymo’s? A: Tesla made its first service-area expansion just 22 days after launch, while Waymo’s first expansion in Austin, Texas came after 4 months and 13 days, demonstrating Tesla’s faster and more aggressive approach to expansion.
## Business Viability
💼 Q: What concerns exist about the Uber-Lucid-Nuro robotaxi partnership’s business case? A: While considered a “breakout moment” for autonomous vehicles, the business case and return on investment for the service remain unclear, according to former Ford CEO Mark Fields.
🏭 Q: What manufacturing advantage does Tesla have in the robotaxi market? A: Tesla’s fully vertically integrated approach and ability to mass-manufacture Cybercabs at a scale of tens of thousands per month gives it a significant cost-per-mile advantage over competitors using more expensive, non-specialized vehicles.

## Key Insights
This framework is made up of two key components. The first is a deep-learning model that essentially allows the robot to determine where it and its appendages are in 3-dimensional space. This allows it to predict how its position will change as specific movement commands are executed. The second is a machine-learning program that translates generic movement commands into code a robot can understand and execute.
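The article does not describe an interface for these two components, but the division of labor can be sketched with a toy linear stand-in: a forward model predicts the next 3D position given a low-level command, and a translator inverts that model to find the command that achieves a desired motion. The linear map `J` and all names below are illustrative assumptions, not the researchers’ code.

```python
import numpy as np

# Toy stand-in (illustrative, not the authors' model): a fixed linear map J
# from a 4-dof low-level command to the resulting 3D displacement.
rng = np.random.default_rng(0)
J = rng.normal(size=(3, 4))

def predict_state(pos_3d, command):
    """Component 1 (stand-in): predict the appendage's next 3D position
    after a low-level command is executed."""
    return pos_3d + J @ command

def translate(desired_displacement):
    """Component 2 (stand-in): turn a generic movement request (a desired
    3D displacement) into a low-level command by inverting the model."""
    cmd, *_ = np.linalg.lstsq(J, np.asarray(desired_displacement), rcond=None)
    return cmd

pos = np.zeros(3)
cmd = translate([0.05, 0.0, -0.02])  # "move 5 cm along +x and 2 cm along -z"
print(predict_state(pos, cmd))       # approximately [0.05, 0.0, -0.02]
```

In the real system the forward model is learned from video rather than given, but the same predict-then-invert loop is what lets generic commands drive arbitrary hardware.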
The team tested the new training and control paradigm by benchmarking its effectiveness against traditional camera-based control methods. The Jacobian field solution surpassed those existing 2D control systems in accuracy, especially when the team introduced visual occlusions that caused the older methods to fail. Machines using the team’s method, however, successfully created navigable 3D maps even when scenes were partially occluded with random clutter.
Once the scientists developed the framework, it was then applied to various robots with widely varying architectures. The end result was a control program that requires no further human intervention to train and operate robots using only a single video camera.
Researchers have developed a technique that significantly improves the performance of large language models without increasing the computational power necessary to fine-tune the models. The researchers demonstrated that their technique improves the performance of these models over previous techniques in tasks including commonsense reasoning, arithmetic reasoning, instruction following, code generation, and visual recognition.
Large language models are artificial intelligence systems that are pretrained on huge data sets. After pretraining, these models predict which words should follow each other in order to respond to user queries. However, the nonspecific nature of pretraining means that there is ample room for improvement with these models when the user queries are focused on specific topics, such as when a user requests the model to answer a math question or to write computer code.
“In order to improve a model’s ability to perform more specific tasks, you need to fine-tune the model,” says Tianfu Wu, co-corresponding author of a paper on the work and an associate professor of computer engineering at North Carolina State University.
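The article does not specify the new technique, but one widely used way to fine-tune without a large compute budget is a low-rank adapter in the LoRA style: freeze the pretrained weights and train only a small correction. The sketch below illustrates that general idea, not the NC State method itself.

```python
import torch
import torch.nn as nn

class LowRankAdapter(nn.Module):
    """Wrap a pretrained linear layer and train only a low-rank update."""
    def __init__(self, linear: nn.Linear, rank: int = 8):
        super().__init__()
        self.base = linear
        for p in self.base.parameters():      # freeze all pretrained parameters
            p.requires_grad_(False)
        d_out, d_in = linear.weight.shape
        self.A = nn.Parameter(torch.zeros(d_out, rank))        # A starts at zero,
        self.B = nn.Parameter(torch.randn(rank, d_in) * 0.01)  # so the output is
                                                               # unchanged at init
    def forward(self, x):
        # Pretrained projection plus the trainable low-rank correction A @ B.
        return self.base(x) + x @ (self.A @ self.B).T

layer = LowRankAdapter(nn.Linear(768, 768))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(trainable)  # 12,288 trainable values vs. 589,824 in the frozen weight
```

Because only the two small factors are updated, the memory and compute cost of fine-tuning stays far below that of retraining the full weight matrix, which is the constraint the researchers’ technique targets.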
Artificial intelligence is advancing at a dizzying speed. Like many new technologies, it offers significant benefits but also poses safety risks. Recognizing the potential dangers, leading researchers from Google DeepMind, OpenAI, Meta, Anthropic and a coalition of companies and nonprofit groups have come together to call for more to be done to monitor how AI systems “think.”
While roboticists have introduced increasingly advanced systems over the past decades, most existing robots still cannot manipulate objects with the same dexterity and sensing ability as humans. This limits their performance in real-world tasks ranging from household chores to clearing rubble after natural disasters, and to assembly and maintenance work in high-temperature environments such as steel mills and foundries, where elevated temperatures can significantly degrade performance and compromise the precision required for safe operation.
Researchers at the University of Southern California recently developed the MOTIF (Multimodal Observation with Thermal, Inertial, and Force sensors) hand, a new robotic hand that could improve the object manipulation capabilities of humanoid robots. The innovation, presented in a paper posted to the arXiv preprint server, features a combination of sensing devices, including tactile sensors, a depth sensor, a thermal camera, inertial measurement unit (IMU) sensors and a visual sensor.
“Our paper emerged from the need to advance robotic manipulation beyond traditional visual and tactile sensing,” Daniel Seita, Hanyang Zhou, Wenhao Liu, and Haozhe Lou told Tech Xplore. “Current multi-fingered robotic hands often lack the integrated sensing capabilities necessary for complex tasks involving thermal awareness and responsive contact feedback.”
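The paper’s software interface is not described in the article; the snippet below is only a hypothetical container (all field names and shapes are my assumptions) showing how readings from such a sensor suite might be bundled so a manipulation policy can consult them together, including a thermal safety check of the kind the foundry example above motivates.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class HandObservation:
    """One multimodal reading; field names and shapes are assumptions."""
    rgb: np.ndarray      # (H, W, 3) visual frame
    depth: np.ndarray    # (H, W) depth map, metres
    thermal: np.ndarray  # (H, W) thermal-camera frame, degrees Celsius
    imu: np.ndarray      # (n_links, 6) accelerometer + gyroscope readings
    tactile: np.ndarray  # (n_fingers, n_taxels) contact pressures

    def safe_to_grasp(self, limit_c: float = 60.0) -> bool:
        # Thermal awareness: decline contact when the surface is too hot.
        return float(self.thermal.max()) <= limit_c

obs = HandObservation(
    rgb=np.zeros((120, 160, 3)), depth=np.zeros((120, 160)),
    thermal=np.full((120, 160), 85.0),        # an 85 C surface, e.g. near a forge
    imu=np.zeros((5, 6)), tactile=np.zeros((5, 16)))
print(obs.safe_to_grasp())                    # False: exceeds the 60 C threshold
```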
Imagine a future where artificial intelligence quietly shoulders the drudgery of software development: refactoring tangled code, migrating legacy systems, and hunting down race conditions, so that human engineers can devote themselves to architecture, design, and the genuinely novel problems still beyond a machine’s reach.
Recent advances appear to have nudged that future tantalizingly close, but a new paper by researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and several collaborating institutions argues that this potential future reality demands a hard look at present-day challenges.
Titled “Challenges and Paths Towards AI for Software Engineering,” the work maps the many software-engineering tasks beyond code generation, identifies current bottlenecks, and highlights research directions to overcome them, aiming to let humans focus on high-level design while routine work is automated. The paper is available on the arXiv preprint server, and the researchers are presenting their work at the International Conference on Machine Learning (ICML 2025) in Vancouver.
The Graduate School of Information Science (GSIS) at Tohoku University and the Physics and Informatics (PHI) Lab at NTT Research, Inc. have jointly published a paper in the journal Quantum Science and Technology. The study examined a combinatorial clustering problem, a representative task in unsupervised machine learning.
Together, the two institutions are researching methods to realize a large-scale coherent Ising machine (CIM) simulation platform using conventional high-performance computing (HPC). Such a large-scale simulator will be critical to enabling cyber CIMs that are widely accessible for solving computationally hard problems, including NP-complete and NP-hard ones.
The collaboration kicked off in 2023 with Hiroaki Kobayashi, Professor at the GSIS at Tohoku University, acting as the principal investigator for the joint research agreement (JRA), with PHI Lab Director Yoshihisa Yamamoto joining as the NTT Research counterpart to Kobayashi.
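The article does not detail the paper’s formulation, but the core move of handing a clustering task to an Ising machine can be sketched in a few lines: encode each point’s cluster as a spin and choose couplings so that minimizing the Ising energy separates dissimilar points. The greedy local search below stands in for the CIM dynamics; the mapping and parameters are my illustration, not the paper’s.

```python
import numpy as np

# Two well-separated 2D blobs of 10 points each.
rng = np.random.default_rng(1)
pts = np.vstack([rng.normal(0, 0.3, (10, 2)),
                 rng.normal(2, 0.3, (10, 2))])

# Ising form: spin s_i = +/-1 is point i's cluster label. With couplings equal
# to squared distances, minimizing E = sum_ij d2_ij * s_i * s_j pushes far-apart
# points into opposite clusters (a max-cut on dissimilarity).
d2 = ((pts[:, None] - pts[None, :]) ** 2).sum(-1)

s = rng.choice([-1.0, 1.0], size=len(pts))
for _ in range(400):                      # greedy single-spin local search
    i = rng.integers(len(s))
    s[i] = -np.sign(d2[i] @ s) or 1.0     # anti-align with the local field
print(s.astype(int))                      # two opposite-sign blocks = 2 clusters
```

A physical or simulated CIM would be handed the same coupling matrix and would search the energy landscape with continuous optical dynamics instead of discrete spin flips; the scaling of that search is what the HPC simulation platform is meant to study.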
A research team affiliated with UNIST announced the successful development of a novel semiconductor device that exploits altermagnetism, a newly identified class of magnetic order. This breakthrough is expected to significantly advance the development of ultra-fast, energy-efficient AI semiconductor chips.
Jointly led by Professor Jung-Woo Yoo from the Department of Materials Science and Engineering and Professor Changhee Sohn from the Department of Physics at UNIST, the team succeeded in fabricating magnetic tunnel junctions (MTJs) using altermagnetic ruthenium oxide (RuO2). They also measured a practical level of tunneling magnetoresistance (TMR) in these devices, demonstrating their potential for spintronic applications.
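For context, tunneling magnetoresistance is conventionally quantified by comparing the junction’s resistance in its two magnetic configurations:

```latex
\mathrm{TMR} = \frac{R_{\mathrm{AP}} - R_{\mathrm{P}}}{R_{\mathrm{P}}} \times 100\%
```

where $R_{\mathrm{AP}}$ and $R_{\mathrm{P}}$ are the resistances in the antiparallel and parallel states. A larger ratio makes the two resistance states easier to distinguish as stored bits, which is why reaching a practical TMR level matters for spintronic memory.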
The research was led by Seunghyun Noh from the Department of Materials Science and Engineering and Kyuhyun Kim from the Department of Physics at UNIST. The findings were published in Physical Review Letters on June 20, 2025.
Humanoid robots — long seen as futuristic — are already here, walking, talking, and working among us. Here are 10 advanced examples.