
Tesla has finally released its Autopilot safety data report after a gap of more than a year.

For years, Tesla released a “Vehicle Safety Report” that tracked the miles driven between accidents in its vehicles, broken down by how much Autopilot was engaged, and compared those figures to the industry average.

The automaker used the report to claim that its Autopilot technology resulted in a much safer driving experience, and that even without Autopilot engaged, its vehicles crashed far less often than the average car in the US.

For now, ChatGPT operates in a mostly siloed fashion. It can’t yet venture out “into the wild” to execute online tasks. For example, if you wanted to buy a milk frother on Amazon for under $100, ChatGPT might be able to recommend a product or two, and even provide links, but it can’t actually navigate Amazon and make the purchase.

Why? Besides obvious concerns, like letting a flawed AI model go on a shopping spree with your credit card, one challenge lies in training AI to successfully navigate graphical user interfaces (GUIs), like your laptop or smartphone screen.

But even the current version of GPT-4 seems to grasp the basic steps of online shopping. That’s the takeaway of a recent preprint paper in which AI researchers described how they successfully trained a GPT-4-based agent to “buy” products on Amazon. The agent, dubbed MM-Navigator, did not actually purchase products, but it was able to analyze screenshots of an iOS smartphone screen and specify the appropriate action and where it should click, with impressive accuracy.
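To make the basic loop concrete, here is a minimal sketch of how a vision-capable model can be asked to propose the next GUI action from a screenshot. This is not the MM-Navigator authors’ code or pipeline; it assumes the OpenAI Python SDK, and the model name, prompt wording, and file names are illustrative assumptions.

```python
# Minimal sketch (not the MM-Navigator authors' pipeline): send a phone
# screenshot plus a shopping goal to a vision-capable model and ask for the
# single next UI action. Assumes the OpenAI Python SDK; the model name
# "gpt-4o" and the prompt wording are assumptions for illustration.
import base64
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def propose_next_action(screenshot_path: str, goal: str) -> str:
    # Encode the screenshot so it can be passed inline as a data URL.
    with open(screenshot_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("utf-8")

    response = client.chat.completions.create(
        model="gpt-4o",  # any vision-capable model; this name is an assumption
        messages=[
            {
                "role": "user",
                "content": [
                    {
                        "type": "text",
                        "text": (
                            f"Goal: {goal}\n"
                            "Given this phone screenshot, describe the single "
                            "next UI action: which element to tap, or what to type."
                        ),
                    },
                    {
                        "type": "image_url",
                        "image_url": {"url": f"data:image/png;base64,{image_b64}"},
                    },
                ],
            }
        ],
    )
    return response.choices[0].message.content


# Hypothetical usage:
# print(propose_next_action("amazon_home.png", "buy a milk frother under $100"))
```

A real agent would loop this step, feeding back a fresh screenshot after each action is executed by a separate controller, and would need grounding logic to turn the model’s text description into exact tap coordinates.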

Apptronik, a NASA-backed robotics company, has unveiled Apollo, a humanoid robot that could revolutionize the workforce — because there’s virtually no limit to the number of jobs it can do.

“The focus for Apptronik is to build one robot that can do thousands of different things,” Jeff Cardenas, the company’s co-founder and CEO, told Freethink. “The best way to think of it is kind of like the iPhone of robots.”

The challenge: Robots have been automating repetitive tasks for decades — instead of having a person weld the same two car parts together 100 times a day, for example, an automaker might just add a welding robot to that segment of the assembly line.

The future of space-based UV/optical/IR astronomy requires ever larger telescopes. The highest-priority astrophysics targets, including Earth-like exoplanets, first-generation stars, and early galaxies, are all extremely faint; they push current missions to their limits and define the opportunity space for next-generation observatories, and larger apertures are the primary way to reach them.
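Two textbook relations (not specific to any mission) show why aperture diameter D matters so much for faint targets: light-gathering power grows with the square of the aperture, while the diffraction-limited angular resolution at wavelength λ improves linearly with it.

$$ A \propto D^{2}, \qquad \theta_{\min} \approx 1.22\,\frac{\lambda}{D} $$

Doubling the aperture therefore quadruples the photon collection rate and halves the smallest resolvable angle at a given wavelength.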

With mission costs depending strongly on aperture diameter, scaling current space telescope technologies to aperture sizes beyond 10 m does not appear economically viable. Without a breakthrough in scalable technologies for large telescopes, future advances in astrophysics may slow down or even completely stall. Thus, there is a need for cost-effective solutions to scale space telescopes to larger sizes.
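As a purely illustrative calculation (the exponent is an assumption for illustration, not a figure from the FLUTE team; published parametric cost models vary), even a modest power-law dependence of cost C on aperture D makes large monolithic apertures expensive quickly:

$$ C \propto D^{\alpha} \;\Rightarrow\; \frac{C(10\,\mathrm{m})}{C(6.5\,\mathrm{m})} = \left(\frac{10}{6.5}\right)^{\alpha} \approx 2.4 \quad \text{for } \alpha = 2, $$

where 6.5 m is roughly the aperture of JWST; steeper exponents make the penalty correspondingly worse.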

The FLUTE project aims to overcome the limitations of current approaches by paving a path towards space observatories with large aperture, unsegmented liquid primary mirrors, suitable for a variety of astronomical applications. Such mirrors would be created in space via a novel approach based on fluidic shaping in microgravity, which has already been successfully demonstrated in a laboratory neutral buoyancy environment, in parabolic microgravity flights, and aboard the International Space Station (ISS).