
When Taylor Webb played around with GPT-3 in early 2022, he was blown away by what OpenAI’s large language model appeared to be able to do. Here was a neural network trained only to predict the next word in a block of text, a jumped-up autocomplete. And yet it gave correct answers to many of the abstract problems that Webb set for it, the kind of thing you’d find in an IQ test. “I was really shocked by its ability to solve these problems,” he says. “It completely upended everything I would have predicted.”

Webb is a psychologist at the University of California, Los Angeles, who studies the different ways people and computers solve abstract problems. He was used to building neural networks that had specific reasoning capabilities bolted on. But GPT-3 seemed to have learned them for free.
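Webb’s tests were posed entirely in text. As a rough illustration (not one of his actual test items), here is how a letter-string analogy of that kind might be sent to a model through the OpenAI Python client; the prompt and model name below are placeholders for the sake of the example.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# A letter-string analogy of the sort used in abstract-reasoning research:
# the model must infer the rule (increment the final letter) and apply it.
prompt = (
    "If a b c d changes to a b c e, what does i j k l change to? "
    "Answer with the letter sequence only."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # stand-in model name; Webb's study used GPT-3
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)  # expected: "i j k m"
```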

Google is the first Big Tech firm to publicly launch a watermarking tool for AI-generated images, after a group of companies pledged at the White House in July to develop such technology.

The tool, called SynthID, will initially be available only to users of Google’s AI image generator Imagen, which is hosted on Google Cloud’s machine learning platform Vertex. Users will be able to generate images using Imagen and then choose whether to add a watermark or not. The hope is that it could help people tell when AI-generated content is being passed off as real, or help protect copyright.
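SynthID’s watermarking method is proprietary, so the sketch below only illustrates the general concept of an imperceptible, machine-detectable watermark using a naive least-significant-bit scheme in NumPy. The function names are made up, and this is not Google’s algorithm, which is reportedly designed to survive common edits such as cropping and compression.

```python
import numpy as np

def embed_watermark(image: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Toy illustration: hide a bit pattern in the least significant bit of
    the blue channel. NOT SynthID's technique, only the general idea of an
    invisible, machine-detectable mark."""
    marked = image.copy()
    pixels = marked.reshape(-1, 3)  # contiguous view of the copy
    pixels[: bits.size, 2] = (pixels[: bits.size, 2] & 0xFE) | bits
    return marked

def detect_watermark(image: np.ndarray, n_bits: int) -> np.ndarray:
    """Read the hidden bits back out of the blue channel's LSBs."""
    return image.reshape(-1, 3)[:n_bits, 2] & 1

# Usage: mark a random RGB image with a 128-bit key, then recover it intact.
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
key = rng.integers(0, 2, size=128, dtype=np.uint8)
assert np.array_equal(detect_watermark(embed_watermark(img, key), key.size), key)
```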

The new robotic device is designed to be mass-produced.

Soft robotics is all the rage, with researchers coming up with new and improved designs all the time. There are soft robots that mimic muscles, soft robots that squeeze into tiny places, soft robots that are designed to function like seals, and even soft robots that split into smaller units.

There is a good reason why scientists are determined to keep producing these devices. The gentle machines show more promise for working safely alongside people, but so far they have been notoriously expensive to engineer, which has made them difficult to mass-produce.

The humanoid machine can undertake all kinds of general-purpose tasks.

A new report by the BBC quotes Geordie Rose, chief executive of Sanctuary AI, a firm engineering a robot for household chores and general-purpose tasks, who says the development is less than 10 years away.

What if we could replace a time-consuming analysis, an important prerequisite for judging the right mix of isotopes to use?

Why can’t we generate power the same way stars do: clean, renewable, and free of radioactive waste?

Humanity’s quest for clean and sustainable energy sources has reached a pivotal moment as researchers explore nuclear fusion. Unlike current nuclear fission plants that produce energy at the cost of radioactive waste, nuclear fusion offers the promise of virtually limitless and environmentally friendly power generation.
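For a sense of scale, the deuterium-tritium reaction that most fusion efforts target releases roughly 17.6 MeV per event, millions of times more energy than a typical chemical reaction. The back-of-the-envelope calculation below reproduces that figure from standard tabulated atomic masses.

```python
# Energy released by D + T -> He-4 + n, from the mass defect (E = dm * c^2).
U_TO_MEV = 931.494            # 1 atomic mass unit expressed in MeV/c^2
m_deuterium = 2.014102        # atomic mass in u
m_tritium   = 3.016049
m_helium4   = 4.002602
m_neutron   = 1.008665

mass_defect = (m_deuterium + m_tritium) - (m_helium4 + m_neutron)
energy_mev = mass_defect * U_TO_MEV
print(f"Mass defect: {mass_defect:.6f} u -> {energy_mev:.1f} MeV per reaction")
# ~17.6 MeV per fusion event, versus a few eV for a typical chemical bond.
```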

The Times, citing congressional expectations, reported that the Air Force’s collaborative combat aircraft will cost between $3 million and $25 million each, depending on whether they are classed as expendable, attritable, or exquisite. Even the higher-end figure is far less than the cost of a crewed aircraft and its pilot, both of which are valuable assets to the force.

Air Force and Department of Defense representatives did not immediately respond to Insider’s request for comment. Kratos Defense, which makes the Valkyrie, would not comment on collaborative combat aircraft, citing the classified nature of the program.

The Air Force’s Next Generation Air Dominance family-of-systems effort is focused on delivering air superiority through a crewed next-generation fighter jet supported by uncrewed collaborative combat aircraft. While the program has garnered widespread military support, human rights advocates are concerned that the unmanned war machines it includes pave the way to a “Terminator”-style dystopian future.

X’s new privacy policy, which is due to come into effect on September 29, states that the company “may use the information we collect and publicly available information to help train our machine learning or artificial intelligence models for the purposes outlined in this policy.” This policy is not included in its previous terms, which are still posted online.

Musk responded to a post about this change on X, saying that it would only use publicly available information to train the AI and would not use “DMs or anything private.”

During a live audio session on X – formerly Twitter – in July, Elon Musk said that his AI startup, xAI, would use public data from his social media platform to train its AI models. Insider reached out to X for comment but didn’t immediately hear back. It is not clear how xAI will use the information from X, or which AI models this relates to.

In what can only bode poorly for our species’ survival during the inevitable robot uprisings, an AI system has once again outperformed the people who trained it. This time, researchers at the University of Zurich, working in partnership with Intel, pitted their “Swift” AI piloting system against a trio of world champion drone racers, none of whom could best its top time.

Swift is the culmination of years of AI and machine learning research at the University of Zurich. In 2021, the team pitted an earlier iteration of the flight control algorithm, which used a series of external cameras to validate its position in space in real time, against amateur human pilots, all of whom were easily overmatched in every lap of every race during the test. That result was a milestone in its own right: previously, self-guided drones had relied on simplified physics models to continually calculate their optimum trajectory, which severely limited their top speed.

This week’s result is another milestone, not just because the AI bested people whose job is to fly drones fast, but because it did so without the cumbersome external camera arrays of its predecessor. The Swift system “reacts in real time to the data collected by an onboard camera, like the one used by human racers,” a University of Zurich (UZH) release reads. It uses an integrated inertial measurement unit to track acceleration and speed while an onboard neural network localizes its position in space using data from the front-facing cameras. All of that data is fed into a central control unit, itself a deep neural network, which crunches through the numbers and devises the shortest, fastest path around the track.
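That description amounts to a perception-to-control loop: IMU readings and camera-derived pose estimates are fused into a state, and a learned network maps that state to flight commands. The sketch below shows the rough shape of such a loop; every name, dimension, and network in it is a hypothetical stand-in, not the actual Swift stack.

```python
import numpy as np

# Hypothetical perception-to-control loop of the kind described above: the IMU
# supplies acceleration/rotation, a vision module estimates the drone's pose
# from the onboard camera, and a learned policy (stood in for by a tiny MLP)
# outputs thrust and body-rate commands. Nothing here comes from Swift itself.

rng = np.random.default_rng(0)
weights = [rng.normal(size=(16, 32)), rng.normal(size=(32, 4))]

def policy(state: np.ndarray, weights: list[np.ndarray]) -> np.ndarray:
    """Tiny stand-in for the deep control network: two dense layers."""
    hidden = np.tanh(state @ weights[0])
    return np.tanh(hidden @ weights[1])  # [thrust, roll, pitch, yaw rates]

def control_step(imu_sample: np.ndarray, camera_pose: np.ndarray,
                 gate_target: np.ndarray) -> np.ndarray:
    """One tick of the loop: concatenate sensor inputs into a state vector
    and query the policy for the next command."""
    state = np.concatenate([imu_sample, camera_pose, gate_target])
    return policy(state, weights)

# Example tick with dummy sensor values (6 IMU + 7 pose + 3 target = 16 dims).
command = control_step(np.zeros(6), np.zeros(7), np.array([1.0, 0.0, 0.5]))
print(command)  # four normalized control outputs
```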