
For the most part, we treat electric aviation like it’s something we’ll see in the future. I mean, batteries are expensive and heavy, and they don’t hold much energy per unit of weight. So, compared to, say, kerosene (jet fuel), batteries eat up far more of a plane’s space and weight budget. This means either really poor range or a plane that carries little but batteries (which isn’t very useful).
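To put rough numbers on that gap, here’s a quick back-of-the-envelope comparison. The specific-energy and efficiency figures below are approximate ballpark values, not measurements from any particular aircraft:

```typescript
// Approximate specific energy (kWh per kg); illustrative ballpark values.
const JET_FUEL_KWH_PER_KG = 12.0; // kerosene, ~43 MJ/kg
const LI_ION_KWH_PER_KG = 0.25;   // a good modern lithium-ion pack

// Electric drivetrains convert stored energy to thrust more efficiently
// than turbines burning kerosene (rough assumed efficiencies).
const ELECTRIC_EFFICIENCY = 0.9;
const TURBINE_EFFICIENCY = 0.35;

const usefulJet = JET_FUEL_KWH_PER_KG * TURBINE_EFFICIENCY;    // ~4.2 kWh/kg
const usefulBattery = LI_ION_KWH_PER_KG * ELECTRIC_EFFICIENCY; // ~0.23 kWh/kg

console.log(`Useful energy per kg, fuel vs. battery: ~${(usefulJet / usefulBattery).toFixed(0)}x`);
// => ~19x. Even after crediting the motor's efficiency advantage, kerosene
// still delivers roughly an order of magnitude more useful energy per kg.
```

And unlike fuel, a battery weighs just as much empty as full, so the penalty doesn’t shrink over the course of a flight.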

But that’s only true for the largest of planes. The smaller the plane, the easier it has been for companies to electrify it, or even go fully electric. Once you get down to unmanned planes and helicopters that carry something like a small sensor payload (cameras, etc.), you’re in a realm where all-electric aviation has been around for over a decade.

That said, small unmanned systems like quadcopters tend to fly for only 30–45 minutes at most, while small fixed-wing remotely piloted airplanes tend to fly for maybe 1–2 hours. What if you want to fly for many hours, or even days, to cover more ground? It turns out there are some answers, and they usually involve solar.

Google is testing a new API that uses machine learning models to offer real-time language translation of text input and to make it easier to translate web pages.

According to a proposal spotted by Bleeping Computer, the feature is being developed by Chrome’s built-in AI team and is aimed at exposing the browser’s built-in translation functionality to web pages, along with the ability to download additional language models for translating text.

While Chrome and Edge already have built-in translation features, they can sometimes have issues translating web pages that have dynamic or complex content. For example, Chrome may not be able to translate all sections of an interactive website correctly.
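For developers, the early explainer for the proposal sketches an interface roughly like the one below. Treat this as a non-authoritative sketch: the method and status names come from the draft proposal and may change before anything ships in stable Chrome.

```typescript
// Sketch of the proposed built-in translation API, following the shapes
// described in the early explainer (names may change as the proposal evolves).
declare const translation: {
  // Reports 'readily', 'after-download' (a language model must be fetched
  // first), or 'no' for a given language pair.
  canTranslate(opts: { sourceLanguage: string; targetLanguage: string }): Promise<string>;
  createTranslator(opts: { sourceLanguage: string; targetLanguage: string }): Promise<{
    translate(text: string): Promise<string>;
  }>;
};

async function translateSnippet(text: string): Promise<string> {
  const pair = { sourceLanguage: 'en', targetLanguage: 'ja' };

  const availability = await translation.canTranslate(pair);
  if (availability === 'no') {
    throw new Error('This language pair is not supported.');
  }

  // Creating the translator triggers the model download when needed.
  const translator = await translation.createTranslator(pair);
  return translator.translate(text);
}
```

The notable part is the download step: rather than shipping every language model with the browser, Chrome would fetch additional models on demand, which is what the proposal’s model-download functionality is about.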

The potential pathways through which AI could help us escape a simulated reality are both fascinating and complex. One approach could involve AI discovering and manipulating the underlying algorithms that govern the simulation. By understanding these algorithms, AI could theoretically alter the simulation’s parameters or even create a bridge to the “real” world outside the simulation.

Another approach involves using AI to enhance our cognitive and perceptual abilities, enabling us to detect inconsistencies or anomalies within the simulation. These anomalies, often referred to as “glitches,” could serve as clues pointing to the artificial nature of our reality. For instance, moments of déjà vu or inexplicable phenomena might be more than just quirks of human perception—they could be signs of the simulation’s imperfections.

While the idea of escaping a simulation is intriguing, it also raises profound ethical and existential questions. For one, if we were to confirm that we are indeed living in a simulation, what would that mean for our understanding of free will, identity, and the meaning of life? Moreover, the act of escaping the simulation could have unforeseen consequences. If the simulation is designed to sustain and nurture human life, breaking free from it might expose us to a harsher and more dangerous reality.

I expect this around 2029/2030, so about five-ish years out. Phase 1 of it will be: “Hey AI, I didn’t really like that level, mission, storyline, etc.” … edits on the fly. Phase 2 of it will be creating DLC on the fly. And Phase 3 will be just telling an AI roughly what you want to play, and it tries to build it.


Publishing giant Electronic Arts shows a concept of the different ways users could generate their own content in a game using generative AI.

What just happened? Researchers have successfully deployed a fully autonomous robot to inspect the inside of a nuclear fusion reactor. This achievement – the first of its kind – took place over 35 days as part of trials at the UK Atomic Energy Authority’s Joint European Torus facility.

JET was one of the world’s largest and most powerful operational fusion reactors until it was recently shut down. Meanwhile, the robotic star of the show was, of course, the four-legged Spot robot from Boston Dynamics, souped up with “localization and mission autonomy solutions” from the Oxford Robotics Institute (ORI) and “inspection payload” from UKAEA.

Spot roamed JET’s environment twice daily, using sensors to map the facility layout, monitor conditions, steer around obstacles and personnel, and collect vital data. These inspection duties normally require human operators to control the robot remotely.

Robotic exoskeletons are an increasingly popular method for assisting human labor in the workplace. Those that specifically support the back, however, can result in bad lifting form by the wearer. To combat this, researchers at the University of Michigan have built a pair of robot knee exoskeletons, using commercially available drone motors and knee braces.

“Rather than directly bracing the back and giving up on proper lifting form,” U-M professor Robert Gregg notes, “we strengthen the legs to maintain it.”

Test subjects were required to move a 30-pound kettlebell up and down a flight of stairs. Researchers note that the tech helped the subjects maintain good lifting form while also lifting more quickly.

The study, published Monday in the Canadian Medical Association Journal, found a 26 per cent reduction in non-palliative deaths among patients in St. Michael’s Hospital’s general internal medicine unit when the AI tool was used.

“We’ve seen that there is a lot of hype and excitement around artificial intelligence in medicine. We’ve also seen not as much actual deployment of these tools in real clinical environments,” said lead author Dr. Amol Verma, a general internal medicine specialist and scientist at the hospital in Toronto.

Our final estimate of the achievable inter data center bandwidth by 2030 is 4 to 20 Pbps, which would allow for training runs of 3e29 to 2e31 FLOP. In light of this, bandwidth is unlikely to be a major constraint for a distributed training run compared to achieving the necessary power supply in the first place.
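As a quick sanity check on why links of that scale are sufficient, consider how long one cross-site gradient synchronization would take. This is a purely illustrative calculation with assumed numbers (a hypothetical 1e14-parameter model with fp16 gradients), not the report’s actual methodology:

```typescript
// Illustrative: time to ship one full set of gradients between two data
// centers over the low end of the estimated 2030 bandwidth.
const PARAMS = 1e14;              // hypothetical 2030-scale model (assumed)
const BYTES_PER_GRADIENT = 2;     // fp16/bf16 gradients
const LINK_BITS_PER_SEC = 4e15;   // 4 Pbps, the low end of the range above

const bitsPerSync = PARAMS * BYTES_PER_GRADIENT * 8;    // 1.6e15 bits
const secondsPerSync = bitsPerSync / LINK_BITS_PER_SEC; // 0.4 s

console.log(`~${secondsPerSync.toFixed(1)} s per full gradient exchange`);
// Well under a second per synchronization even for an enormous model,
// consistent with bandwidth not being the binding constraint.
```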

Expanding bandwidth capacity for distributed training networks presents a relatively straightforward engineering challenge, achievable through the deployment of additional fiber pairs between data centers. In the context of AI training runs potentially costing hundreds of billions of dollars, the financial investment required for such bandwidth expansion appears comparatively modest.[44]

We conclude that training runs in 2030 supported by a local power supply could likely involve 1 to 5 GW and reach 1e28 to 3e29 FLOP. Meanwhile, geographically distributed training runs could amass a supply of 2 to 45 GW and achieve 4 to 20 Pbps connections between data center pairs, allowing for training runs of 2e28 to 2e30 FLOP.[45] All in all, it seems likely that training runs between 2e28 and 2e30 FLOP will be possible by 2030.[46] The assumptions behind these estimates can be found in Figure 3 below.
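To see how those power figures translate into training FLOP, here is a back-of-the-envelope version of the estimate. Every constant below is an illustrative assumption about 2030-era hardware, not a figure taken from the report:

```typescript
// Back-of-the-envelope: training compute achievable from a power budget.
const POWER_WATTS = 5e9;          // 5 GW, the top of the local-supply range
const PEAK_FLOP_PER_WATT = 4e12;  // assumed 2030 accelerator efficiency (FLOP/s per W)
const UTILIZATION = 0.3;          // assumed realized fraction of peak FLOP/s
const RUN_SECONDS = 100 * 86_400; // an assumed 100-day training run

const trainingFlop = POWER_WATTS * PEAK_FLOP_PER_WATT * UTILIZATION * RUN_SECONDS;
console.log(trainingFlop.toExponential(1)); // ~5.2e28 FLOP
// This lands inside the 1e28 to 3e29 range quoted above for a 1 to 5 GW
// local power supply; the spread in the published range comes from varying
// exactly these kinds of assumptions.
```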