
Scientists Reveal: We’re Nearly Living in a Simulation. AI Can Help Us Hack and Escape

The potential pathways through which AI could help us escape a simulated reality are both fascinating and complex. One approach could involve AI discovering and manipulating the underlying algorithms that govern the simulation. By understanding these algorithms, AI could theoretically alter the simulation’s parameters or even create a bridge to the “real” world outside the simulation.

Another approach involves using AI to enhance our cognitive and perceptual abilities, enabling us to detect inconsistencies or anomalies within the simulation. These anomalies, often referred to as “glitches,” could serve as clues pointing to the artificial nature of our reality. For instance, moments of déjà vu or inexplicable phenomena might be more than just quirks of human perception—they could be signs of the simulation’s imperfections.

While the idea of escaping a simulation is intriguing, it also raises profound ethical and existential questions. For one, if we were to confirm that we are indeed living in a simulation, what would that mean for our understanding of free will, identity, and the meaning of life? Moreover, the act of escaping the simulation could have unforeseen consequences. If the simulation is designed to sustain and nurture human life, breaking free from it might expose us to a harsher and more dangerous reality.

EA Looking to Use AI to Take User-Generated Content to the Next Level

I expect this around 2029/2030, so in about five years. Phase 1 will be on-the-fly edits: "Hey AI, I didn't really like that level, mission, storyline, etc." Phase 2 will be creating DLC on the fly. And Phase 3 will be simply telling an AI roughly what you want to play and letting it try to build it.


Publishing giant Electronic Arts shows a concept of the different ways users could generate their own content in a game using generative AI.

Autonomous robot replaces human fusion reactor inspectors in world-first trial

What just happened? Researchers have successfully deployed a fully autonomous robot to inspect the inside of a nuclear fusion reactor. This achievement – the first of its kind – took place over 35 days as part of trials at the UK Atomic Energy Authority’s Joint European Torus facility.

JET was one of the world’s largest and most powerful operational fusion reactors until it was recently shut down. Meanwhile, the robotic star of the show was, of course, the four-legged Spot robot from Boston Dynamics, souped up with “localization and mission autonomy solutions” from the Oxford Robotics Institute (ORI) and “inspection payload” from UKAEA.

Spot roamed JET’s environment twice daily, using sensors to map the facility layout, monitor conditions, steer around obstacles and personnel, and collect vital data. These inspection duties normally require human operators to control the robot remotely.

This robotic knee exoskeleton is made from consumer braces and drone motors

Robotic exoskeletons are an increasingly popular method for assisting human labor in the workplace. Those that specifically support the back, however, can result in bad lifting form by the wearer. To combat this, researchers at the University of Michigan have built a pair of robot knee exoskeletons, using commercially available drone motors and knee braces.

“Rather than directly bracing the back and giving up on proper lifting form,” U-M professor Robert Gregg notes, “we strengthen the legs to maintain it.”

Test subjects were required to move a 30-pound kettlebell up and down a flight of stairs. Researchers note that the tech helped the subjects maintain good lifting form while lifting more quickly.

AI ‘early warning’ system shows promise in preventing hospital deaths, study says

The study, published Monday in the Canadian Medical Association Journal, found a 26 per cent reduction in non-palliative deaths among patients in St. Michael’s Hospital’s general internal medicine unit when the AI tool was used.

“We’ve seen that there is a lot of hype and excitement around artificial intelligence in medicine. We’ve also seen not as much actual deployment of these tools in real clinical environments,” said lead author Dr. Amol Verma, a general internal medicine specialist and scientist at the hospital in Toronto.

Can AI Scaling Continue Through 2030?

Our final estimate of the achievable inter data center bandwidth by 2030 is 4 to 20 Pbps, which would allow for training runs of 3e29 to 2e31 FLOP. In light of this, bandwidth is unlikely to be a major constraint for a distributed training run compared to achieving the necessary power supply in the first place.

Expanding bandwidth capacity for distributed training networks presents a relatively straightforward engineering challenge, achievable through the deployment of additional fiber pairs between data centers. In the context of AI training runs potentially costing hundreds of billions of dollars, the financial investment required for such bandwidth expansion appears comparatively modest.
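As a rough illustration of why this is tractable, the link budget can be sketched in a few lines. The per-fiber-pair capacity below is an assumption (on the order of what modern DWDM systems deliver), not a figure from the text:

```python
import math

# Rough sketch: fiber pairs implied by a 4-20 Pbps inter-datacenter link.
# Assumption: ~20 Tbps per fiber pair (roughly a modern DWDM system,
# e.g. ~100 wavelengths at ~200 Gbps each). Not a figure from the article.
TBPS_PER_FIBER_PAIR = 20

def fiber_pairs_needed(target_pbps: float,
                       tbps_per_pair: float = TBPS_PER_FIBER_PAIR) -> int:
    """Fiber pairs required to reach a target aggregate bandwidth in Pbps."""
    return math.ceil(target_pbps * 1_000 / tbps_per_pair)

print(fiber_pairs_needed(4))   # low end of the 4-20 Pbps estimate
print(fiber_pairs_needed(20))  # high end
```

Even the high end works out to around a thousand fiber pairs per link under these assumptions, which is modest next to the cost of the training run itself.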

We conclude that training runs in 2030 supported by a local power supply could likely involve 1 to 5 GW and reach 1e28 to 3e29 FLOP. Meanwhile, geographically distributed training runs could amass a supply of 2 to 45 GW and achieve 4 to 20 Pbps connections between data center pairs, allowing for training runs of 2e28 to 2e30 FLOP. All in all, it seems likely that training runs between 2e28 and 2e30 FLOP will be possible by 2030. The assumptions behind these estimates can be found in Figure 3 below.
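The way a power budget translates into training compute can be written as a back-of-envelope product. The efficiency and utilization numbers below are illustrative assumptions chosen by me, not values taken from the text:

```python
def training_flop(power_gw: float, days: float,
                  flop_per_joule: float, utilization: float) -> float:
    """Total FLOP = power (W) * duration (s) * hardware efficiency * utilization."""
    watts = power_gw * 1e9
    seconds = days * 86_400
    return watts * seconds * flop_per_joule * utilization

# Illustrative assumptions: a 100-day run, ~1e13 FLOP per joule for
# 2030-era accelerators (including overhead), and 40% utilization.
local = training_flop(power_gw=5, days=100, flop_per_joule=1e13, utilization=0.4)
distributed = training_flop(power_gw=45, days=100, flop_per_joule=1e13, utilization=0.4)

print(f"{local:.1e}")        # falls inside the local 1e28-3e29 FLOP range
print(f"{distributed:.1e}")  # falls inside the distributed 2e28-2e30 FLOP range
```

Under these assumptions the 5 GW local case lands near the top of the quoted 1e28 to 3e29 FLOP range, and the 45 GW distributed case lands near the top of the 2e28 to 2e30 FLOP range, so the estimates are mutually consistent.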
