
Access to freshwater is changing rapidly, with water stress affecting billions of people and countless businesses each year. Droughts and floods are becoming more frequent and severe, water pollution continues to rise and, without urgent action, we will soon reach a tipping point. This report outlines key pathways to strengthen water resilience through private sector and multi-stakeholder action, and to secure the future of water for society and the global economy.

Every industry depends on water. This makes water resilience not just an environmental concern, but a cornerstone of economic stability, business continuity and prosperity. Rising demand, driven by population growth, shifting consumption and the energy transition, is further straining resources. With an economic value estimated at $58 trillion, water’s critical importance and the scale of the challenge cannot be overstated.

No company or government can build water resilience alone. The World Economic Forum’s Water Futures Community brings together public and private sector leaders to accelerate investment and action. In collaboration with McKinsey & Company, this report offers a systems approach for our community of partners to strengthen water resilience and highlights opportunities for collective action to accelerate solutions at scale.

To understand exactly what’s going on, we need to back up a bit. Roughly put, building a machine-learning model involves training it on a large number of examples and then testing it on a bunch of similar examples that it has not yet seen. When the model passes the test, you’re done.
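As a rough illustration of that workflow, the sketch below trains a single model on one slice of the data and scores it on a held-out slice. The scikit-learn breast-cancer dataset and logistic-regression pipeline are stand-ins chosen for brevity, not anything the researchers actually studied.

```python
# Minimal sketch of the standard train/test workflow described above.
# The dataset and model are illustrative stand-ins.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

X, y = load_breast_cancer(return_X_y=True)

# Hold out examples the model never sees during training.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = make_pipeline(StandardScaler(), LogisticRegression()).fit(X_train, y_train)

# If held-out accuracy clears the bar, the model is typically declared done.
print(f"held-out test accuracy: {model.score(X_test, y_test):.3f}")
```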

What the Google researchers point out is that this bar is too low. The training process can produce many different models that all pass the test but—and this is the crucial part—these models will differ in small, arbitrary ways, depending on things like the random values given to the nodes in a neural network before training starts, the way training data is selected or represented, the number of training runs, and so on. These small, often random, differences are typically overlooked if they don’t affect how a model does on the test. But it turns out they can lead to huge variation in performance in the real world.
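A minimal way to see this effect, assuming a toy setup: train several copies of the same architecture on the same data, varying only the random seed. Their held-out scores come out nearly identical, while their accuracy on a crudely shifted copy of the test set (noise added to the features here, as a stand-in for real-world drift) can spread much more widely. The dataset, model and noise-based shift below are assumptions made purely for illustration.

```python
# Sketch of "many equally good models": identical data and architecture,
# only the random seed differs. The noise-based "shift" is a crude stand-in
# for the kind of real-world drift the researchers describe.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Simulate deployment data that no longer looks exactly like the test set.
rng = np.random.default_rng(0)
X_shifted = X_test + rng.normal(scale=X_test.std(axis=0), size=X_test.shape)

for seed in range(5):
    # Only the random initialisation (and shuffling) changes between runs.
    model = make_pipeline(
        StandardScaler(),
        MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=seed),
    ).fit(X_train, y_train)
    print(
        f"seed {seed}: test accuracy {model.score(X_test, y_test):.3f}, "
        f"shifted accuracy {model.score(X_shifted, y_test):.3f}"
    )
```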

In other words, the process used to build most machine-learning models today cannot tell which models will work in the real world and which ones won’t.

Mark Rober’s Tesla crash story and video on self-driving cars are facing significant scrutiny over their authenticity, bias and misleading claims, raising doubts about his testing methods and the reliability of the technology he promotes.

Questions to inspire discussion.

Tesla Autopilot and Testing

🚗 Q: What was the main criticism of Mark Rober’s Tesla crash video?
A: The video was criticized for not using Full Self-Driving mode, even though it was shown in the thumbnail and could have been activated in the same way as Autopilot.

🔍 Q: How did Mark Rober respond to the criticism about not using Full Self-Driving mode?
A: Mark claimed it was a distinction without a difference and was confident the results would be the same if he reran the experiment in Full Self-Driving mode.

🛑 Q: What might have caused the Autopilot to disengage during the test?