
Grey Swans on the Horizon: AI, Cyber, Pandemics, and ET Scenarios

Back in 2007, statistician Nassim Nicholas Taleb described a “Black Swan” as an occurrence that “is an outlier,” meaning it deviates from accepted wisdom. Black Swans are therefore rare and unanticipated, and can arise from geopolitical, economic, or other unforeseen developments.

Because of major advances in computing, we can now anticipate, and with applied risk management help contain, many events once described as Black Swans. In effect, predictive analytical capabilities enabled by artificial intelligence have turned most Black Swans into what are now termed Grey Swan events.
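As a hedged illustration of what such predictive analytics can look like (not part of the original article), the short Python sketch below estimates the empirical probability of extreme losses from a hypothetical, heavy-tailed loss history. Events that carry a small but measurable probability are the Grey Swans that can be priced and planned for; events the model assigns essentially zero probability remain Black Swan territory. Every number and threshold is invented for the example.

# Minimal sketch: flagging rare-but-foreseeable ("Grey Swan") losses from a
# hypothetical loss history. All figures below are invented for illustration;
# nothing here comes from the article above.
import numpy as np

rng = np.random.default_rng(seed=0)

# Hypothetical heavy-tailed history of annual losses, in millions of dollars.
historical_losses = rng.pareto(a=3.0, size=10_000) * 10.0

def exceedance_probability(history: np.ndarray, level: float) -> float:
    """Empirical probability that a loss in `history` exceeds `level`."""
    return float(np.mean(history > level))

# A Black Swan is, by definition, outside the model; a Grey Swan is rare but
# carries a probability we can estimate, monitor, and plan against.
for level in (50.0, 100.0, 200.0):
    p = exceedance_probability(historical_losses, level)
    verdict = "quantifiable tail risk (Grey Swan)" if p > 0 else "outside the observed history"
    print(f"P(loss > ${level:.0f}M) ~= {p:.4f} -> {verdict}")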

Aon, an industry leader in the insurance sector, defines Black Swan events as unexpected, unanticipated shocks. Grey Swan events, by contrast, are surprises that can be foreseen but are still considered unlikely. Like Black Swans, they can have a profound effect.

Can we control superintelligence? With Yoshua Bengio #artificialintelligence

Azeem speaks with Professor Yoshua Bengio. Yoshua, Geoff Hinton, and Yann LeCun were awarded the 2018 Turing Award for advancing the field of AI, in particular for their groundbreaking conceptual and engineering research in deep learning. This earned them the moniker of the Three Musketeers of Deep Learning. I think Bengio might be Aramis: intellectual, somewhat pensive, with aspirations beyond combat, and yet skilled with the blade.

With 750,000 citations to his scientific research, Yoshua has turned to the humanistic dimension of AI, in particular, the questions of safety, democracy, and climate change. Yoshua and I sit on the OECD’s Expert Group on AI Futures.

NASA’s new balloon-borne telescope was designed with AI

NASA is using an AI-powered technique called “generative design” to dramatically speed up the process of designing hardware for upcoming missions — and one of the first AI-designed missions will use a massive balloon to lift a telescope to the stratosphere.

The need: Weight is one of the most important considerations when NASA engineers are designing parts for new spacecraft — the heavier the final object is, the more fuel will be needed to launch it, and the more expensive the mission will be.

However, engineers can’t sacrifice strength in the name of keeping weight down — if a part breaks once it’s in space, replacing or repairing it typically isn’t an option.
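To make the trade-off concrete, here is a deliberately simplified Python sketch of the kind of optimization a generative-design tool automates: minimizing a part's mass while keeping stress within an allowable limit. The geometry (a single axially loaded strut), the material properties, the load, and the safety factor are hypothetical placeholders, not NASA's actual tools or numbers.

# Toy illustration of the trade-off generative design automates: find the
# lightest part that still meets a strength constraint. Real tools explore
# full 3D geometry; this sketch only varies a strut's cross-sectional area.
import numpy as np

LOAD_N = 5_000.0          # axial load on the strut (hypothetical)
LENGTH_M = 0.5            # strut length
DENSITY = 2_700.0         # aluminum-like density, kg/m^3
YIELD_PA = 270e6          # allowable stress
SAFETY_FACTOR = 1.5       # margin required on top of the computed stress

# Candidate cross-sectional areas: the "design space" being searched.
areas_m2 = np.linspace(1e-6, 1e-4, 2_000)

stress = LOAD_N / areas_m2                     # simple axial-stress model
mass_kg = DENSITY * areas_m2 * LENGTH_M        # mass grows with area
feasible = stress * SAFETY_FACTOR <= YIELD_PA  # strength constraint

best = np.argmin(np.where(feasible, mass_kg, np.inf))
print(f"lightest feasible strut: area = {areas_m2[best]*1e6:.1f} mm^2, "
      f"mass = {mass_kg[best]*1000:.1f} g, stress = {stress[best]/1e6:.1f} MPa")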

Bill Gates isn’t too scared about AI

Like Gates, Leslie doesn’t dismiss doomer scenarios outright. “Bad actors can take advantage of these technologies and cause catastrophic harms,” he says. “You don’t need to buy into superintelligence, apocalyptic robots, or AGI speculation to understand that.”

“But I agree that our immediate concerns should be in addressing the existing risks that derive from the rapid commercialization of generative AI,” says Leslie. “It serves a positive purpose to sort of zoom our lens in and say, ‘Okay, well, what are the immediate concerns?’”

In his post, Gates notes that AI is already a threat in many fundamental areas of society, from elections to education to employment. Of course, such concerns aren’t news. What Gates wants to tell us is that although these threats are serious, we’ve got this: “The best reason to believe that we can manage the risks is that we have done it before.”

Meta plans launch of new AI language model Llama 3 in July, The Information reports

Meta researchers are trying to “loosen up” the model so that it can at least provide context for queries it deems controversial.


Meta Platforms is planning to release the newest version of its artificial-intelligence large language model, Llama 3, in July, which would give better responses to contentious questions posed by users, The Information reported on Wednesday.
