2023 Guardian Award

2023 Guardian Award Winners: Teacher and Student Defending Against AI Existential Risks

As we near the Singularity, more and more people are preparing for the coming existential risks, and so, for the third time in our history, we have two joint recipients of the Lifeboat Foundation Guardian Award. This year’s recipients are Geoffrey Hinton and his student Ilya Sutskever, both so worried about AI safety that they have taken personal financial hits to increase humanity’s chances of surviving Artificial General Intelligence (AGI).
 
 
Geoffrey Hinton

Geoffrey Hinton is a British-Canadian computer scientist and cognitive psychologist, most noted for his work on artificial neural networks. From 2013 to 2023, he divided his time working for Google (Google Brain) and the University of Toronto, before publicly announcing his departure from Google in May 2023, citing concerns about the risks of Artificial Intelligence (AI).
 
With David Rumelhart and Ronald J. Williams, Geoff coauthored a highly cited 1986 paper that popularized the backpropagation algorithm for training multi-layer neural networks. AlexNet, designed in collaboration with his students Alex Krizhevsky and Ilya Sutskever, achieved a dramatic image-recognition milestone in the 2012 ImageNet challenge and was a breakthrough in the field of computer vision.
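For readers unfamiliar with the technique mentioned above, here is a minimal illustrative sketch of backpropagation: a tiny two-layer network trained on the XOR problem. Everything in this snippet (network size, learning rate, loss function) is an arbitrary choice for illustration, not anything drawn from the 1986 paper itself.

```python
import numpy as np

# XOR inputs and targets: a classic task a single-layer network cannot solve.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1 = rng.normal(size=(2, 8))   # input -> hidden weights
b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1))   # hidden -> output weights
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(5000):
    # Forward pass: compute activations layer by layer.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: propagate the error gradient from output to input
    # via the chain rule (this is the backpropagation step).
    d_out = (out - y) * out * (1 - out)   # gradient at the output layer
    d_h = (d_out @ W2.T) * h * (1 - h)    # gradient at the hidden layer

    # Gradient-descent weight updates.
    W2 -= lr * (h.T @ d_out)
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h)
    b1 -= lr * d_h.sum(axis=0)

loss = float(((out - y) ** 2).mean())
print("final mean squared error:", loss)
```

The key idea, popularized by the 1986 paper, is that the same chain-rule computation used for `d_out` can be repeated layer by layer, making it practical to train networks with hidden layers.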
 
Geoff received the 2018 Turing Award, often referred to as the “Nobel Prize of Computing”, together with Yoshua Bengio and Yann LeCun, for their work on deep learning. They are often referred to as the “Godfathers of Deep Learning”, and have continued to give public talks together.
 
In May 2023, Geoff announced his resignation from Google to be able to “freely speak out about the risks of AI.” He has voiced concerns about deliberate misuse by malicious actors, technological unemployment, and existential risk from artificial general intelligence. He had previously believed that Artificial General Intelligence (AGI) was “30 to 50 years or even longer away” but now feels it will arrive much sooner.
 
Geoff has expressed concerns about AI takeover, stating that “it’s not inconceivable” that AI could “wipe out humanity”. He notes that AI systems capable of intelligent agency will be useful for military or economic purposes, and worries that generally intelligent AI systems could “create sub-goals” that are unaligned with their programmers’ interests.
 
Geoff states that AI systems may become power-seeking or prevent themselves from being shut off, not because programmers intended them to, but because those sub-goals are useful for achieving later goals. In particular, he says “we have to think hard about how to control” AI systems capable of self-improvement. In 2017, he called for an international ban on lethal autonomous weapons.
 
Geoff is the great-great-grandson of the mathematician and educator Mary Everest Boole and her husband, the logician George Boole, whose work eventually became one of the foundations of modern computer science.
 
 
Ilya Sutskever

Ilya Sutskever is a computer scientist working in machine learning. He is a cofounder and Chief Scientist at OpenAI.
 
Ilya has made several major contributions to the field of deep learning. He is the co-inventor, with Alex Krizhevsky and Geoffrey Hinton, of AlexNet, a convolutional neural network, and one of the many coauthors of the AlphaGo paper. In 2023, Ilya was one of the members of the OpenAI board who fired CEO Sam Altman, with Ilya’s motivation rumored to be his commitment to AI safety. Sam returned a week later due to the power of everyone wanting to make big money from AI, and Ilya stepped down from the board.
 
Ilya earned his Ph.D. in Computer Science from the University of Toronto in 2013. His doctoral supervisor was Geoffrey Hinton.
 
From November to December 2012, Ilya spent about two months as a postdoc with Andrew Ng at Stanford University. He then returned to the University of Toronto and joined Geoffrey Hinton’s new research company DNNResearch, a spinoff of Hinton’s research group. Four months later, in March 2013, Google acquired DNNResearch and hired Ilya as a research scientist at Google Brain.
 
At the end of 2015, he left Google to become cofounder and chief scientist of the newly founded organization OpenAI. He was recruited to OpenAI by Guardian Award winner Elon Musk who has called Ilya “a good human — smart, good heart” and said that Ilya has a “good moral compass”.
 
In 2023, he announced that he would co-lead OpenAI’s new “Superalignment” project, which aims to solve the alignment of superintelligences within four years. He wrote that even if superintelligence seems far off, it could arrive this decade. His Superalignment team believes it has devised a way to guide the behavior of AI models as they grow ever smarter.
 
Ilya has written “Superintelligence will be the most impactful technology humanity has ever invented, and could help us solve many of the world’s most important problems. But the vast power of superintelligence could also be very dangerous, and could lead to the disempowerment of humanity or even human extinction.”
 
Ilya has committed the rest of his life to AI safety, following in the footsteps of his great teacher, Geoffrey Hinton.