The convergence of Biotechnology, Neurotechnology, and Artificial Intelligence has major implications for the future of humanity. This talk explores the long-term opportunities inherent to these fields by surveying emerging breakthroughs and their potential applications. Whether we can enjoy the benefits of these technologies depends on us: Can we overcome the institutional challenges that are slowing down progress without exacerbating civilizational risks that come along with powerful technological progress?

About the speaker: Allison Duettmann is the president and CEO of Foresight Institute. She directs the Intelligent Cooperation, Molecular Machines, Biotech & Health Extension, Neurotech, and Space Programs, as well as Fellowships, Prizes, and Tech Trees, and shares this work with the public. She founded Existentialhope.com, co-edited Superintelligence: Coordination & Strategy, co-authored Gaming the Future, and co-initiated The Longevity Prize. She advises companies and projects such as Cosmica and The Roots of Progress Fellowship, and serves on the Executive Committee of the Biomarker Consortium. She holds an MS in Philosophy & Public Policy from the London School of Economics, with a focus on AI safety.

Colorectal cancer screening is widely recommended for adults ages 45 to 75 with an average risk of developing the disease. However, many people don’t realize that the benefits of screening for this type of cancer aren’t always the same for older adults.

“While many clinicians simply follow guideline recommendations for colon screening in adults within this age range, this isn’t always the best approach,” said Sameer Saini, M.D., M.S., a gastroenterologist at both Michigan Medicine and the Lieutenant Colonel Charles S. Kettles VA Medical Center and a health services researcher at the University of Michigan Institute for Healthcare Policy and Innovation and the Ann Arbor VA Center for Clinical Management Research, or CCMR.

“As individuals get older, they often acquire health problems that can lead to potential harm when coupled with endoscopy. While guidelines recommend a personalized approach to screening in average risk individuals between ages 76 and 85, there are no such recommendations for older adults who are younger than age 76—individuals who we commonly see in our clinics.”

The goal of the order, according to the White House, is to improve “AI safety and security.” It also includes a requirement that developers share safety test results for new AI models with the US government if the tests show that the technology could pose a risk to national security. This is a surprising move that invokes the Defense Production Act, typically used during times of national emergency.

The executive order advances the voluntary requirements for AI policy that the White House set back in August, though it lacks specifics on how the rules will be enforced. Executive orders are also vulnerable to being overturned at any time by a future president, and they lack the legitimacy of congressional legislation on AI, which looks unlikely in the short term.

“The Congress is deeply polarized and even dysfunctional to the extent that it is very unlikely to produce any meaningful AI legislation in the near future,” says Anu Bradford, a law professor at Columbia University who specializes in digital regulation.


Right now, billions of neurons in your brain are working together to generate a conscious experience — and not just any conscious experience, your experience of the world around you and of yourself within it. How does this happen? According to neuroscientist Anil Seth, we’re all hallucinating all the time; when we agree about our hallucinations, we call it “reality.” Join Seth for a delightfully disorienting talk that may leave you questioning the very nature of your existence.



Neurotechnology will improve our lives in many ways. However, to sustain a world where our neurobiological data (in some cases perhaps including our innermost thoughts and feelings) remains properly secure, we must invest in both policy and technology that prevents bad actors from stealing private information or even directly manipulating people’s brains. We don’t want the very real possibility of ‘telepathy’ and ‘mind control’ to harm people and society. So, let’s start laying the groundwork now to ensure the best possible neurotech future! #neurotech #future #policy #neuroscience


We provide a Perspective highlighting the significant ethical implications of the use of fast-developing neurotechnologies in humans, as well as the regulatory frameworks and guidelines needed to protect neurodata and mental privacy.

Two types of technologies could change the privacy afforded in encrypted messages, and changes to this space could impact all of us.

On October 9, I moderated a panel on encryption, privacy policy, and human rights at the United Nations’ annual Internet Governance Forum. I shared the stage with some fabulous panelists, including Roger Dingledine, the director of the Tor Project; Sharon Polsky, the president of the Privacy and Access Council of Canada; and Rand Hammoud, a campaigner at Access Now, a human rights advocacy organization. All strongly believe in and champion the protection of encryption.

I want to tell you about one thing that came up in our conversation: efforts to, in some way, monitor encrypted messages.

Policy proposals have been popping up around the world (like in Australia, India, and, most recently, the UK) that call for tech companies to build in ways to gain information about encrypted messages, including through back-door access. There have also been efforts to increase moderation and safety on encrypted messaging apps, like Signal and Telegram, to try to prevent the spread of abusive content, like child sexual abuse material, criminal networking, and drug trafficking.

Not surprisingly, advocates for encryption are generally opposed to these sorts of proposals as they weaken the level of user privacy that’s currently guaranteed by end-to-end encryption.

In my prep work before the panel, and then in our conversation, I learned about some new cryptographic technologies that might allow for some content moderation, as well as increased enforcement of platform policies and laws, all *without* breaking encryption. These are still fringe technologies, mostly in the research phase. Though they come in several different flavors, most of them ostensibly enable algorithms to evaluate messages, or patterns in their metadata, to flag problematic material without breaking encryption or revealing the content of the messages.
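To make the idea concrete, here is a minimal, heavily simplified sketch of one such flavor: client-side matching against a blocklist of known-bad content, where flagging happens on the device before encryption and the server never sees plaintext. All names here are hypothetical, the "cipher" is a toy stand-in, and real proposals use perceptual hashes and cryptographic protocols such as private set intersection rather than plain SHA-256 — this is purely an illustration of where the check sits in the pipeline, not any deployed system.

```python
import hashlib
from typing import Optional

# Hypothetical blocklist of digests of known abusive content.
# Real systems would use perceptual hashes distributed via a vetted database.
BLOCKLIST = {
    hashlib.sha256(b"known-bad-payload").hexdigest(),
}

def client_side_check(attachment: bytes) -> bool:
    """Return True if the attachment's digest appears on the blocklist.

    This runs on the sender's device, before any encryption, so the
    content itself never has to be exposed to the server.
    """
    return hashlib.sha256(attachment).hexdigest() in BLOCKLIST

def send_message(plaintext: bytes, attachment: Optional[bytes] = None) -> dict:
    """Toy encrypt-and-send: the server receives only ciphertext and a flag."""
    flagged = attachment is not None and client_side_check(attachment)
    # Toy XOR "cipher" as a stand-in for real end-to-end encryption.
    ciphertext = bytes(b ^ 0x5A for b in plaintext)
    return {"ciphertext": ciphertext, "flagged": flagged}

normal = send_message(b"hello", attachment=b"vacation photo")
bad = send_message(b"hello", attachment=b"known-bad-payload")
print(normal["flagged"], bad["flagged"])  # False True
```

The contested design question is exactly the one the panel debated: even though the message body stays encrypted, the match verdict itself leaks information, and whoever controls the blocklist controls what gets flagged.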


In the six months since FLI published its open letter calling for a pause on giant AI experiments, we have seen overwhelming expert and public concern about the out-of-control AI arms race — but no slowdown. In this video, we call for U.S. lawmakers to step in, and explore the policy solutions necessary to steer this powerful technology to benefit humanity.