Neurotechnology will improve our lives in many ways. However, to sustain a world where our neurobiological data (in some cases perhaps including our innermost thoughts and feelings) remains properly secure, we must invest in both policy and technology that prevent bad actors from stealing private information or even directly manipulating people’s brains. We don’t want ‘telepathy’ and ‘mind control’, which are very real possibilities, to harm people and society. So let’s start laying the groundwork now to ensure the best possible neurotech future! #neurotech #future #policy #neuroscience


We provide a Perspective highlighting the significant ethical implications of the use of fast-developing neurotechnologies in humans, as well as the regulatory frameworks and guidelines needed to protect neurodata and mental privacy.

Two types of technologies could change the privacy afforded by encrypted messaging, and changes to this space could impact all of us.

On October 9, I moderated a panel on encryption, privacy policy, and human rights at the United Nations’ annual Internet Governance Forum. I shared the stage with some fabulous panelists, including Roger Dingledine, the director of the Tor Project; Sharon Polsky, the president of the Privacy and Access Council of Canada; and Rand Hammoud, a campaigner at Access Now, a human rights advocacy organization. All strongly believe in and champion the protection of encryption.

I want to tell you about one thing that came up in our conversation: efforts to, in some way, monitor encrypted messages.

Policy proposals have been popping up around the world (like in Australia, India, and, most recently, the UK) that call for tech companies to build in ways to gain information about encrypted messages, including through back-door access. There have also been efforts to increase moderation and safety on encrypted messaging apps, like Signal and Telegram, to try to prevent the spread of abusive content, like child sexual abuse material, criminal networking, and drug trafficking.

Not surprisingly, advocates for encryption are generally opposed to these sorts of proposals as they weaken the level of user privacy that’s currently guaranteed by end-to-end encryption.

In my prep work before the panel, and then in our conversation, I learned about some new cryptographic technologies that might allow for some content moderation, as well as increased enforcement of platform policies and laws, all *without* breaking encryption. These are sort-of fringe technologies right now, mainly still in the research phase. Though they are being developed in several different flavors, most of these technologies ostensibly enable algorithms to evaluate messages or patterns in their metadata to flag problematic material without having to break encryption or reveal the content of the messages.
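
To make one flavor concrete, here is a minimal sketch of client-side scanning, assuming a design in which the sender’s device checks outgoing media against a platform-distributed blocklist before end-to-end encryption, so plaintext never leaves the device. Everything below is illustrative rather than drawn from any real messaging app: deployed proposals use perceptual hashes (which match visually similar images) rather than exact SHA-256 digests, and names like KNOWN_BAD_HASHES and flag_before_encrypt are invented for this sketch.

```python
import hashlib

# Illustrative blocklist of digests of known-abusive files, of the kind a
# platform might distribute to clients. Real proposals use perceptual
# hashes (robust to resizing and re-encoding); SHA-256 stands in here to
# keep the sketch self-contained.
KNOWN_BAD_HASHES = {
    # SHA-256 of the empty byte string, used as a stand-in entry
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}


def flag_before_encrypt(attachment: bytes) -> bool:
    """Return True if the attachment matches the blocklist.

    The check runs on the sender's device *before* end-to-end encryption,
    so the service never sees the plaintext; at most, a match/no-match
    signal would leave the device.
    """
    digest = hashlib.sha256(attachment).hexdigest()
    return digest in KNOWN_BAD_HASHES


if __name__ == "__main__":
    media = b""  # the empty file matches the stand-in digest above
    if flag_before_encrypt(media):
        print("flagged: matches a known-bad digest; handle per platform policy")
    else:
        print("clean: proceed to encrypt and send")
```

The contested design question is exactly where that check runs and what leaves the device: even a bare match signal is new information about message content, which is why encryption advocates argue such schemes weaken the guarantees of end-to-end encryption in practice.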

In the six months since FLI published its open letter calling for a pause on giant AI experiments, we have seen overwhelming expert and public concern about the out-of-control AI arms race — but no slowdown. In this video, we call for U.S. lawmakers to step in, and explore the policy solutions necessary to steer this powerful technology to benefit humanity.

Artificial intelligence (AI) and other emerging technologies bring unprecedented opportunities alongside serious risks. Addressing these multifaceted challenges requires collaboration across sectors, policy reform, and global cooperation.

The rapid advancement of technologies, particularly artificial intelligence, has introduced transformative possibilities alongside a range of concerns. While AI holds the potential to revolutionize industries and enhance our daily lives, it also raises pressing issues related to data privacy, misinformation, and cybersecurity.

To address these challenges, experts have proposed adopting the “information environment” framework, which comprises three essential components.

Tina Woods, Healthy Longevity Champion for the National Innovation Centre for Ageing, sets out her vision of a blueprint for healthy longevity for all. Her emphasis is on reaping the “longevity dividend”: adding five years of healthy life expectancy while reducing health and wellbeing inequality. Woods elaborates on the role of emerging technologies such as AI, machine learning, and advanced data analysis in understanding and influencing the biological systems of aging, and she underscores the crucial role of lifestyle changes and socio-economic factors in extending healthy lifespan.

The talk also explores the burgeoning field of emotion AI and its use in designing environments for better health outcomes, including “Longevity Cities,” beginning with a trial in Newcastle. In closing, Woods describes a framework for incentivizing businesses by measuring their contribution to health in three areas: workforce health, consumer health through products and services, and community health. She envisions a future in which businesses that harm health are disincentivized, and she ends with the hope that the UK’s healthy longevity innovation mission can harness longevity science and data innovation to improve life expectancy.

00:00:00 — Introduction, National Innovation Centre for Ageing.
00:00:56 — Discussion on stagnating life expectancy and UK’s life sciences vision.
00:03:50 — Technological breakthroughs (including AI) in analyzing biological systems.
00:06:22 — Understanding what maintains health & wellbeing.
00:08:30 — Hype, hope, and the importance of purpose.
00:10:00 — Psychological aging and “brain capital.”
00:13:15 — Ageism — a barrier to progress in the field of aging.
00:15:46 — Health data, AI and wearables.
00:18:44 — Prevention is key; health is an asset to invest in.
00:19:13 — Longevity Cities.
00:21:19 — Business for Health and industry incentives.
00:23:13 — Closing.

About the Speaker:
Tina Woods is a social entrepreneur and system architect with a focus on health innovation at the intersection of science, technology, policy, and investment. She is the Founder and CEO of Collider Health and Business for Health, driving systemic change for better health through these platforms. She contributes to key UK health strategies and initiatives, like UKRI’s Healthy Ageing Industrial Strategy, and served as the Healthy Longevity Champion for the National Innovation Centre for Ageing. Woods has made significant contributions to AI in health and care, co-leading the Quantum Healthy Longevity Innovation Mission and authoring the book, “Live Longer with AI.” Previously, she served as the director of the All Party Parliamentary Group for Longevity secretariat. Woods is also the CEO & Founder of Collider Science, a social enterprise that encourages young people’s interest in science and technology. She holds a degree in genetics from Cornell University and an MBA from Bayes Business School in London.

Last week, Unity rolled out a new-look version of its controversial Runtime Fee in the wake of a seismic backlash from developers who felt the original policy represented an egregious act of betrayal, for myriad reasons.

While plenty of fury was aimed at how the fee might impact developers’ finances, some of that anger stemmed from Unity’s inability to effectively communicate its new policy and provide clear answers to pertinent questions.

The dust has now supposedly settled, but one question remains: why doesn’t Unity’s explanation for its shifting answers about how the Runtime Fee applies to subscription services hold up to scrutiny?

MENLO PARK, California, Sept 28 (Reuters) — Meta Platforms (META.O) used public Facebook and Instagram posts to train parts of its new Meta AI virtual assistant, but excluded private posts shared only with family and friends in an effort to respect consumers’ privacy, the company’s top policy executive told Reuters in an interview.

Meta also did not use private chats on its messaging services as training data for the model and took steps to filter private details from public datasets used for training, said Meta President of Global Affairs Nick Clegg, speaking on the sidelines of the company’s annual Connect conference this week.

“We’ve tried to exclude datasets that have a heavy preponderance of personal information,” Clegg said, adding that the “vast majority” of the data used by Meta for training was publicly available.
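
Clegg doesn’t say how that filtering worked, and Reuters doesn’t elaborate. Purely as an illustration of what “filtering private details from public datasets” can mean in practice, here is a toy Python scrubber; the regex patterns, the max_hits threshold, and the names scrub and filter_corpus are all hypothetical, and a production pipeline would use far stronger detectors (for example, trained named-entity recognizers).

```python
import re

# Toy detectors for two common kinds of personal detail. The shape of the
# step is what matters: detect, then redact or drop, before any text
# enters the training set.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE_RE = re.compile(r"\+?\d[\d\s().-]{7,}\d")


def scrub(text: str) -> str:
    """Replace detected personal details with placeholder tokens."""
    text = EMAIL_RE.sub("[EMAIL]", text)
    return PHONE_RE.sub("[PHONE]", text)


def filter_corpus(posts: list[str], max_hits: int = 3) -> list[str]:
    """Scrub each post; drop posts dense with personal details entirely."""
    kept = []
    for post in posts:
        hits = len(EMAIL_RE.findall(post)) + len(PHONE_RE.findall(post))
        if hits > max_hits:
            continue  # heavy preponderance of personal info: exclude
        kept.append(scrub(post))
    return kept


if __name__ == "__main__":
    sample = ["Great recipe! Email me at cook@example.com for details."]
    print(filter_corpus(sample))
    # ['Great recipe! Email me at [EMAIL] for details.']
```

The drop-versus-redact split mirrors Clegg’s distinction: individual details can be filtered out of otherwise public text, while datasets with a “heavy preponderance of personal information” are excluded wholesale.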