
SRI President Bernard Foing and SRI CEO and Founder A. V. Autino agree on the text of this newsletter, but not on the title(!). We therefore decided to issue it with two titles. The first, by A. V. Autino, establishes an ideological distance from the governance model that brought civilization to its current situation, refusing any direct co-responsibility. The title proposed by B. Foing implies that "we" (the global society) are responsible for the general failure, since we voted for the current leaders. He also suggested that if "we" (space humanists) were governing, he is not sure we would do better than the current leaders for peace and development. "Better than warmongers, for sure!" replied Autino. However, both titles are true and have their reasons. That's why we don't want to choose one…

Two types of technologies could change the privacy afforded by encrypted messages, and changes to this space could affect all of us.

On October 9, I moderated a panel on encryption, privacy policy, and human rights at the United Nations' annual Internet Governance Forum. I shared the stage with some fabulous panelists, including Roger Dingledine, the director of the Tor Project; Sharon Polsky, the president of the Privacy and Access Council of Canada; and Rand Hammoud, a campaigner at Access Now, a human rights advocacy organization. All strongly believe in and champion the protection of encryption.

I want to tell you about one thing that came up in our conversation: efforts to, in some way, monitor encrypted messages.

Policy proposals have been popping up around the world (in Australia, India, and, most recently, the UK) that call for tech companies to build in ways to gain information about encrypted messages, including through back-door access. There have also been efforts to increase moderation and safety on encrypted messaging apps, like Signal and Telegram, to try to prevent the spread of abusive content and illegal activity, like child sexual abuse material, criminal networking, and drug trafficking.

Not surprisingly, advocates for encryption are generally opposed to these sorts of proposals, because they would weaken the user privacy currently guaranteed by end-to-end encryption.

In my prep work before the panel, and then in our conversation, I learned about some new cryptographic technologies that might allow for some content moderation, as well as increased enforcement of platform policies and laws, all *without* breaking encryption. These are sort-of fringe technologies right now, mainly still in the research phase. Though they are being developed in several different flavors, most of these technologies ostensibly enable algorithms to evaluate messages or patterns in their metadata to flag problematic material without having to break encryption or reveal the content of the messages.
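To make that idea concrete, here's a minimal sketch of one flavor, client-side scanning, where matching happens on the sender's device before anything is encrypted. The digest list and function names here are hypothetical, and real proposals describe perceptual-hash databases that tolerate re-encoding, not the exact SHA-256 matching used below:

```python
import hashlib

# Hypothetical digest list distributed to clients by the platform. Real
# proposals describe perceptual-hash databases (robust to resizing and
# re-encoding); exact SHA-256 matching is only a stand-in here.
BLOCKED_DIGESTS = {
    hashlib.sha256(b"known-bad attachment").hexdigest(),
}

def flag_before_encrypting(attachment: bytes) -> bool:
    """Check an attachment on the sender's device, *before* encryption.

    Only a match/no-match signal is produced, so the server never sees
    the plaintext and the end-to-end encryption itself stays intact.
    """
    return hashlib.sha256(attachment).hexdigest() in BLOCKED_DIGESTS

if __name__ == "__main__":
    print(flag_before_encrypting(b"known-bad attachment"))    # True
    print(flag_before_encrypting(b"harmless vacation photo")) # False
```

Even this toy version shows why the debate is heated: whoever controls the digest list controls what gets flagged, and users have no way to inspect what's on it.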

Aiming to be first in the world to have the most advanced forms of artificial intelligence while also maintaining control over more than a billion people, elite Chinese scientists and their government have turned to something new, and very old, for inspiration—the human brain.

Equipped with surveillance and visual processing capabilities modeled on human vision, the new "brain" will be more effective, less energy-hungry, and will "improve governance," its developers say. "We call it bionic retina computing," Gao Wen, a leading artificial intelligence researcher, wrote in the paper "City Brain: Challenges and Solution."

The API-AI nexus isn’t just for tech enthusiasts; its influence has widespread real-world implications. Consider the healthcare sector, where APIs can allow diagnostic AI algorithms to access patient medical records while adhering to privacy regulations. In the financial sector, advanced APIs can connect risk-assessment AIs to real-time market data. In education, APIs can provide the data backbone for AI algorithms designed to create personalized, adaptive learning paths.
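To picture what that looks like in practice, here's a minimal sketch of the healthcare example: an API layer that strips direct identifiers from a patient record before handing it to a diagnostic model. The field names and the `diagnose` callable are illustrative assumptions, not any real system's API:

```python
# Fields treated as direct identifiers under the (assumed) privacy rule.
PROTECTED_FIELDS = {"name", "ssn", "address", "phone"}

def redact(record: dict) -> dict:
    """Return a copy of the record with directly identifying fields removed."""
    return {k: v for k, v in record.items() if k not in PROTECTED_FIELDS}

def diagnostic_endpoint(record: dict, diagnose) -> dict:
    """API handler: enforce the privacy rule, then call the AI model."""
    safe_record = redact(record)
    return {"patient_id": record.get("id"), "assessment": diagnose(safe_record)}

if __name__ == "__main__":
    dummy_model = lambda r: f"reviewed {len(r)} fields"  # stand-in for a real model
    print(diagnostic_endpoint(
        {"id": 17, "name": "Jane Doe", "ssn": "000-00-0000", "labs": [5.1, 7.2]},
        dummy_model,
    ))
```

The design point is that the API sits between the data and the model, so the privacy rule is enforced in one place no matter which AI consumes the records.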

However, this fusion of AI and APIs also raises critical questions about data privacy, ethical use and governance. As we continue to knit together more aspects of our digital world, these concerns will need to be addressed to foster a harmonious and responsible AI-API ecosystem.

We stand at the crossroads of a monumental technological paradigm shift. As AI continues to advance, APIs are evolving in parallel to unlock and amplify this potential. If you're in the realm of digital products, the message is clear: The future is not just automated; it's API-fied. Whether you're a developer, a business leader or an end user, this new age promises unprecedented levels of interaction, personalization and efficiency, but it's up to us to navigate it responsibly.

In a groundbreaking development, Google’s forthcoming generative AI model, Gemini, has been reported to outshine even the most advanced GPT-4 models on the market. The revelation comes courtesy of SemiAnalysis, a semiconductor research company, which anticipates that by the close of 2024, Gemini could exhibit a staggering 20-fold increase in potency compared to ChatGPT. Gemini…


Enterprises have quickly recognized the power of generative AI to uncover new ideas and increase both developer and non-developer productivity. But pushing sensitive and proprietary data into publicly hosted large language models (LLMs) creates significant risks in security, privacy and governance. Businesses need to address these risks before they can start to see any benefit from these powerful new technologies.

As IDC notes, enterprises have legitimate concerns that LLMs may “learn” from their prompts and disclose proprietary information to other businesses that enter similar prompts. Businesses also worry that any sensitive data they share could be stored online and exposed to hackers or accidentally made public.

Don’t put anything into an AI tool you wouldn’t want to show up in someone else’s query or give hackers access to. While it’s tempting to input every bit of information you can think of during an innovation project, you have to be careful. Oversharing proprietary information with a generative AI tool is a growing concern for companies. You can fall victim to inconsistent messaging and branding and potentially share information that shouldn’t be available to the public. We’re also seeing more cybercriminals hacking into generative AI platforms.
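One practical guardrail, as a minimal sketch: scrub obviously sensitive patterns from a prompt before it ever leaves your network. The regexes and the codename blocklist below are illustrative assumptions; a real deployment would rely on a proper data-loss-prevention tool, not two regular expressions:

```python
import re

# Illustrative patterns for data that should never reach a hosted LLM.
SECRET_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),  # US SSN-shaped numbers
    re.compile(r"\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}\b"),  # emails
]
CODENAMES = {"Project Falcon"}  # hypothetical internal codenames

def scrub(prompt: str) -> str:
    """Replace sensitive spans with a placeholder before sending the prompt."""
    for pattern in SECRET_PATTERNS:
        prompt = pattern.sub("[REDACTED]", prompt)
    for name in CODENAMES:
        prompt = prompt.replace(name, "[REDACTED]")
    return prompt

if __name__ == "__main__":
    print(scrub("Summarize Project Falcon specs for jane@corp.com, SSN 123-45-6789."))
    # -> "Summarize [REDACTED] specs for [REDACTED], SSN [REDACTED]."
```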

Generative AI’s knowledge isn’t up to date, so your query results shouldn’t necessarily be taken at face value. It probably won’t know about recent competitive pivots, legislation or compliance updates. Use your expertise to vet AI insights and make sure what you’re getting is accurate. And remember, AI bias is prevalent, so it’s just as essential to cross-check research for that, too. Again, this is where having smart, meticulous people on board helps refine AI insights. They know your industry and organization better than AI does and can use queries as a helpful starting point for something bigger.

The promise of AI in innovation is huge, as it unlocks unprecedented efficiency and head-turning output. We’re only seeing the tip of the iceberg of what the technology holds, so lean into it. But do so with governance: no one wants snake tail for dinner.