
By Ariana Mendible

For the past several years, I have been closely involved with the Institute for the Quantitative Study of Inclusion, Diversity and Equity (QSIDE). This nonprofit organizes events and facilitates research in quantitative justice, the application of data and mathematical sciences to quantify, analyze and address social injustice. It uses the community-based participatory action research model to connect like-minded scholars, community partners, and activists. Recently, QSIDE researchers met virtually in a Research Roundup to share our progress. Hearing about all the incredible work that QSIDE has spawned and supported prompted me to reflect on the role that the group has played in my budding career and the ways in which the institute itself has grown since its founding in 2019.

Like many PhD candidates, I spent my final year of graduate school mired in burnout and uncertainty about post-graduation plans. Add to this mix a global pandemic, social isolation, and confinement to the same one-bedroom dwelling for over a year, and you get a stew of anxiety. I was approaching my mental limit on the research I had been conducting, somewhere at the intersection of data science and fluid dynamics. While the problem I had been working on for my thesis was interesting, I was ready for a major change. I couldn’t picture myself in the usual post-graduate tracks: a post-doc at an R1 institution or working for a Big Tech company. These careers felt hyper-competitive, a turn-off during a period of significant burnout. I also couldn’t see their direct positive impact, which felt acutely important in this time of global social disarray.

Samuele Ferracin1,2, Akel Hashim3,4, Jean-Loup Ville3, Ravi Naik3,4, Arnaud Carignan-Dugas1, Hammam Qassim1, Alexis Morvan3,4, David I. Santiago3,4, Irfan Siddiqi3,4,5, and Joel J. Wallman1,2

1 Keysight Technologies Canada, Kanata, ON K2K 2W5, Canada
2 Department of Applied Mathematics, University of Waterloo, Waterloo, Ontario N2L 3G1, Canada
3 Quantum Nanoelectronics Laboratory, Department of Physics, University of California at Berkeley, Berkeley, CA 94720, USA
4 Applied Math and Computational Research Division, Lawrence Berkeley National Lab, Berkeley, CA 94720, USA
5 Materials Sciences Division, Lawrence Berkeley National Lab, Berkeley, CA 94720, USA

Get full text PDF. Read on arXiv Vanity.

Read & tell me what you think 🙂


There is a rift between near- and long-term perspectives on AI safety – one that has stirred controversy. Longtermists argue that we need to prioritise the well-being of people far into the future, perhaps at the expense of people alive today. But their critics have accused the Longtermists of obsessing over Terminator-style scenarios in concert with Big Tech to distract regulators from more pressing issues like data privacy. In this essay, Mark Bailey and Susan Schneider argue that we shouldn’t be fighting about the Terminator; we should be focusing on the harm to the mind itself – to our very freedom to think.

There has been a growing debate between near- and long-term perspectives on AI safety – one that has stirred controversy. “Longtermists” have been accused of being co-opted by Big Tech and of fixating on Terminator-style, science-fiction scenarios to distract regulators from the real, more near-term issues, such as algorithmic bias and data privacy.

Longtermism is an ethical theory that requires us to consider the effects of today’s decisions on all of humanity’s potential futures. It can lead to extremes, as it concludes that one should sacrifice the present well-being of humanity for the good of its potential futures. Many Longtermists believe humans will ultimately lose control of AI, as it will become “superintelligent”, outthinking humans in every domain – social acumen, mathematical abilities, strategic thinking, and more.

By Gitta Kutyniok

The recent unprecedented success of foundation models like GPT-4 has heightened the general public’s awareness of artificial intelligence (AI) and inspired vivid discussion about its associated possibilities and threats. In March 2023, a group of technology leaders published an open letter that called for a pause in the development of the most powerful AI systems to allow time for the creation and implementation of shared safety protocols. Policymakers around the world have also responded to rapid advancements in AI technology with various regulatory efforts, including the European Union (EU) AI Act and the Hiroshima AI Process.

One of the current problems—and consequential dangers—of AI technology is its unreliability and consequent lack of trustworthiness. In recent years, AI-based technologies have often encountered severe issues with safety, security, privacy, and responsibility, particularly with respect to fairness and interpretability. Privacy violations, unfair decisions, unexplainable results, and accidents involving self-driving cars are all examples of concerning outcomes.

Conventional encryption methods rely on the difficulty of certain mathematical problems and the limits of current computing power. With the rise of quantum computers, however, these methods are becoming increasingly vulnerable, motivating quantum key distribution (QKD).

QKD is a technology that leverages the unique properties of quantum physics to secure data transmission. This method has been continuously optimized over the years, but establishing large networks has been challenging due to the limitations of existing quantum light sources.
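To make the protocol's logic concrete, here is a minimal, purely classical sketch of the sift-and-check step in BB84, the canonical QKD protocol. The function and parameter names below are illustrative inventions for this example, not code from the experiment described in the article; the sketch simply shows why an eavesdropper who intercepts and re-measures photons in randomly chosen bases leaves a detectable error rate in the sifted key.

import random

def bb84_sift(n_qubits=10_000, eavesdrop=False, seed=0):
    """Toy BB84 run: returns (sifted key length, observed error rate)."""
    rng = random.Random(seed)
    sifted_alice, sifted_bob = [], []
    for _ in range(n_qubits):
        # Alice encodes a random bit in a random basis (0 = rectilinear, 1 = diagonal).
        alice_bit, alice_basis = rng.randint(0, 1), rng.randint(0, 1)
        bit, basis = alice_bit, alice_basis
        if eavesdrop:
            # Intercept-resend attack: Eve measures in a random basis and re-sends.
            eve_basis = rng.randint(0, 1)
            if eve_basis != basis:
                bit = rng.randint(0, 1)  # a wrong-basis measurement randomises the outcome
            basis = eve_basis
        # Bob measures in his own random basis.
        bob_basis = rng.randint(0, 1)
        bob_bit = bit if bob_basis == basis else rng.randint(0, 1)
        # Sifting: keep only positions where Alice's and Bob's bases happened to match.
        if bob_basis == alice_basis:
            sifted_alice.append(alice_bit)
            sifted_bob.append(bob_bit)
    errors = sum(a != b for a, b in zip(sifted_alice, sifted_bob))
    return len(sifted_alice), errors / max(len(sifted_alice), 1)

if __name__ == "__main__":
    print("No eavesdropper:  ", bb84_sift(eavesdrop=False))  # error rate ~ 0
    print("With eavesdropper:", bb84_sift(eavesdrop=True))   # error rate ~ 0.25

In a deployed QKD link, Alice and Bob estimate this error rate by sacrificing a random subset of the sifted key; a rate well above the expected channel noise signals eavesdropping, and the key is discarded rather than used.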

In a new article published in Light: Science & Applications, a team of scientists in Germany has achieved the first intercity QKD experiment with a deterministic single-photon source, a significant step toward better protecting confidential information from cyber threats.

Number theorists have been trying to prove a conjecture about the distribution of prime numbers for more than 160 years.

By Manon Bischoff

The Riemann hypothesis is the most important open question in number theory—if not all of mathematics. It has occupied experts for more than 160 years. And the problem appeared both in mathematician David Hilbert’s groundbreaking speech from 1900 and among the “Millennium Problems” formulated a century later. The person who solves it will win a million-dollar prize.
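For readers who want the precise statement: the conjecture concerns the Riemann zeta function, defined for complex s with real part greater than 1 by

\zeta(s) = \sum_{n=1}^{\infty} \frac{1}{n^{s}},

and extended to the rest of the complex plane by analytic continuation. The Riemann hypothesis asserts that every nontrivial zero of \zeta(s) lies on the critical line \operatorname{Re}(s) = \tfrac{1}{2}. The connection to the primes comes from an equivalent formulation: the hypothesis holds if and only if the prime-counting function satisfies \pi(x) = \operatorname{Li}(x) + O(\sqrt{x}\,\log x), which pins down just how regularly the primes are spread out among the integers.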

This essay addresses Cartesian duality and how its implicit dialectic might be repaired using physics and information theory. Our agenda is to describe a key distinction in the physical sciences that may provide a foundation for the distinction between mind and matter, and between sentient and intentional systems. From this perspective, it becomes tenable to talk about the physics of sentience and ‘forces’ that underwrite our beliefs (in the sense of probability distributions represented by our internal states), which may ground our mental states and consciousness. We will refer to this view as Markovian monism, which entails two claims: (1) fundamentally, there is only one type of thing and only one type of irreducible property (hence monism); and (2) all systems possessing a Markov blanket have properties that are relevant for understanding the mind and consciousness, in that if such systems have mental properties, they have them partly by virtue of possessing a Markov blanket (hence Markovian). Markovian monism rests upon the information geometry of random dynamical systems. In brief, the information geometry induced in any system whose internal states can be distinguished from external states must acquire a dual aspect. This dual aspect concerns the (intrinsic) information geometry of the probabilistic evolution of internal states and a separate (extrinsic) information geometry of probabilistic beliefs about external states that are parameterised by internal states. We call these the intrinsic (i.e., mechanical, or state-based) and extrinsic (i.e., Markovian, or belief-based) information geometries, respectively. Although these mathematical notions may sound complicated, they are fairly straightforward to handle and may offer a means through which to frame the origins of consciousness.

Keywords: consciousness, information geometry, Markovian monism.
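As a rough guide to the formalism behind this abstract (the notation below follows common usage in the Markov-blanket literature rather than the paper's own exposition): the states x of a system are partitioned into external states \eta, sensory states s, active states a and internal states \mu, with the blanket b = (s, a). The defining property is the conditional independence

p(\mu, \eta \mid b) = p(\mu \mid b)\, p(\eta \mid b),

i.e., internal and external states are independent given the blanket. The intrinsic (state-based) information geometry is then the geometry of the probability densities over internal states themselves, while the extrinsic (belief-based) geometry is the geometry of the beliefs q_{\mu}(\eta) about external states that the internal states parameterise; in both cases a Fisher information metric on the relevant family of distributions is what gives "information geometry" its literal meaning.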