
An encryption tool co-created by a University of Cincinnati math professor will soon safeguard the telecommunications, online retail, banking, and other digital systems we use every day.

The National Institute of Standards and Technology chose four new encryption tools designed to thwart the next generation of hackers and thieves. One of them, called CRYSTALS-Kyber, was co-created by UC College of Arts and Sciences math professor Jintai Ding.

“It’s not just for today but for tomorrow,” Ding said. “This is information that you don’t want people to know even 30 or 50 years from now.”


Physicists have many theories for the beginning of our universe: A big bang, a big bounce, a black hole, a network, a collision of membranes, a gas of strings, and the list goes on. What does this mean? It means we don’t know how the universe began. And the reason isn’t just that we’re lacking data, the reason is that science is reaching its limits when we try to understand the initial condition of the entire universe.


The Poplawski paper about how the universe might have been born from a black hole is here: https://link.springer.com/article/10.1007/s10714-021-02790-7

Mathematical models suggest that with just a few more genes, it might be possible to define hundreds of cellular identities, more than enough to populate the tissues of complex organisms. It’s a finding that opens the door to experiments that could bring us closer to understanding how, eons ago, the system that builds us was built.
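A back-of-the-envelope count (an illustration only, not the study's actual model) shows why a handful of genes could go a long way: if each of n factors can independently hold a stable "on" or "off" state, the number of combinatorial identities grows exponentially with n.

```python
# Rough combinatorial counting, assuming each factor independently holds a
# stable "on" or "off" state. Illustrative sketch, not the paper's model.
for n in range(1, 11):
    print(f"{n} factors -> up to {2 ** n} combinatorial identities")

# By this counting, eight to ten factors already allow hundreds of identities.
```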

The Limits of Mutual Repression

By studying natural cells, developmental biologists have illuminated many of the tipping points and chemical signals that prompt cells to follow one developmental pathway or another. But researchers in the field of synthetic biology often take another approach, explained Michael Elowitz, a professor of biology and bioengineering at Caltech and an author of the new paper: they build a system of cell-fate control from scratch to see what it can tell us about what such systems require.

A new molecule created by a researcher at the University of Texas at Dallas kills a variety of difficult-to-treat cancers, including triple-negative breast cancer, by taking advantage of a weakness in cells that was not previously targeted by existing drugs.

The research, which was conducted using isolated cells, human cancer tissue, and mouse-grown human cancers, was recently published in Nature Cancer.

Dr. Jung-Mo Ahn, a co-corresponding author of the study and an associate professor of chemistry and biochemistry in the School of Natural Sciences and Mathematics at the University of Texas at Dallas, has dedicated more than ten years of his career to developing small molecules that target protein-protein interactions in cells. He previously created potential therapeutic candidate compounds for treatment-resistant prostate cancer and breast cancer using a method called structure-based rational drug design.

Meta is developing a machine learning model that scans these citations and cross-references their content to Wikipedia articles to verify that not only the topics line up, but specific figures cited are accurate.

This isn’t just a matter of picking out numbers and making sure they match; Meta’s AI will need to “understand” the content of cited sources. (“Understand” is a misnomer, as complexity theory researcher Melanie Mitchell would tell you: AI is still in its “narrow” phase, meaning it is a tool for highly sophisticated pattern recognition, while “understanding” describes human cognition, which remains a very different thing.)

Meta’s model will “understand” content not by comparing text strings and making sure they contain the same words, but by comparing mathematical representations of blocks of text, which it arrives at using natural language understanding (NLU) techniques.
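A minimal sketch of the underlying idea, comparing vector representations by cosine similarity rather than by exact word overlap, might look like the following. The `embed` function here is a toy stand-in for a trained NLU encoder, and the example sentences are invented; Meta's actual model is not reproduced here.

```python
import numpy as np

def embed(text: str, dim: int = 64) -> np.ndarray:
    """Toy stand-in for an NLU encoder: hash each token into a fixed-length
    vector. A real verifier would use a trained transformer encoder."""
    vec = np.zeros(dim)
    for token in text.lower().split():
        vec[hash(token) % dim] += 1.0
    return vec

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity of direction between two vectors, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

claim = "The bridge opened to traffic in 1932."
source = "Traffic first crossed the bridge in 1932, the year it opened."
unrelated = "Photosynthesis converts sunlight into chemical energy."

# Related content yields vectors pointing in similar directions;
# unrelated text scores noticeably lower.
print(f"supporting source: {cosine_similarity(embed(claim), embed(source)):.2f}")
print(f"unrelated source:  {cosine_similarity(embed(claim), embed(unrelated)):.2f}")
```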

Today, Oak Ridge National Laboratory’s Frontier supercomputer was crowned fastest on the planet in the semiannual Top500 list. Frontier more than doubled the speed of the last titleholder, Japan’s Fugaku supercomputer, and is the first to officially clock speeds over a quintillion calculations a second—a milestone computing has pursued for 14 years.

That’s a big number. So before we go on, it’s worth putting into more human terms.

Imagine giving all 7.9 billion people on the planet a pencil and a list of simple arithmetic or multiplication problems. Now, ask everyone to solve one problem per second for four and a half years. By marshaling the math skills of the Earth’s population for nearly half a decade, you’ve now solved over a quintillion problems.
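Here is the quick arithmetic behind that comparison, using approximate figures:

```python
# Rough check of the "everyone on Earth computing for 4.5 years" picture.
people = 7.9e9                       # approximate world population
seconds = 4.5 * 365.25 * 24 * 3600   # seconds in four and a half years
problems = people * seconds          # one problem per person per second
print(f"{problems:.2e}")             # ~1.12e+18, just over a quintillion
```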

Tohoku University scientists in Japan have developed a mathematical description of what happens within tiny magnets as they fluctuate between states when an electric current and magnetic field are applied. Their findings, published in the journal Nature Communications, could act as the foundation for engineering more advanced computers that can quantify uncertainty while interpreting complex data.

Classical computers have gotten us this far, but there are some problems they cannot address efficiently. Scientists have been working to address this by engineering computers that can use the laws of quantum physics to recognize patterns in data. But these so-called quantum computers are still in the early stages of development and are highly sensitive to their surroundings, requiring extremely low temperatures to function.

Now, scientists are looking at something different: a concept called probabilistic computing. This type of computer, which could function at room temperature, would be able to infer potential answers from complex input. A simple example of this kind of problem would be inferring information about a person from their purchasing behavior. Instead of providing a single, discrete result, the computer picks out patterns and delivers a good guess of what the result might be.
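A toy sketch of the idea (an illustration, not the Tohoku model): a probabilistic bit, or p-bit, fluctuates randomly between 0 and 1 with a bias set by its input, and repeated sampling yields a likelihood rather than a single discrete answer. In the hardware described above, the fluctuating element would be a tiny magnet driven by a current and a magnetic field rather than a software random-number generator.

```python
import random

def p_bit(bias: float) -> int:
    """Toy probabilistic bit: 1 with probability `bias`, else 0.
    In probabilistic-computing hardware this role would be played by a
    nanomagnet fluctuating between two states."""
    return 1 if random.random() < bias else 0

# Hypothetical input: a score derived from, say, purchasing behavior.
bias = 0.7

# Sample many times and report a likelihood instead of a single yes/no answer.
samples = [p_bit(bias) for _ in range(10_000)]
print(f"estimated probability: {sum(samples) / len(samples):.2f}")
```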

Foresight Existential Hope Group.
Program & apply to join: https://foresight.org/existential-hope/

On the Existential Hope podcast (https://www.existentialhope.com), we invite scientists to speak about long-termism. Each month, we drop a podcast episode in which we interview a visionary scientist to discuss the science and technology that can accelerate humanity toward desirable outcomes.

Xhope Special with Foresight Fellow Morgan Levine.

Morgan Levine is a ladder-rank Assistant Professor in the Department of Pathology at the Yale School of Medicine and a member of both the Yale Combined Program in Computational Biology and Bioinformatics and the Yale Center for Research on Aging. Her work relies on an interdisciplinary approach, integrating theories and methods from statistical genetics, computational biology, and mathematical demography to develop biomarkers of aging for humans and animal models using high-dimensional omics data. As PI or co-investigator on multiple NIH-, foundation-, and university-funded projects, she has extensive experience using systems-level and machine learning approaches to track epigenetic, transcriptomic, and proteomic changes with aging.

Physicists at the Max Planck Institute of Quantum Optics have managed to entangle more than a dozen photons efficiently and in a defined way. They are thus creating a basis for a new type of quantum computer. Their study is published in Nature.

The phenomena of the quantum world, which often seem bizarre from the perspective of everyday experience, have long since found their way into technology. Take entanglement: a quantum-physical connection between particles that links them in a strange way over arbitrarily long distances. It can be used, for example, in a quantum computer, a computing machine that, unlike a conventional computer, can perform numerous mathematical operations simultaneously. However, to use a quantum computer profitably, a large number of entangled particles must work together. They serve as the basic units of calculation, the so-called qubits.

“Photons, the particles of light, are particularly well suited for this because they are robust by nature and easy to manipulate,” says Philip Thomas, a doctoral student at the Max Planck Institute of Quantum Optics (MPQ) in Garching near Munich. Together with colleagues from the Quantum Dynamics Division led by Prof. Gerhard Rempe, he has now taken an important step towards making entangled photons usable for technological applications such as quantum computing: for the first time, the team generated up to 14 entangled photons in a defined way and with high efficiency.
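For a sense of what "entangled in a defined way" can mean, here is a small statevector sketch (an illustration only, not the MPQ team's method) of a GHZ state, a standard example of a multi-particle entangled state whose qubits are perfectly correlated when measured.

```python
import numpy as np

# GHZ state of n qubits: (|00...0> + |11...1>) / sqrt(2).
# Kept small here; the experiment entangled up to 14 photons.
n = 4
dim = 2 ** n
ghz = np.zeros(dim, dtype=complex)
ghz[0] = ghz[-1] = 1 / np.sqrt(2)   # superposition of all-zeros and all-ones

# Only two measurement outcomes occur, and they are perfectly correlated.
probs = np.abs(ghz) ** 2
for index in np.flatnonzero(probs > 1e-12):
    print(f"|{int(index):0{n}b}>  probability {probs[int(index)]:.2f}")
```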