
Collision review: How CERN’s stellar secrets became sci-fi gold

Edited by Rob Appleby and Connie Potter (Comma Press)

IN The Ogre, the Monk and the Maiden, Margaret Drabble’s ingenious story for the new sci-fi anthology Collision, a character called Jaz works on “the interface of language and quantum physics”. Jaz’s speciality is “the speaking of the inexpressible”. Science fiction authors have long grappled with translating cutting-edge research – much of it grounded in what Drabble calls “the Esperanto of Equations” – into everyday language and engaging plots.

Using Asteroids As Spaceships

Asteroids may serve as future bases and colonies for humanity as we travel into space, but could they also be converted into spaceships to take us to strange new worlds around distant stars?

Visit our Website: http://www.isaacarthur.net.
Join Nebula: https://go.nebula.tv/isaacarthur.
Support us on Patreon: https://www.patreon.com/IsaacArthur.
Support us on Subscribestar: https://www.subscribestar.com/isaac-arthur.
Facebook Group: https://www.facebook.com/groups/1583992725237264/
Reddit: https://www.reddit.com/r/IsaacArthur/
Twitter: https://twitter.com/Isaac_A_Arthur (follow us and RT our future content).
SFIA Discord Server: https://discord.gg/53GAShE

Listen to or download the audio of this episode from SoundCloud. Episode's audio-only version: https://soundcloud.com/isaac-arthur-148927746/using-asteroids-as-spaceships.
Episode’s Narration-only version: https://soundcloud.com/isaac-arthur-148927746/using-asteroid…ation-only.

Credits:
Using Asteroids As Spaceships.
Science & Futurism with Isaac Arthur.
Episode 379, January 26, 2023
Written, Produced & Narrated by Isaac Arthur.

Editors:
Briana Brownell.
David McFarlane.
Donagh B.

Graphics by:

Quantum Safe Cryptography — A Quantum Leap Needed Now

Whether we realize it or not, cryptography is the fundamental building block on which our digital lives are based. Without sufficient cryptography and the inherent trust that it engenders, every aspect of the digital human condition we know and rely on today would never have come to fruition, much less continue to evolve at its current staggering pace. The internet, digital signatures, critical infrastructure, financial systems and even the remote work that helped the world limp along during the recent global pandemic all rely on one critical assumption: that the encryption employed today is unbreakable by even the most powerful computers in existence. But what if that assumption were not only challenged but realistically compromised?

This is exactly what happened in 1994, when Peter Shor proposed what is now known as Shor's algorithm. The key to unlocking the encryption on which much of today's digital security relies lies in finding the prime factors of large integers. While factoring is relatively simple for small integers with only a few digits, factoring integers with thousands of digits is another matter altogether. Shor proposed a polynomial-time quantum algorithm to solve this factoring problem. I'll leave it to more qualified mathematicians to explain the theory behind it, but suffice it to say that, when run on a quantum computer, Shor's algorithm reduces the time it would take to factor these large integers by many orders of magnitude.

Without Shor's algorithm, even the most powerful classical computer today would take millions of years to find the prime factors of a 2048-bit composite integer, and a quantum computer without it would likewise take far too long for the task to be of any use to bad actors. With Shor's algorithm, that same factoring could potentially be accomplished in a matter of hours.
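To make the idea concrete, here is a minimal classical sketch, in Python, of the reduction at the heart of Shor's algorithm: factoring is reduced to finding the multiplicative order of a random base, and only the order-finding step is brute-forced below, since that is precisely the part a quantum computer speeds up. The function names and the tiny factoring example are illustrative only.

```python
from math import gcd
import random

def find_order(a, n):
    # Brute-force the multiplicative order r of a mod n (smallest r with a^r = 1 mod n).
    # This is the step Shor's quantum period finding accelerates exponentially.
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def factor_via_order_finding(n):
    # Classical skeleton of Shor's reduction from factoring to order finding.
    if n % 2 == 0:
        return 2
    while True:
        a = random.randrange(2, n)
        d = gcd(a, n)
        if d > 1:               # a lucky guess already shares a factor with n
            return d
        r = find_order(a, n)
        if r % 2 == 0:
            y = pow(a, r // 2, n)
            if y != n - 1:      # avoid the trivial case a^(r/2) = -1 (mod n)
                f = gcd(y - 1, n)
                if 1 < f < n:
                    return f

print(factor_via_order_finding(15))  # prints 3 or 5
```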

A robot able to ‘smell’ using a biological sensor

A new technological development from Tel Aviv University has made it possible for a robot to smell using a biological sensor. The sensor sends electrical signals in response to a nearby odor, which the robot can detect and interpret.

In this new study, the researchers successfully connected the biological sensor to an electronic system and, using a machine learning algorithm, were able to identify odors with a level of sensitivity 10,000 times higher than that of a commonly used electronic device. The researchers believe that, in light of the success of their research, this technology may also be used in the future to identify explosives, drugs, diseases, and more.
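The article does not describe the study's actual pipeline, but the general pattern of classifying an odor from a sensor's electrical signal with machine learning can be sketched as follows; the synthetic feature vectors, the four odor classes and the random-forest model are all assumptions made purely for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in data: 200 recordings, 64 signal features each, 4 odor classes.
X = rng.normal(size=(200, 64))
y = rng.integers(0, 4, size=200)
X[np.arange(200), y] += 3.0   # give each class one distinguishing feature bump

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```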

The research was led by doctoral student Neta Shvil of Tel Aviv University's Sagol School of Neuroscience, Dr. Ben Maoz of the Fleischman Faculty of Engineering and the Sagol School of Neuroscience, and Prof. Yossi Yovel and Prof. Amir Ayali of the School of Zoology and the Sagol School of Neuroscience. The results of the study were published in Biosensors and Bioelectronics.

Large Language Model: world models or surface statistics?

Large Language Models (LLMs) are on fire, capturing public attention with their ability to provide seemingly impressive completions to user prompts (NYT coverage). They are a delicate combination of a radically simple algorithm with massive amounts of data and computing power. Such a model is trained by playing a guess-the-next-word game over and over again: each time, it looks at a partial sentence and guesses the following word. If it guesses correctly, it updates its parameters to reinforce its confidence; otherwise, it learns from the error and gives a better guess next time.
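A minimal sketch of that guess-the-next-word objective, assuming PyTorch, looks like this; the toy vocabulary, the small GRU model and the hyperparameters are illustrative stand-ins, not the architecture behind any production LLM.

```python
import torch
import torch.nn as nn

# Toy corpus and vocabulary.
vocab = ["the", "cat", "sat", "on", "mat", "<eos>"]
stoi = {w: i for i, w in enumerate(vocab)}
tokens = torch.tensor([stoi[w] for w in ["the", "cat", "sat", "on", "the", "mat", "<eos>"]])

class TinyLM(nn.Module):
    def __init__(self, vocab_size, dim=16):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.rnn = nn.GRU(dim, dim, batch_first=True)
        self.head = nn.Linear(dim, vocab_size)

    def forward(self, x):
        h, _ = self.rnn(self.embed(x))
        return self.head(h)  # logits for the next token at every position

model = TinyLM(len(vocab))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)

# Inputs are every prefix; targets are the word that actually follows each prefix.
inputs, targets = tokens[:-1].unsqueeze(0), tokens[1:].unsqueeze(0)
for step in range(200):
    logits = model(inputs)
    loss = nn.functional.cross_entropy(logits.view(-1, len(vocab)), targets.view(-1))
    opt.zero_grad()
    loss.backward()   # "learn from the error"
    opt.step()
```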

While the underpinning training algorithm remains roughly the same, the recent increase in model and data size has brought about qualitatively new behaviors such as writing basic code or solving logic puzzles.

How do these models achieve this kind of performance? Do they merely memorize training data and parrot it back, or are they picking up the rules of English grammar and the syntax of the C language? Are they building something like an internal world model—an understandable model of the process producing the sequences?

Luwu Dynamics

Luwu Dynamics products are quadruped robot dogs with 12 degrees of freedom, designed for teenagers learning artificial-intelligence programming. They can perform omni-directional movement, six-dimensional attitude control, attitude stabilization and a variety of motion gaits. Each robot is internally equipped with a 9-axis IMU, joint position sensors and current sensors that feed back its own attitude, joint angles and torque for internal algorithms and secondary development. It can run offline AI functions such as face recognition, image classification, gesture recognition, speech recognition, audio analysis and target tracking, and it supports cross-platform graphical and Python programming.


Meat cultivated from cow cells is kosher, Israel’s chief rabbi rules

JERUSALEM, Jan 19 (Reuters) — Israel’s chief rabbi has given a kosher stamp of approval this week to a company looking to sell steak grown from cow cells — while effectively taking the animal itself out of the equation.

Cultivated meat, grown from animal cells in a lab or manufacturing plant, has been getting a lot of attention as a way to sidestep the environmental toll of the meat industry and address concerns over animal welfare.

This method, however, has raised questions over religious restrictions, like kashrut in Judaism or Islam’s halal.

Quantum Computing with Neutral Atoms

Why the recent surge in jaw-dropping announcements? Why do neutral atoms seem to be leapfrogging other qubit modalities? Keep reading to find out.

The table below highlights the companies working to build quantum computers using neutral atoms as qubits:

And as an added feature, I am writing this post to be “entangled” with the posts of Brian Siegelwax, a respected colleague and quantum algorithm designer. My focus will be on the hardware and corporate details of the companies involved, while Brian's focus will be on actual implementation of the platforms and what it is like to program on their devices. Unfortunately, most of the systems created by the companies noted in this post are not yet available (other than QuEra's), so I will update this post with links to Brian's companion articles as they become available.

ChatGPT Is a Mirror of Our Times

Computers and information technologies were once hailed as a revolution in education. Their benefits are undeniable. They can provide students with far more information than a mere textbook. They can make educational resources more flexible, tailored to individual needs, and they can render interactions between students, parents, and teachers fast and convenient. And what would schools have done during the pandemic lockdowns without video conferencing?

The advent of AI chatbots and large language models such as OpenAI's ChatGPT, launched last November, creates even more opportunities. They can give students practice questions and answers as well as feedback, and assess their work, lightening the load on teachers. Their interactive nature is more motivating to students than the imprecise and often confusing information dumps elicited by Google searches, and they can address specific questions.

The algorithm has no sense that “love” and “embrace” are semantically related.
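To illustrate that distinction, a purely surface-level comparison of the two words finds almost nothing in common, whereas a semantic representation can place them close together; the four-dimensional embedding vectors below are invented for this example and do not come from any real model.

```python
import numpy as np

def surface_match(a: str, b: str) -> float:
    # Character-overlap similarity: knows nothing about meaning.
    sa, sb = set(a), set(b)
    return len(sa & sb) / len(sa | sb)

# Hypothetical embeddings, hand-made for illustration only.
embeddings = {
    "love":       np.array([0.9, 0.8, 0.1, 0.0]),
    "embrace":    np.array([0.8, 0.9, 0.2, 0.1]),
    "carburetor": np.array([0.0, 0.1, 0.9, 0.8]),
}

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

print(surface_match("love", "embrace"))                      # low: one shared letter
print(cosine(embeddings["love"], embeddings["embrace"]))     # high: semantically close
print(cosine(embeddings["love"], embeddings["carburetor"]))  # low: unrelated
```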