
Is this the breakthrough the world has been waiting for from the Search for Extraterrestrial Intelligence Institute?

A scientist, Peter Ma, has applied machine learning and artificial intelligence to data collected by the Search for Extraterrestrial Intelligence (SETI) Institute, a press statement reveals.

Algorithm finds 8 promising signals that could be of alien origin.



Based on initial results, there is a slight chance the new method may have unearthed non-Earth-based “technosignatures”. That would mean it had achieved SETI’s goal of finding signs of extraterrestrial intelligence.

As neural networks become more powerful, algorithms have become capable of turning ordinary text into images, animations, and even short videos. These algorithms have generated significant controversy. An AI-generated image recently won first prize in an annual art competition, while the Getty Images stock photo library is currently taking legal action against the developers of an AI art algorithm that it believes was unlawfully trained using Getty’s images.

So the music equivalent of these systems shouldn’t come as much of a surprise. And yet the implications are extraordinary.

A group of researchers at Google has unveiled an AI system capable of turning ordinary text descriptions into rich, varied, and relevant music. The company has showcased these capabilities using descriptions of famous artworks to generate music.

There are two aspects to a computer’s power: the number of operations its hardware can execute per second and the efficiency of the algorithms it runs. The hardware speed is limited by the laws of physics. Algorithms—basically sets of instructions—are written by humans and translated into a sequence of operations that computer hardware can execute. Even if a computer’s speed could reach the physical limit, computational hurdles remain due to the limits of algorithms.
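The gap between a good and a bad algorithm can dwarf any hardware speedup. As a simple illustration (not from the article), the Python sketch below solves the same problem two ways; on identical hardware, the quadratic version is orders of magnitude slower as the input grows.

```python
import time
import random

def has_duplicate_quadratic(values):
    # O(n^2): compare every pair of elements.
    for i in range(len(values)):
        for j in range(i + 1, len(values)):
            if values[i] == values[j]:
                return True
    return False

def has_duplicate_hashed(values):
    # O(n) expected: a hash set makes a single pass suffice.
    seen = set()
    for v in values:
        if v in seen:
            return True
        seen.add(v)
    return False

# Sampling without replacement guarantees no duplicates: the worst case.
data = random.sample(range(1_000_000), 5_000)

for fn in (has_duplicate_quadratic, has_duplicate_hashed):
    start = time.perf_counter()
    fn(data)
    print(fn.__name__, f"{time.perf_counter() - start:.4f}s")
```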

These hurdles include problems that are impossible for computers to solve, as well as problems that are theoretically solvable but in practice beyond the capabilities of even the most powerful computers imaginable. Mathematicians and computer scientists attempt to determine whether a problem is solvable by trying it out on an imaginary machine.
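The best-known unsolvable problem is the halting problem. The sketch below (a standard textbook argument, not from the article) shows in Python why no perfect `halts(program, argument)` checker can exist: feeding a deliberately contrary program to itself forces a contradiction.

```python
# Hypothetical: suppose someone handed us a perfect halting oracle.
def halts(program, argument):
    """Pretend this returns True iff program(argument) eventually halts."""
    raise NotImplementedError("No such function can exist; see below.")

def contrary(program):
    # Do the opposite of whatever the oracle predicts program(program) does.
    if halts(program, program):
        while True:      # oracle said "halts" -> loop forever
            pass
    return               # oracle said "loops" -> halt immediately

# contrary(contrary) halts exactly when halts() says it doesn't, so no
# correct halts() can exist: the problem is undecidable.
```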

The dynamics of electrons subjected to voltage pulses in a thin semiconductor layer is investigated using a kinetic approach based on the solution of the electron Boltzmann equation via particle-in-cell/Monte Carlo collision simulations. The results showed that, due to the fairly high plasma density, oscillations emerge from a highly nonlinear interaction between the space-charge field and the electrons. The voltage pulse excites electron waves with dynamics and phase-space trajectories that depend on the doping level. High-amplitude oscillations take place during the relaxation phase and are subsequently damped over time-scales in the range 100–400 fs, decreasing with the doping level. The power spectra of these oscillations show a high-energy band and a low-energy peak, attributed to bounded plasma resonances and to a sheath effect, respectively. In the high-doping case, the high-energy THz domain reduces to sharp, well-defined peaks. The radiative power that would be emitted by the thin semiconductor layer strongly depends on the competition between damping and radiative decay in the electron dynamics. Simulations showed that higher doping levels favor enhanced magnitude and much slower damping of the high-frequency current, which would strongly enhance the emitted level of THz radiation.
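For readers unfamiliar with the method, the sketch below shows the bare skeleton of a 1D electrostatic particle-in-cell loop in Python: deposit charge on a grid, solve Poisson’s equation for the field, push the particles. It is a generic, hedged illustration of the technique with made-up normalized parameters; the study itself used far more elaborate PIC/Monte Carlo collision simulations.

```python
# Minimal 1D electrostatic particle-in-cell (PIC) sketch -- an illustration of
# the general method named in the abstract, not the authors' simulation
# (no Monte Carlo collisions; normalized units; hypothetical parameters).
import numpy as np

ng, L, n_part = 64, 2 * np.pi, 10_000    # grid points, domain length, particles
dx, dt, steps = L / ng, 0.1, 200
qm = -1.0                                # electron charge-to-mass ratio (normalized)

rng = np.random.default_rng(0)
x = rng.uniform(0, L, n_part)            # positions
v = rng.normal(0, 0.1, n_part)           # thermal velocities
weight = L / n_part                      # so mean electron density is 1

k = np.fft.rfftfreq(ng, d=dx) * 2 * np.pi  # wavenumbers for the Poisson solve

for _ in range(steps):
    # 1) Deposit charge on the grid (nearest-grid-point weighting).
    idx = (x / dx).astype(int) % ng
    density = np.bincount(idx, minlength=ng) * weight / dx
    rho = 1.0 - density                  # fixed neutralizing ion background

    # 2) Solve d^2(phi)/dx^2 = -rho spectrally; E = -d(phi)/dx => E_k = -i rho_k / k.
    rho_k = np.fft.rfft(rho)
    E_k = np.zeros_like(rho_k)
    E_k[1:] = -1j * rho_k[1:] / k[1:]
    E = np.fft.irfft(E_k, n=ng)

    # 3) Gather the field at particle positions and push (kick, then drift).
    v += qm * E[idx] * dt
    x = (x + v * dt) % L
```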

Edited by Rob Appleby and Connie Potter (Comma Press)

In The Ogre, the Monk and the Maiden, Margaret Drabble’s ingenious story for the new sci-fi anthology Collision, a character called Jaz works on “the interface of language and quantum physics”. Jaz’s speciality is “the speaking of the inexpressible”. Science fiction authors have long grappled with translating cutting-edge research – much of it grounded in what Drabble calls “the Esperanto of Equations” – into everyday language and engaging plots.

Asteroids may serve as future bases and colonies for humanity as we travel into space, but could they also be converted into spaceships to take us to strange new worlds around distant stars?

Visit our Website: http://www.isaacarthur.net.
Join Nebula: https://go.nebula.tv/isaacarthur.
Support us on Patreon: https://www.patreon.com/IsaacArthur.
Support us on Subscribestar: https://www.subscribestar.com/isaac-arthur.
Facebook Group: https://www.facebook.com/groups/1583992725237264/
Reddit: https://www.reddit.com/r/IsaacArthur/
Twitter: https://twitter.com/Isaac_A_Arthur (follow us and RT our future content).
SFIA Discord Server: https://discord.gg/53GAShE

Listen to or download the audio of this episode from SoundCloud:
Audio-only version: https://soundcloud.com/isaac-arthur-148927746/using-asteroids-as-spaceships.
Episode’s Narration-only version: https://soundcloud.com/isaac-arthur-148927746/using-asteroid…ation-only.

Credits:
Using Asteroids As Spaceships.
Science & Futurism with Isaac Arthur.
Episode 379, January 26, 2023
Written, Produced & Narrated by Isaac Arthur.

Editors:
Briana Brownell.
David McFarlane.
Donagh B.

Graphics by:

Whether we realize it or not, cryptography is the fundamental building block on which our digital lives are based. Without sufficient cryptography and the inherent trust that it engenders, every aspect of the digital human condition we know and rely on today would never have come to fruition, much less continue to evolve at its current staggering pace. The internet, digital signatures, critical infrastructure, financial systems, and even the remote work that helped the world limp along during the recent global pandemic all rely on one critical assumption – that the current encryption employed today is unbreakable by even the most powerful computers in existence. But what if that assumption were not only challenged but realistically compromised?

This is exactly what happened when Peter Shor proposed his algorithm in 1994, now known as Shor’s Algorithm. The key to unlocking the encryption on which today’s digital security relies lies in finding the prime factors of large integers. While factoring is relatively simple for small integers with only a few digits, factoring integers with thousands of digits is another matter altogether. Shor proposed a polynomial-time quantum algorithm to solve this factoring problem. I’ll leave it to the more qualified mathematicians to explain the theory behind this algorithm, but suffice it to say that, when coupled with a quantum computer, Shor’s Algorithm drastically reduces the time it would take to factor these larger integers by multiple orders of magnitude.

For example, the most powerful classical computer today would take millions of years to find the prime factors of a 2048-bit composite integer. Without Shor’s Algorithm, even quantum computers would take such an inordinate amount of time to accomplish the task as to render it useless to bad actors. With Shor’s Algorithm, this same factoring can potentially be accomplished in a matter of hours.
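The quantum computer’s only job in Shor’s Algorithm is to find the period r of f(x) = a^x mod N; the rest is classical number theory. The Python sketch below illustrates that classical reduction, with the quantum step replaced by brute-force period finding (so it only works for toy-sized N):

```python
import math
import random

def find_period(a, n):
    # Stand-in for the quantum subroutine: brute-force the order r of a mod n,
    # i.e. the smallest r > 0 with a^r = 1 (mod n). Only feasible for tiny n.
    r, value = 1, a % n
    while value != 1:
        value = (value * a) % n
        r += 1
    return r

def shor_classical_part(n):
    # Classical reduction from factoring to order finding (Shor, 1994).
    while True:
        a = random.randrange(2, n)
        g = math.gcd(a, n)
        if g > 1:
            return g                  # lucky: a already shares a factor with n
        r = find_period(a, n)
        if r % 2 == 1:
            continue                  # need an even period; retry
        y = pow(a, r // 2, n)
        if y == n - 1:
            continue                  # trivial square root of 1; retry
        return math.gcd(y - 1, n)     # a nontrivial factor of n

print(shor_classical_part(3233))      # 3233 = 53 * 61
```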

A new technological development by Tel Aviv University has made it possible for a robot to smell using a biological sensor. The sensor sends electrical signals as a response to the presence of a nearby odor, which the robot can detect and interpret.

In this new study, the researchers successfully connected the biological sensor to an electronic system and, using a machine learning algorithm, were able to identify odors with a level of sensitivity 10,000 times higher than that of a commonly used electronic device. The researchers believe that in light of the success of their research, this technology may also be used in the future to identify explosives, drugs, diseases, and more.
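The article doesn’t detail the algorithm, but odor identification from sensor readings is typically framed as supervised classification. As a purely illustrative, hypothetical sketch (the features, data, and model are invented, not the Tel Aviv team’s actual pipeline), one might train a classifier on summary features of the sensor’s electrical response:

```python
# Hypothetical sketch: classify odors from features of a biosensor's
# electrical response. Synthetic data stands in for real recordings.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(1)

def extract_features(signal):
    # Simple summary statistics of a 1D voltage trace.
    return [signal.mean(), signal.std(), signal.max() - signal.min()]

# Fake dataset: 3 odor classes, each with a different response profile.
X, y = [], []
for label, (amp, noise) in enumerate([(1.0, 0.1), (2.0, 0.2), (0.5, 0.3)]):
    for _ in range(100):
        trace = amp * np.sin(np.linspace(0, 10, 500)) + rng.normal(0, noise, 500)
        X.append(extract_features(trace))
        y.append(label)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = SVC().fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```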

The research was led by doctoral student Neta Shvil of Tel Aviv University’s Sagol School of Neuroscience, Dr. Ben Maoz of the Fleischman Faculty of Engineering and the Sagol School of Neuroscience, and Prof. Yossi Yovel and Prof. Amir Ayali of the School of Zoology and the Sagol School of Neuroscience. The results of the study were published in Biosensors and Bioelectronics.

Large Language Models (LLMs) are on fire, capturing public attention with their ability to provide seemingly impressive completions to user prompts (NYT coverage). They are a delicate combination of a radically simplistic algorithm with massive amounts of data and computing power. Each model is trained by playing a guess-the-next-word game with itself over and over again: it looks at a partial sentence and guesses the following word. If it guesses correctly, it updates its parameters to reinforce its confidence; otherwise, it learns from the error and gives a better guess next time.
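A minimal sketch of that training loop, assuming PyTorch and a toy corpus (everything here is illustrative, not how production LLMs are built at scale):

```python
# Toy next-word prediction loop in PyTorch -- an illustrative sketch of the
# "guess-the-next-word game", not a production LLM training setup.
import torch
import torch.nn as nn

corpus = "the cat sat on the mat the dog sat on the rug".split()
vocab = sorted(set(corpus))
ix = {w: i for i, w in enumerate(vocab)}

# Training pairs: (current word, next word).
pairs = [(ix[a], ix[b]) for a, b in zip(corpus, corpus[1:])]
inputs = torch.tensor([a for a, _ in pairs])
targets = torch.tensor([b for _, b in pairs])

# A one-layer model: embed the current word, score every candidate next word.
model = nn.Sequential(nn.Embedding(len(vocab), 16), nn.Linear(16, len(vocab)))
optimizer = torch.optim.Adam(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

for step in range(200):
    logits = model(inputs)           # guess a distribution over next words
    loss = loss_fn(logits, targets)  # penalize wrong guesses
    optimizer.zero_grad()
    loss.backward()                  # learn from the error...
    optimizer.step()                 # ...by nudging the parameters

# After training, the model guesses a plausible continuation of "the".
probs = model(torch.tensor([ix["the"]])).softmax(-1)
print(vocab[probs.argmax().item()])
```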

While the underpinning training algorithm remains roughly the same, the recent increase in model and data size has brought about qualitatively new behaviors such as writing basic code or solving logic puzzles.

How do these models achieve this kind of performance? Do they merely memorize training data and read it back out loud, or are they picking up the rules of English grammar and the syntax of the C language? Are they building something like an internal world model—an understandable model of the process producing the sequences?
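One common way researchers approach this question (a standard technique, not described in this excerpt) is to train a small “probe” on the network’s hidden activations: if a simple classifier can read some property of the world off the internal states, the model plausibly represents it. A hypothetical sketch with synthetic activations:

```python
# Hypothetical probing sketch: can a linear classifier recover a "world state"
# from a model's hidden activations? The data here is synthetic, standing in
# for activations collected from a real trained model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, d = 2000, 64

# `state` is some ground-truth fact about each input (e.g. a board square);
# the activations encode it along one direction, plus noise.
state = rng.integers(0, 2, n)
direction = rng.normal(size=d)
activations = rng.normal(size=(n, d)) + np.outer(state, direction)

probe = LogisticRegression(max_iter=1000).fit(activations[:1500], state[:1500])
acc = probe.score(activations[1500:], state[1500:])
print(f"probe accuracy: {acc:.2f}  (near 1.0 => the state is linearly decodable)")
```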