A graduate research assistant at The University of Alabama in Huntsville (UAH), a part of The University of Alabama system, has published a paper in the journal Astronomy & Astrophysics that builds on an earlier study to help understand why the solar corona is so hot compared to the surface of the sun itself.

To shed further light on this age-old mystery, Syed Ayaz, a Ph.D. candidate in the UAH Center for Space Plasma and Aeronomic Research (CSPAR), employed a statistical model known as a Kappa distribution to describe the velocity of particles in space plasmas, while incorporating the interaction of suprathermal particles with kinetic Alfvén waves (KAWs).
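The Kappa distribution generalizes the Maxwellian by adding a power-law tail of suprathermal particles, and it recovers the Maxwellian as κ → ∞. The sketch below illustrates that behavior using a standard textbook form of the distribution; the functional form and parameter values here are generic choices, not taken from the paper itself.

```python
import math

def kappa_dist(v, theta, kappa, n=1.0):
    """Isotropic 3-D Kappa velocity distribution (standard textbook form).

    v     : particle speed
    theta : thermal speed parameter
    kappa : spectral index (kappa > 3/2); larger kappa -> closer to Maxwellian
    n     : number density
    """
    norm = (n / (math.pi ** 1.5 * theta ** 3)
            * math.gamma(kappa + 1)
            / (kappa ** 1.5 * math.gamma(kappa - 0.5)))
    return norm * (1.0 + v ** 2 / (kappa * theta ** 2)) ** (-(kappa + 1))

def maxwellian(v, theta, n=1.0):
    """Maxwellian limit of the Kappa distribution (kappa -> infinity)."""
    return n / (math.pi ** 1.5 * theta ** 3) * math.exp(-(v / theta) ** 2)

# Suprathermal tail: at v = 3*theta, a kappa = 3 distribution carries far
# more particles than a Maxwellian at the same thermal speed.
tail_ratio = kappa_dist(3.0, 1.0, 3.0) / maxwellian(3.0, 1.0)

# Maxwellian limit: for large kappa the two forms nearly coincide.
limit_ratio = kappa_dist(1.0, 1.0, 50.0) / maxwellian(1.0, 1.0)
print(f"tail enhancement ~{tail_ratio:.0f}x, large-kappa ratio ~{limit_ratio:.3f}")
```

The enhanced high-velocity tail is exactly the suprathermal population whose interaction with kinetic Alfvén waves the study models.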

KAWs are oscillations of charged particles and magnetic fields that travel through the plasma, driven by motions in the photosphere, the sun’s visible surface. The waves are a valuable tool for modeling various phenomena in the solar system, including particle acceleration and wave-particle interactions.

Constraining the origin of Earth’s building blocks requires knowledge of the chemical and isotopic characteristics of the source region(s) where these materials accreted. The siderophile elements Mo and Ru are well suited to investigating the mass-independent nucleosynthetic (i.e., “genetic”) signatures of material that contributed to the latter stages of Earth’s formation. Studies contrasting the Mo and Ru isotopic compositions of the bulk silicate Earth (BSE) to genetic signatures of meteorites, however, have reported conflicting estimates of the proportions of the non-carbonaceous type or NC (presumptive inner Solar System origin) and carbonaceous chondrite type or CC (presumptive outer Solar System origin) materials delivered to Earth during late-stage accretion (likely including the Moon-forming event and onwards).
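The conflicting NC/CC proportion estimates come from isotope mixing systematics. As an illustration only, a two-endmember mass balance determines the CC fraction from the endmember and bulk-silicate-Earth anomalies; the numbers below are entirely made up (the real anomaly values and their uncertainties are precisely what the studies dispute).

```python
def cc_fraction(mu_bse, mu_nc, mu_cc):
    """Two-endmember mixing: fraction of CC material required so that an
    NC/CC mixture matches the bulk silicate Earth (BSE) isotope anomaly.

    All arguments are nucleosynthetic anomalies in the same units (e.g.
    ppm deviations from a terrestrial standard). Purely illustrative.
    """
    return (mu_bse - mu_nc) / (mu_cc - mu_nc)

# Hypothetical Mo isotope anomalies (illustrative values only):
mu_nc, mu_cc, mu_bse = -10.0, 30.0, -4.0
f_cc = cc_fraction(mu_bse, mu_nc, mu_cc)
print(f"implied CC fraction: {f_cc:.2f}")  # 0.15 with these made-up numbers
```

Because the inferred fraction depends linearly on where the BSE value sits between the endmembers, even small differences in measured anomalies shift the NC/CC proportions substantially, which is one way such studies can reach conflicting estimates.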

On a clear spring evening in Michigan, the stars aligned — just not in the way Upfront Ventures partner Nick Kim expected.

He’d just led a $9.5 million seed round for OurSky, a software platform for space observational data, and was eager to see what its telescope partner PlaneWave Instruments could do.

But when they rolled out the telescopes that night at PlaneWave’s manufacturing facility, he was stuck waiting.

Professor Graham Oppy discusses the Turing Test, whether AI can understand, whether it can be more ethical than humans, moral realism, AI alignment, incoherence in human value, indirect normativity and much more.

Chapters:
0:00 The Turing test.
6:06 Agentic LLMs.
6:42 Concern about non-anthropocentric intelligence.
7:57 Machine understanding & the Chinese Room argument.
10:21 AI ‘grokking’ — seemingly understanding stuff.
13:06 AI and fact checking.
15:01 Alternative tests for assessing AI capability.
17:35 Moral Turing Tests — Can AI be highly moral?
18:37 Philosophy’s role in AI development.
21:51 Can AI help progress philosophy?
23:48 Increasing precision in the language of philosophy via technoscience.
24:54 Should philosophers be more involved in AI development?
26:59 Moral realism & finding universal principles.
31:02 Empiricism & moral truth.
32:09 Challenges to moral realism.
33:09 Truth and facts.
36:26 Are suffering and pleasure real?
37:54 Signatures of pain.
39:25 AI learning from morally relevant features of reality.
41:22 AI self-improvement.
42:36 AI mind reading.
43:46 Can AI learn to care via moral realism?
45:42 Bias in AI training data.
46:26 Metaontology.
48:27 Is AI conscious?
49:45 Can AI help resolve moral disagreements?
51:07 ‘Little’ philosophical progress.
54:09 Does the human condition prevent or retard widespread value convergence?
55:04 Difficulties in AI aligning to incoherent human values.
56:30 Empirically informed alignment.
58:41 Training AI to be humble.
59:42 Paperclip maximizers.
1:00:41 Indirect specification — avoiding AI totalizing narrow and poorly defined goals.
1:02:35 Humility.
1:03:55 Epistemic deference to ‘jupiter-brain’ AI.
1:05:27 Indirect normativity — verifying jupiter-brain oracle AI’s suggested actions.
1:08:25 Ideal observer theory.
1:10:45 Veil of ignorance.
1:13:51 Divine psychology.
1:16:21 The problem of evil — an indifferent god?
1:17:21 Ideal observer theory and moral realism.

See Wikipedia article on Graham Oppy: https://en.wikipedia.org/wiki/Graham_Oppy

Have any ideas about people to interview? Want to be notified about future events? Any comments about the STF series? Please fill out this form: https://docs.google.com/forms/d/1mr9P…

Kind regards, Adam Ford

x: https://twitter.com/oppygraham

#AI #philosophy #aisafety #ethics

Many thanks for tuning in!
Please support SciFuture by subscribing and sharing!

Groundbreaking experiments suggest plants might be living, thinking, and feeling entities, challenging our understanding of consciousness.


Imagine walking through a dense forest, feeling the hush of nature all around you. You might assume that the only beings truly aware in that space are the birds in the trees, the insects in the soil, or perhaps yourself. But what if the trees, the flowers, and even the grass beneath your feet are more conscious than we’ve ever given them credit for?

For centuries, science has treated consciousness as a function of the brain—a phenomenon exclusive to creatures with neurons and synapses. Yet recent studies on plant behavior challenge this long-held assumption. Plants exhibit problem-solving skills, communicate through underground networks, and even appear to remember past experiences. Some researchers now argue that consciousness might not be a byproduct of the brain at all, but rather an intrinsic quality of life itself.

If intelligence can emerge without neurons, could consciousness exist beyond the human mind? And if so, does this force extend throughout nature, blurring the lines between what we consider sentient and what we don’t? These questions push the boundaries of both science and spirituality, hinting at a reality far more interconnected—and perhaps more conscious—than we ever imagined.

The spectrum of cosmic-ray antiprotons has been measured for a full solar cycle, which may allow a better understanding of the sources and transport mechanisms of these high-energy particles.

The heliosphere is a region of space extending approximately 122 astronomical units (au) from the Sun (1 au being the average distance between the Sun and Earth). This volume mostly contains plasma originating from the Sun but also various charged particles with higher energies. These particles can be categorized according to their energies and origins: Lower-energy solar energetic particles, for instance, come from the Sun itself, while Jovian electrons have their origin in the magnetosphere of Jupiter. Another such population comes from outside the Solar System: galactic cosmic rays (GCRs), which mostly consist of protons and electrons and their antiparticles and span a vast range of energies from mega-electron-volts to exa-electron-volts [1]. Astonishingly, energies at the high end of this range would correspond to a single particle carrying as much kinetic energy as a well-thrown baseball.
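The baseball comparison can be checked with a back-of-envelope conversion from electron-volts to joules. Note that the comparison is usually made for the rarest ultra-high-energy events around 3×10^20 eV (the scale of the most extreme cosmic rays ever detected) rather than at exactly 1 EeV; the baseball mass below is the standard regulation value of about 0.145 kg.

```python
import math

EV_TO_J = 1.602176634e-19   # SI definition of the electron-volt, in joules
BASEBALL_KG = 0.145         # regulation baseball mass

def baseball_speed(energy_ev):
    """Speed at which a baseball would carry the given particle energy,
    from E = (1/2) m v^2, in metres per second."""
    e_joules = energy_ev * EV_TO_J
    return math.sqrt(2.0 * e_joules / BASEBALL_KG)

# 1 EeV (10^18 eV): the high end of the range quoted above.
v_eev = baseball_speed(1e18)
# ~3e20 eV: the most extreme cosmic-ray energies ever observed.
v_uhecr = baseball_speed(3e20)
print(f"1 EeV   -> baseball at {v_eev:.1f} m/s (a gentle toss)")
print(f"3e20 eV -> baseball at {v_uhecr:.0f} m/s (a hard throw)")
```

A single subatomic particle at 3×10^20 eV carries roughly 48 J, comparable to a baseball thrown at about 26 m/s (close to 60 mph).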

Humans are inching closer to living beyond Earth, but sustaining life on the moon or Mars remains a formidable challenge.

Mars is the second smallest planet in our solar system and the fourth planet from the sun. It is a dusty, cold, desert world with a very thin atmosphere. Iron oxide is prevalent on Mars’ surface, giving the planet its reddish color and its nickname, “The Red Planet.” Mars’ name comes from the Roman god of war.

At the heart of language neuroscience lies a fundamental question: How does the human brain process the rich variety of languages? Recent developments in Natural Language Processing, particularly in multilingual neural network language models, offer a promising avenue to answer this question by providing a theory-agnostic way of representing linguistic content across languages. Our study leverages these advances to ask how the brains of native speakers of 21 languages respond to linguistic stimuli, and to what extent linguistic representations are similar across languages. We combined existing (12 languages across 4 language families; n=24 participants) and newly collected fMRI data (9 languages across 4 language families; n=27 participants) to evaluate a series of encoding models predicting brain activity in the language network based on representations from diverse multilingual language models (20 models across 8 model classes). We found evidence of cross-lingual robustness in the alignment between language representations in artificial and biological neural networks. Critically, we showed that the encoding models can be transferred zero-shot across languages, so that a model trained to predict brain activity in a set of languages can account for brain responses in a held-out language, even across language families. These results imply a shared component in the processing of different languages, plausibly related to a shared meaning space.
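The zero-shot transfer logic can be sketched with a toy linear encoding model: if brain responses in every language are (noisily) a shared linear function of the language-model embeddings, a regression fit on some languages will predict responses in a held-out one. Everything below is synthetic and illustrative, a sketch of the idea rather than the study's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
d_embed, n_voxels = 32, 10
W_shared = rng.normal(size=(d_embed, n_voxels))  # shared meaning-to-brain map

def simulate_language(n_stimuli, noise=0.1):
    """Synthetic (embedding, fMRI response) pairs for one 'language'."""
    X = rng.normal(size=(n_stimuli, d_embed))            # multilingual LM embeddings
    Y = X @ W_shared + noise * rng.normal(size=(n_stimuli, n_voxels))
    return X, Y

def fit_ridge(X, Y, lam=1.0):
    """Closed-form ridge regression encoding model."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ Y)

# Train on two "languages", then evaluate zero-shot on a third.
X_a, Y_a = simulate_language(200)
X_b, Y_b = simulate_language(200)
W_hat = fit_ridge(np.vstack([X_a, X_b]), np.vstack([Y_a, Y_b]))

X_c, Y_c = simulate_language(100)   # held-out language, never seen in training
Y_pred = X_c @ W_hat
r = np.mean([np.corrcoef(Y_pred[:, v], Y_c[:, v])[0, 1] for v in range(n_voxels)])
print(f"mean held-out voxel correlation: {r:.3f}")
```

High held-out correlation here follows directly from the shared map assumption; in the study, the analogous zero-shot transfer across real languages is what supports the inference of a shared, plausibly meaning-related component.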

The authors have declared no competing interest.