
Blog


Dec 26, 2023

How can we construct a science of consciousness?

Posted by in categories: neuroscience, science

This chapter gives an overview of the projects facing a science of consciousness. Such a science must integrate third-person data about behavior and brain processes with first-person data about conscious experience. Empirical projects for integrating these data include those of contrasting conscious and unconscious processes, investigating the contents of consciousness, finding neural correlates of consciousness, and eventually inferring underlying principles connecting consciousness with physical processes. These projects are discussed with reference to current experimental research on consciousness. Some obstacles that a science of consciousness faces are also discussed.

Dec 26, 2023

Inner Experience — Direct Access to Reality: A Complementarist Ontology and Dual Aspect Monism Support a Broader Epistemology

Posted by in categories: ethics, mathematics, neuroscience

Ontology, the ideas we have about the nature of reality, and epistemology, our concepts about how to gain knowledge about the world, are interdependent. Currently, the dominant ontology in science is a materialist model, with an empiricist epistemology associated with it. Historically speaking, there was a more comprehensive notion at the cradle of modern science in the Middle Ages. Then “experience” meant both inner, or first-person, and outer, or third-person, experience. Over the course of history, experience has come to mean only sense experience of outer reality. This has become associated with the ontology that matter is the most important substance in the universe, with everything else (consciousness, mind, values, etc.) being derived from it or reducible to it. This ontology is insufficient to explain the phenomena we are living with: consciousness, as a precondition of this very idea, or anomalous cognitions. These have a robust empirical grounding, although we do not understand them sufficiently. The phenomenology, though, demands some sort of non-local model of the world, and one in which consciousness is not derivative of, but coprimary with, matter. I propose such a complementarist dual-aspect model of consciousness and brain, or mind and matter. This then also entails a different epistemology. For if consciousness is coprimary with matter, then we can also use a deeper exploration of consciousness, as happens in contemplative practice, to reach an understanding of the deep structure of the world, for instance in mathematical or theoretical intuition, and perhaps also in other areas such as ethics. This would entail a kind of contemplative science that would complement our current experiential mode, which is exclusively directed at the outer aspect of our world. Such an epistemology might help us with various issues, such as good theoretical and other intuitions.

Keywords: complementarity; consciousness; contemplative science; dual aspect model; epistemology; introspection; materialism; ontology.

Copyright © 2020 Walach.

Dec 26, 2023

2312.13286.pdf

Posted by in category: futurism

Generative multimodal models are in-context learners.



Dec 26, 2023

Einstein’s Insight: Why Does Gravity Pull Us Down and Not Up?

Posted by in categories: energy, physics, space

Gravity is the reason things with mass or energy are attracted to each other. It is why apples fall toward the ground and planets orbit stars.

Magnets attract some types of metals, but they can also push other magnets away. So how come you feel only the pull of gravity?


Dec 26, 2023

One Step Closer to Living on Mars: AI Unlocks Secrets of Oxygen Production on the Red Planet

Posted by in categories: chemistry, robotics/AI, solar power, space, sustainability

Migrating to and living on Mars have long been themes in science fiction. Before these dreams can become reality, humanity faces significant challenges, such as the scarcity of vital resources like the oxygen needed for long-term survival on the Red Planet. Yet recent discoveries of water activity on Mars have sparked new hope for overcoming these obstacles.

Scientists are now exploring the possibility of splitting water to produce oxygen via electrochemical water oxidation driven by solar power, with the help of oxygen evolution reaction (OER) catalysts. The challenge is to find a way to synthesize these catalysts in situ from materials available on Mars, rather than transporting them from Earth at high cost.
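
For reference (this is standard electrochemistry, not taken from the article itself), the process in question is water splitting: the OER catalyst accelerates the oxygen-producing half-reaction at the anode, while hydrogen is evolved at the cathode. Written for acidic conditions:

```latex
% Water splitting into hydrogen and oxygen (acidic conditions).
% The OER catalyst accelerates the anode half-reaction.
\begin{align*}
\text{Anode (OER):}   \quad & 2\,\mathrm{H_2O} \;\rightarrow\; \mathrm{O_2} + 4\,\mathrm{H^+} + 4\,e^- \\
\text{Cathode (HER):} \quad & 4\,\mathrm{H^+} + 4\,e^- \;\rightarrow\; 2\,\mathrm{H_2} \\
\text{Overall:}       \quad & 2\,\mathrm{H_2O} \;\rightarrow\; 2\,\mathrm{H_2} + \mathrm{O_2}
\end{align*}
```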

Dec 26, 2023

The Future of Depression Diagnosis Is Already Here, Expert Says

Posted by in categories: health, robotics/AI

Artificial intelligence (AI) is poised to revolutionise the way we diagnose and treat illness. It could be particularly helpful for depression, because it could enable more accurate diagnoses and help determine which treatments are more likely to work.

Some 20% of us will have depression at least once in our lifetimes. Around the world, 300 million people are currently experiencing depression, with 1.5 million Australians likely to be depressed at any one time.

Because of this, depression has been described by the World Health Organization as the single biggest contributor to ill health around the world.

Dec 26, 2023

China moves to regulate generative AI

Posted by in categories: robotics/AI, space

China’s cyberspace regulator has unveiled draft measures that would make companies responsible for the data used to train generative artificial intelligence (AI) models such as Midjourney and ChatGPT.

Dec 26, 2023

AI ‘scientist’ re-discovers scientific equations using data

Posted by in categories: information science, robotics/AI

The tool — dubbed ‘AI-Descartes’ by the researchers — aims to speed up scientific discovery by leveraging symbolic regression, which finds equations to fit data.

Given basic operators such as addition, multiplication, and division, the system can generate hundreds to millions of candidate equations, searching for the ones that most accurately describe the relationships in the data.

Using this technique, the AI tool has been able to re-discover, by itself, fundamental equations, including Kepler’s third law of planetary motion, Einstein’s relativistic time-dilation law, and Langmuir’s equation of gas adsorption.
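
As an illustration of the symbolic-regression idea only (not the AI-Descartes system itself, which also couples in logical reasoning), the sketch below brute-forces a tiny space of candidate power-law expressions against synthetic orbital data and recovers Kepler's third law, T = a^1.5 in solar units. The data points, constants, and search grid are invented for the example.

```python
# Minimal symbolic-regression sketch: search a tiny space of candidate
# expressions T = c * a**p and keep the one that best fits the data.
# This illustrates the general idea only, not the AI-Descartes code.
import itertools

# Synthetic data: semi-major axis a (AU) and orbital period T (years),
# generated from Kepler's third law in solar units, T = a**1.5.
data = [(a, a ** 1.5) for a in (0.39, 0.72, 1.0, 1.52, 5.2, 9.58)]

# Candidate "equations": a small grid of constants and exponents.
constants = (0.5, 1.0, 2.0)
powers = (0.5, 1.0, 1.5, 2.0, 3.0)

def mean_squared_error(c, p):
    return sum((T - c * a ** p) ** 2 for a, T in data) / len(data)

best_c, best_p = min(itertools.product(constants, powers),
                     key=lambda cp: mean_squared_error(*cp))
print(f"Best fit: T = {best_c} * a**{best_p} "
      f"(MSE = {mean_squared_error(best_c, best_p):.2e})")
```

Real symbolic-regression systems enumerate full expression trees over the given operators and fit their constants numerically; the small grid search here merely stands in for that much larger search.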

Dec 26, 2023

How A Laser-Plasma Collider Can Turn Light Into Matter In A Matter Of Seconds

Posted by in categories: particle physics, quantum physics

A team led by researchers at Osaka University and the University of California, San Diego has conducted simulations of creating matter solely from collisions of light particles. Their method circumvents what would otherwise be the intensity limitations of modern lasers and can be readily implemented using presently available technology. This work could help experimentally test long-standing theories such as the Standard Model of particle physics, and possibly reveal the need to revise them.

One of the most striking predictions of quantum physics is that matter can be generated solely from light (i.e., photons), and in fact, the astronomical bodies known as pulsars achieve this feat. Directly generating matter in this manner has not been achieved in a laboratory, but it would enable further testing of the theories of basic quantum physics and the fundamental composition of the universe.
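
For reference (this threshold is standard kinematics, not a figure taken from the article): two photons with energies E_1 and E_2 colliding at angle θ can produce an electron-positron pair only if their invariant mass reaches the pair's rest energy.

```latex
% Kinematic threshold for two-photon (Breit-Wheeler) pair production,
% \gamma\gamma \to e^+ e^-:
\sqrt{\,2\,E_1 E_2\,(1-\cos\theta)\,} \;\ge\; 2\,m_e c^2 \approx 1.022\ \mathrm{MeV}.
```

For a head-on collision of equal-energy photons this reduces to E ≥ m_e c² ≈ 511 keV per photon, far above the energy of an optical laser photon.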

In a study published in Physical Review Letters, a team led by researchers at Osaka University has simulated conditions that enable photon–photon collisions, solely by using lasers. The simplicity of the setup and ease of implementation at presently available laser intensities make it a promising candidate for near-future experimental implementation.

Dec 26, 2023

This AI Paper from China Introduces Emu2: A 37 Billion Parameter Multimodal Model Redefining Task Solving and Adaptive Reasoning

Posted by in categories: robotics/AI, transportation

Any activity that requires comprehension and production in one or more modalities counts as a multimodal task, and such tasks can be extremely varied and lengthy. Previous multimodal systems are hard to scale because they rely heavily on gathering a large supervised training set and developing a task-specific architecture, a process that must be repeated for every new task. Moreover, current multimodal models have not matched people’s ability to learn new tasks in context, that is, from only minimal demonstrations or instructions. Generative pretrained language models, by contrast, have recently shown impressive skills in learning from context.

New research by researchers from the Beijing Academy of Artificial Intelligence, Tsinghua University, and Peking University introduces Emu2, a 37-billion-parameter model trained and evaluated on several multimodal tasks. Their findings show that, when scaled up, a multimodal generative pretrained model can likewise learn in context and generalize well to new multimodal tasks. Predicting the next multimodal element (a text token or a visual embedding) is the only objective used during Emu2’s training. This unified generative pretraining technique trains the model on large-scale multimodal sequences, such as text, image-text pairs, and interleaved image-text-video data.

The Emu2 model is generative and multimodal; it learns in a multimodal setting to predict the next element. Emu2’s design has three main parts: a Visual Encoder, a Multimodal Modeling backbone, and a Visual Decoder. To prepare for autoregressive multimodal modeling, the Visual Encoder tokenizes all input images into continuous embeddings, which are then interleaved with text tokens. The Visual Decoder turns the regressed visual embeddings back into an image or video.
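
As a rough sketch of that three-part layout (the module sizes, the stand-in linear layers, and the tiny transformer below are invented for illustration and bear no relation to Emu2's actual components): interleaved text tokens and continuous visual embeddings feed one autoregressive backbone, which either predicts the next text token or regresses the next visual embedding; a decoder then maps visual embeddings back to pixels.

```python
# Toy sketch of a visual-encoder / multimodal-backbone / visual-decoder design.
# All sizes and components are illustrative stand-ins, not Emu2 internals.
import torch
import torch.nn as nn

D = 256  # shared embedding width (illustrative)

class VisualEncoder(nn.Module):
    """Maps an image to a short sequence of continuous visual embeddings."""
    def __init__(self, n_visual_tokens=4):
        super().__init__()
        self.n = n_visual_tokens
        self.proj = nn.Linear(3 * 32 * 32, self.n * D)  # stand-in for a real vision tower

    def forward(self, image):                            # image: (B, 3, 32, 32)
        return self.proj(image.flatten(1)).view(-1, self.n, D)

class MultimodalModel(nn.Module):
    """Autoregressive transformer over interleaved text and visual embeddings."""
    def __init__(self, vocab_size=1000):
        super().__init__()
        self.text_embed = nn.Embedding(vocab_size, D)
        layer = nn.TransformerEncoderLayer(d_model=D, nhead=4, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=2)
        self.text_head = nn.Linear(D, vocab_size)   # classifies the next text token
        self.visual_head = nn.Linear(D, D)          # regresses the next visual embedding

    def forward(self, text_ids, visual_embeds):
        seq = torch.cat([self.text_embed(text_ids), visual_embeds], dim=1)
        L = seq.size(1)                              # causal mask for next-element prediction
        mask = torch.triu(torch.full((L, L), float("-inf")), diagonal=1)
        h = self.backbone(seq, mask=mask)
        return self.text_head(h), self.visual_head(h)

class VisualDecoder(nn.Module):
    """Turns regressed visual embeddings back into pixels."""
    def __init__(self):
        super().__init__()
        self.proj = nn.Linear(D, 3 * 32 * 32)        # stand-in for a diffusion-style decoder

    def forward(self, visual_embeds):                # (B, n, D)
        return self.proj(visual_embeds.mean(dim=1)).view(-1, 3, 32, 32)

# One forward pass on dummy data.
encoder, model, decoder = VisualEncoder(), MultimodalModel(), VisualDecoder()
image = torch.randn(1, 3, 32, 32)
text_ids = torch.randint(0, 1000, (1, 8))
text_logits, visual_preds = model(text_ids, encoder(image))
reconstruction = decoder(visual_preds[:, -4:, :])    # decode the last four positions
print(text_logits.shape, reconstruction.shape)       # (1, 12, 1000) and (1, 3, 32, 32)
```

The appeal of a single next-element objective is that the same backbone can handle text-only, image-to-text, and text-to-image sequences; only the head that reads out the prediction changes.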