
What Constitutes Your Stream of Consciousness?

It wouldn’t shock me if all the buzz around searching for the ‘locus of consciousness’ merely fine-tunes our grasp of how the brain is linked to consciousness — without actually revealing where consciousness comes from, because it’s not generated in the brain. Similarly, your smartphone doesn’t create the Internet or a cellular network; it just processes them. Networks of minds are a common occurrence throughout the natural world. What sets humans apart is the impending explosion of cybernetic connectivity, which could soon evolve into a form of synthetic telepathy and eventually lead to the rise of a unified, global consciousness — what could be termed the Syntellect Emergence.

#consciousness #phenomenology #cybernetics #cognition #neuroscience


In summary, the study of consciousness could be conceptualized through a variety of lenses: as a series of digital perceptual snapshots, as a cybernetic system with its feedback processes, as a grand theater, or perhaps even as a VIP section in a cosmological establishment of magnificent complexity. Today’s leading theories of consciousness are largely complementary, not mutually exclusive. These multiple perspectives not only contribute to philosophical discourse but also herald the dawn of new exploratory avenues, equally enthralling and challenging, in our understanding of consciousness.

In The Cybernetic Theory of Mind (2022), I expand on existing theories to propose conceptual models such as Noocentrism, Digital Presentism (D-Theory of Time), Experiential Realism, Ontological Holism, Multi-Ego Pantheistic Solipsism, and the Omega Singularity, positing a non-local consciousness, or Universal Mind, as the substrate of objective reality. In search of God’s equation, we finally look upward for the source. What many religions call “God” is clearly an interdimensional being within the nested levels of complexity. Besides setting initial conditions for our universe, God speaks to us in the language of religion, spirituality, synchronicities and transcendental experiences.

Quantum Computers Could Crack Encryption Sooner Than Expected With New Algorithm

One of the most well-established and disruptive uses for a future quantum computer is the ability to crack encryption. A new algorithm could significantly lower the barrier to achieving this.

Despite all the hype around quantum computing, there are still significant question marks around what quantum computers will actually be useful for. There are hopes they could accelerate everything from optimization processes to machine learning, but how much easier and faster they’ll be remains unclear in many cases.

One thing is pretty certain though: A sufficiently powerful quantum computer could render our leading cryptographic schemes worthless. While the mathematical puzzles underpinning them are virtually unsolvable by classical computers, they would be entirely tractable for a large enough quantum computer. That’s a problem because these schemes secure most of our information online.
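To make the point above concrete, here is a toy RSA-style sketch in Python (with numbers far too small to be secure): the private key stays private only as long as factoring the public modulus n is infeasible, which is exactly the assumption a sufficiently large quantum computer running Shor’s algorithm would break. This is an illustration of the general principle, not the algorithm discussed in the paper.

```python
# Toy illustration (not a secure implementation): RSA-style encryption rests on
# the difficulty of factoring n = p * q. With tiny numbers, factoring is trivial,
# which is what a large quantum computer would make true for real key sizes.
from math import gcd

p, q = 61, 53                # secret primes (absurdly small for illustration)
n = p * q                    # public modulus
phi = (p - 1) * (q - 1)      # Euler's totient, derivable only if you can factor n
e = 17                       # public exponent
assert gcd(e, phi) == 1
d = pow(e, -1, phi)          # private exponent = modular inverse of e

message = 42
ciphertext = pow(message, e, n)       # anyone can encrypt with the public key (e, n)
assert pow(ciphertext, d, n) == message  # only the holder of d can decrypt

# An attacker who can factor n recovers the private key directly:
def factor(n):
    """Brute-force factoring: feasible here, infeasible classically at 2048 bits."""
    for candidate in range(2, int(n ** 0.5) + 1):
        if n % candidate == 0:
            return candidate, n // candidate

p_found, q_found = factor(n)
d_found = pow(e, -1, (p_found - 1) * (q_found - 1))
assert pow(ciphertext, d_found, n) == message   # encryption broken
```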

Instant evolution: AI designs new robot from scratch in seconds

A team led by Northwestern University researchers has developed the first artificial intelligence (AI) to date that can intelligently design robots from scratch.

To test the new AI, the researchers gave the system a simple prompt: Design a robot that can walk across a flat surface. While it took nature billions of years to evolve the first walking species, the new algorithm compressed that process to lightning speed, designing a successfully walking robot in mere seconds.

But the AI program is not just fast. It also runs on a lightweight personal computer and designs wholly novel structures from scratch. This stands in sharp contrast to other AI systems, which often require energy-hungry supercomputers and colossally large datasets. And even after crunching all that data, those systems are tethered to the constraints of human creativity—only mimicking humans’ past works without an ability to generate new ideas.
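The article does not spell out the algorithm, and the Northwestern system reportedly uses a far more efficient update than plain trial-and-error. Purely as a sketch of the general idea of automated design search it alludes to (propose a body plan, evaluate it, keep what improves), here is a minimal mutate-and-select loop over a toy voxel “body plan”; the fitness function and encoding are placeholders, not the team’s method.

```python
import random

def random_body(n_cells=16):
    # a body plan as a flat grid of voxels: 0 = empty, 1 = rigid, 2 = actuator
    return [random.choice([0, 1, 2]) for _ in range(n_cells)]

def fitness(body):
    # stand-in score: reward a mix of structure and actuation
    # (a real system would score simulated walking distance instead)
    return body.count(1) * 0.5 + body.count(2) * 1.0 - (body.count(0) > 12) * 10

def mutate(body):
    child = body[:]
    child[random.randrange(len(child))] = random.choice([0, 1, 2])
    return child

best = random_body()
for step in range(200):
    candidate = mutate(best)
    if fitness(candidate) >= fitness(best):  # keep changes that score at least as well
        best = candidate

print("best body plan:", best, "score:", fitness(best))
```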

Pythagorean Theorem Found On Clay Tablet 1,000 Years Older Than Pythagoras

Study math for long enough and you will likely have cursed Pythagoras’s name, or said “praise be to Pythagoras” if you’re a bit of a fan of triangles.

But while Pythagoras was an important historical figure in the development of mathematics, he did not figure out the equation most associated with him (a² + b² = c²). In fact, there is an ancient Babylonian tablet (which goes by the catchy name of IM 67118) that uses the Pythagorean theorem to solve for the length of a diagonal inside a rectangle. The tablet, likely used for teaching, dates from 1770 BCE – centuries before Pythagoras was born in around 570 BCE.

Another tablet from around 1800–1600 BCE has a square with labeled triangles inside. Translating the markings from base 60 – the counting system used by ancient Babylonians – showed that these ancient mathematicians were aware of the Pythagorean theorem (not called that, of course) as well as other advanced mathematical concepts.
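As an illustration of how such base-60 markings translate to decimal, the short sketch below converts the well-known sexagesimal digits 1;24,51,10 for the diagonal of a unit square. The article does not name the second tablet; those particular digits are the approximation usually attributed to the tablet catalogued as YBC 7289, and they are used here only to show the conversion and the Pythagorean relation at work.

```python
from math import sqrt, isclose

def from_base_60(whole, *fractional_digits):
    """Convert a sexagesimal number like 1;24,51,10 to a decimal value."""
    value = float(whole)
    for k, digit in enumerate(fractional_digits, start=1):
        value += digit / 60 ** k
    return value

diagonal = from_base_60(1, 24, 51, 10)   # 1.41421296...
print(diagonal, sqrt(2))                 # accurate to about six decimal places
print(abs(diagonal - sqrt(2)) < 1e-6)    # True

# The Pythagorean relation itself, for the diagonal of a unit square:
a = b = 1
c = sqrt(a**2 + b**2)
print(isclose(c, sqrt(2)))               # True: a^2 + b^2 = c^2
```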

AI Can Predict Future Heart Attacks By Analyzing CT Scans

An artificial intelligence platform developed by an Israeli startup can reveal whether a patient is at risk of a heart attack by analyzing their routine chest CT scans.

A new study testing Nanox.AI’s HealthCCSng algorithm on such scans found that 58 percent of patients unknowingly had moderate to severe levels of coronary artery calcium (CAC), or plaque.

CAC is the strongest predictor of future cardiac events, and measuring it typically subjects patients to an additional costly scan that is not normally covered by insurance companies.
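The article does not say how HealthCCSng quantifies calcium; the conventional clinical measure is the Agatston score, which sums, over every calcified lesion, the lesion’s area weighted by its peak CT density. The sketch below illustrates that conventional calculation only; the example lesion values are hypothetical and this is not Nanox.AI’s algorithm.

```python
# Rough sketch of the conventional Agatston-style CAC calculation, shown only to
# illustrate what "measuring coronary artery calcium" means clinically.

def density_weight(peak_hu):
    """Weight factor from a lesion's peak attenuation in Hounsfield units."""
    if peak_hu >= 400:
        return 4
    if peak_hu >= 300:
        return 3
    if peak_hu >= 200:
        return 2
    if peak_hu >= 130:
        return 1
    return 0  # below the calcium threshold

def agatston_score(lesions):
    """lesions: list of (area_mm2, peak_hu) tuples for calcified plaques."""
    return sum(area * density_weight(peak) for area, peak in lesions)

# hypothetical lesions detected on a chest CT
example = [(12.0, 250), (5.5, 410), (3.0, 150)]
print(agatston_score(example))  # 49.0 -> bucketed into mild/moderate/severe categories
```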

Humanities and Social Sciences Communications

This article explores the research question: ‘What are ChatGPT’s human-like traits as perceived by society?’ Thematic analyses of insights from 452 individuals worldwide yielded two categories of traits. Category 1 entails social traits, where ChatGPT embodies the social roles of ‘author’ (imitating human phrasing and paraphrasing practices) and ‘interactor’ (simulating human collaboration and emotion). Category 2 encompasses political traits, with ChatGPT assuming the political roles of ‘agent’ (emulating human cognition and identity) and ‘influencer’ (mimicking human diplomacy and consultation). When asked, ChatGPT confirmed the possession of these human-like traits (except for one trait). Thus, ChatGPT displays human-like qualities, humanising itself through the ‘game of algorithms’.

PhD student solves a mysterious algorithm in an ancient Sanskrit text after 2,500 years

These stylistic choices make the Aṣṭādhyāyī shorter and easier to memorize than it would be otherwise — some historians believe it was initially composed orally — but also incredibly dense. That density leads to rule conflicts, in which two rules may apply simultaneously to the same word yet produce different outcomes.

Pāṇini did provide a meta-rule to solve such conflicts. According to traditional scholarship, this meta-rule states that “in the event of a conflict between two rules of equal strength, the rule that comes later in the serial order of the Aṣṭādhyāyī wins.”

Seems simple enough. But when applied, this meta-rule yields many exceptions. To correct those exceptions, scholars have for centuries created their own meta-rules. However, those meta-rules yielded even more exceptions, which required the creation of additional meta-rules (meta-meta-rules?). Those meta-rules in turn created even more exceptions — and you see where this is going.
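To make the traditional meta-rule concrete, here is a minimal sketch of “later in serial order wins” as a conflict resolver. The rule numbers, the toy rules, and the notion of “applies” are simplified placeholders for illustration; they are not Pāṇini’s notation, and the excerpt above does not describe the PhD student’s actual solution.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    serial_number: int                  # position in the Astadhyayi's ordering
    name: str
    applies: Callable[[str], bool]      # does this rule match the word form?
    apply: Callable[[str], str]         # transformation the rule performs

def resolve_conflict(word: str, rules: list[Rule]) -> str:
    matching = [r for r in rules if r.applies(word)]
    if not matching:
        return word
    # traditional meta-rule: the rule that comes later in serial order wins
    winner = max(matching, key=lambda r: r.serial_number)
    return winner.apply(word)

# hypothetical toy rules purely for illustration
rules = [
    Rule(101, "rule A", lambda w: w.endswith("a"), lambda w: w + "s"),
    Rule(205, "rule B", lambda w: w.endswith("a"), lambda w: w[:-1] + "e"),
]
print(resolve_conflict("deva", rules))  # "deve" -- rule B wins by serial order
```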

How AI and Machine Learning Are Transforming Liver Disease Diagnosis and Treatment

AI can also help develop objective risk stratification scores, predict the course of disease or treatment outcomes in chronic liver disease (CLD) or liver cancer, facilitate easier and more successful liver transplantation, and develop quality metrics for hepatology.


Artificial Intelligence (AI) is an umbrella term that covers all computational processes aimed at mimicking and extending human intelligence for problem-solving and decision-making. It is based on algorithms, or arrays of mathematical formulae, that make up specific computational learning methods. Machine learning (ML) and deep learning (DL) apply such algorithms in progressively more complex ways to learn from data and predict outcomes for both familiar and new cases.


Hepatology largely depends on imaging, a field that AI can fully exploit. Machine learning is being pressed into service to extract rich information from imaging and clinical data to aid the non-invasive and accurate diagnosis of multiple liver conditions.
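As a hedged sketch of the kind of supervised imaging model this paragraph alludes to, the snippet below defines a tiny convolutional classifier over image patches. The architecture, class labels, and random stand-in data are all placeholders; it does not represent any specific published hepatology model.

```python
import torch
import torch.nn as nn

class TinyLiverCNN(nn.Module):
    def __init__(self, n_classes=3):   # e.g. healthy / steatosis / cirrhosis (illustrative labels)
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, n_classes)

    def forward(self, x):               # x: batch of 64x64 grayscale image patches
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = TinyLiverCNN()
dummy_scans = torch.randn(8, 1, 64, 64)   # stand-in for preprocessed CT/ultrasound patches
logits = model(dummy_scans)
print(logits.shape)                       # torch.Size([8, 3]) -- class scores per scan
```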

AI language models can exceed PNG and FLAC in lossless compression, says study

Effective compression is about finding patterns to make data smaller without losing information. When an algorithm or model can accurately guess the next piece of data in a sequence, it shows it’s good at spotting these patterns. This links the idea of making good guesses (which is what large language models like GPT-4 do very well) to achieving good compression.
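The sketch below illustrates that prediction-compression link: under an entropy coder such as arithmetic coding, a model that assigns probability p to the symbol that actually occurs pays about -log2(p) bits for it, so better predictions mean fewer bits. A simple character-frequency model stands in here for an LLM; this is an illustration of the principle, not the paper’s setup.

```python
import math
from collections import Counter

text = "the theory of compression is the theory of prediction " * 20

# Stand-in predictive model: unigram character frequencies estimated from the data.
counts = Counter(text)
total = len(text)
prob = {ch: c / total for ch, c in counts.items()}

# Ideal code length in bits if each character costs -log2 p(char),
# which arithmetic coding approaches in practice.
model_bits = sum(-math.log2(prob[ch]) for ch in text)
raw_bits = 8 * len(text)   # baseline: one byte per character

print(f"raw: {raw_bits} bits, model-coded: {model_bits:.0f} bits "
      f"({100 * model_bits / raw_bits:.1f}% of original size)")
# A stronger predictor (e.g., an LLM that models whole phrases) assigns higher
# probabilities to what actually comes next and therefore needs even fewer bits.
```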

In an arXiv research paper titled “Language Modeling Is Compression,” researchers detail their discovery that the DeepMind large language model (LLM) called Chinchilla 70B can perform lossless compression on image patches from the ImageNet image database to 43.4 percent of their original size, beating the PNG algorithm, which compressed the same data to 58.5 percent. For audio, Chinchilla compressed samples from the LibriSpeech audio data set to just 16.4 percent of their raw size, outdoing FLAC compression at 30.3 percent.

In this case, lower numbers in the results mean more compression is taking place, and lossless compression means that no data is lost in the process. It stands in contrast to a lossy technique like JPEG, which sheds some data and reconstructs it with approximations during decoding in order to significantly reduce file sizes.

Quantum Material Exhibits “Non-Local” Behavior That Mimics Brain Function

We often believe computers are more efficient than humans. After all, computers can solve a complex math equation in a moment and can also recall the name of that one actor we keep forgetting. However, human brains can process complicated layers of information quickly, accurately, and with almost no energy input: recognizing a face after only seeing it once, or instantly knowing the difference between a mountain and the ocean. These simple human tasks require enormous processing and energy input from computers, and even then the results come with varying degrees of accuracy.

Creating brain-like computers with minimal energy requirements would revolutionize nearly every aspect of modern life. Funded by the Department of Energy, Quantum Materials for Energy Efficient Neuromorphic Computing (Q-MEEN-C) — a nationwide consortium led by the University of California San Diego — has been at the forefront of this research.

UC San Diego Assistant Professor of Physics Alex Frañó is co-director of Q-MEEN-C and thinks of the center’s work in phases. In the first phase, he worked closely with President Emeritus of the University of California and Professor of Physics Robert Dynes, as well as Rutgers Professor of Engineering Shriram Ramanathan. Together, their teams succeeded in finding ways to create or mimic the properties of a single brain element (such as a neuron or synapse) in a quantum material.