
The era of AI-human hybrid intelligence

You hear a lot these days about the potential for impending doom as AI becomes ever smarter.

Indeed, big names are calling for caution: the futurist optimism of protagonists like Ray Kurzweil is outweighed by the concern expressed by Bill Gates, Elon Musk and Stephen Hawking. And Swedish philosopher Nick Bostrom’s scary thought experiments around what AI might lead to could well sustain a new strain of Nordic noir. There are, indeed, reasons to be concerned.

The fictional Hal’s refusal to open the pod bay doors in Kubrick’s 2001: A Space Odyssey seems a lot less like fiction than it did when the movie came out almost 50 years ago. Today, we have real reason to be concerned about the potential for autonomous drones making decisions about who to take out, or self-driving cars making a choice between hitting a roadside tree and hitting a child.

How the brain produces consciousness in ‘time slices’

EPFL scientists propose a new way of understanding how the brain processes unconscious information into consciousness. According to the model, consciousness arises only in time intervals of up to 400 milliseconds, with gaps of unconsciousness in between.

The driver ahead suddenly stops, and you find yourself stomping on your brakes before you even realize what is going on. We would call this a reflex, but the underlying reality is much more complex, touching on a debate that goes back centuries: Is consciousness a constant, uninterrupted stream or a series of discrete bits, like the 24 frames per second of a movie reel? Scientists from EPFL and the universities of Ulm and Zurich now put forward a new model of how the brain processes unconscious information, suggesting that consciousness arises only in intervals of up to 400 milliseconds, with no consciousness in between. The work is published in PLOS Biology.

Continuous or discrete?

Consciousness seems to work as a continuous stream: one image or sound or smell or touch smoothly follows the other, providing us with a continuous picture of the world around us. As far as we are concerned, sensory information seems to be continuously translated into conscious perception: we see objects move smoothly, we hear sounds continuously, and we smell and feel without interruption. However, another school of thought argues that our brain collects sensory information only at discrete time points, like a camera taking snapshots. Even though there is a growing body of evidence against “continuous” consciousness, it also looks like the “discrete” theory of snapshots is too simple to be true.
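
To make the contrast concrete, here is a minimal sketch of the two views. It is not taken from the EPFL study itself: only the 400-millisecond window comes from the model described above, while the stimulus, the time step and the per-window summary rule are assumptions chosen purely for illustration.

```python
import numpy as np

# Toy comparison of a "continuous" vs a "discrete" conscious report of a stimulus.
# Only the 400 ms window comes from the model described above; the stimulus,
# the 1 ms time step and the summary rule are arbitrary choices for illustration.

dt_ms = 1                                  # simulation time step (1 ms)
t = np.arange(0, 2000, dt_ms)              # 2 seconds of sensory input
stimulus = np.sin(2 * np.pi * t / 500.0)   # a smoothly changing signal

# Continuous view: the percept tracks the stimulus at every moment.
continuous_percept = stimulus.copy()

# Discrete view: unconscious processing accumulates within each window and a
# single conscious "snapshot" is produced per window of up to 400 ms.
window_ms = 400
discrete_percept = np.empty_like(stimulus)
for start in range(0, len(t), window_ms):
    window = stimulus[start:start + window_ms]
    # the consciously reported content is one summary value per window
    discrete_percept[start:start + window_ms] = window.mean()

print(continuous_percept[:3])   # changes sample by sample
print(discrete_percept[::400])  # changes at most once per 400 ms window
```

In the discrete version the reported content changes at most once per window, which is the snapshot intuition, while the processing that feeds each snapshot can still run continuously underneath.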

Scientists discover how the brain repurposes itself to learn scientific concepts

The human brain was initially used for basic survival tasks, such as staying safe and hunting and gathering. Yet, 200,000 years later, the same human brain is able to learn abstract concepts, like momentum, energy and gravity, which have only been formally defined in the last few centuries.

New research from Carnegie Mellon University has now uncovered how the brain is able to acquire brand-new types of ideas. Published in Psychological Science, scientists Robert Mason and Marcel Just used neural-decoding techniques developed at CMU to identify the specific physics concepts that advanced students recalled when prompted. The activation patterns recorded while the students thought about these physics concepts indicated that all of the students’ brains used the same ancient brain systems in the same way, and the patterns revealed how the new knowledge was formed: by repurposing existing neural systems.
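
Neural decoding of this kind is, at its core, pattern classification over activation data. The sketch below is illustrative only and is not the CMU pipeline: the data are synthetic, the voxel and trial counts are invented, and only the concept labels (momentum, energy, gravity) come from the article. It shows the basic logic of asking whether a classifier can recover the concept a person was thinking about from the activation pattern alone.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic stand-in for the real data: one activation vector (e.g. voxel
# values) per trial, labelled by the physics concept being recalled.
# Trial counts, voxel counts and noise level are invented for this sketch.
concepts = ["momentum", "energy", "gravity"]
n_trials_per_concept, n_voxels = 30, 200
X, y = [], []
for label in range(len(concepts)):
    prototype = rng.normal(size=n_voxels)                # concept-specific pattern
    for _ in range(n_trials_per_concept):
        X.append(prototype + rng.normal(size=n_voxels))  # noisy single trial
        y.append(label)
X, y = np.array(X), np.array(y)

# Decoding = asking whether a classifier can recover the concept from the
# activation pattern alone; cross-validated accuracy above chance (1/3 here)
# is the usual evidence that the patterns carry concept information.
clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean decoding accuracy: {scores.mean():.2f} (chance = {1 / len(concepts):.2f})")
```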

The findings could be used to improve science instruction.

Is the Universe a Simulation? Scientists Debate

Hmm… That would explain Alzheimer’s disease: it’d be like some unabashedly evil version of smartphone data caps!

Or not.

(wink)


NEW YORK — Is the universe just an enormous, fantastically complex simulation? If so, how could we find out, and what would that knowledge mean for humanity?

These were the big questions that a group of scientists, as well as one philosopher, tackled on April 5 during the 17th annual Isaac Asimov Debate here at the American Museum of Natural History. The event honors Asimov, the visionary science-fiction writer, by inviting experts in diverse fields to discuss pressing questions on the scientific frontiers.

The cognitive era: Whither the machine brain

My own prediction is that we will see singularity with humans first, via BMI/BI technology and other bio-computing technologies, before we see a machine brain operating at the level of a healthy, fully functional human brain.


Not since War of the Worlds hit the silver screen has the notion that machine intelligence will overtake human intelligence seemed more real. In this two-part series, the author examines the growing trend towards cognitive machines.

Second digital revolution

Many folks talk about the whole AI revolution, and indeed it does change some things and opens the door for new opportunities. But has it truly changed the underlying technology? No; AI still relies on existing digital technology. The real tech revolution will come in the form of quantum tech over the next 7 to 8 years, and it will change everything in our lives and in industry. Quantum will change everything that we know about technology, including devices, medical technologies, communications (including the net), security, e-currency, etc. https://lnkd.in/bJnS37r


If you were born in the 1970s or 1980s, you probably remember the Jetsons family. The Jetsons are to the future what the Flintstones are to the past. That futuristic lifestyle vision goes back several decades: self-driving vehicles, robotic home helpers and so on. What looked like a cartoon series built on prolific imagination seems somewhat more real today. Newly developed technologies are becoming available and connecting everything to the internet. This is the internet-of-things era.

These ‘things’ are not new. They are just standard devices – lights, garage doors, kitchen appliances, household appliances – equipped with a little intelligence. That intelligence is possible thanks to three emerging technologies: sensors to collect information from the surroundings; actuators to control something in the environment; and communication capabilities that allow devices to talk to each other.

Think of cars that park on their own or brake automatically to avoid a collision; smart assistants that notify you to leave early for a calendar appointment when there is heavy traffic en route; or a robotic vacuum cleaner that starts cleaning once everyone has left the house. This is the Jetsons’ kind of future.
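
The sense / decide / act / communicate pattern behind these examples can be sketched in a few lines of code. The toy robotic-vacuum class below is an illustration only; the class name, the presence check and the notification call are invented for this sketch and do not correspond to any particular vendor's API.

```python
import time
from dataclasses import dataclass


@dataclass
class VacuumBot:
    """Toy 'thing': a robotic vacuum wired up with the three ingredients above."""
    cleaning: bool = False

    # 1. Sensing: collect information from the surroundings.
    def occupants_detected(self) -> bool:
        return False  # placeholder for a real presence sensor (motion, geofencing, ...)

    # 2. Control: act on the device and its environment.
    def start_cleaning(self) -> None:
        self.cleaning = True

    # 3. Communication: talk to other devices or the owner.
    def notify(self, message: str) -> None:
        print(f"[vacuum -> phone] {message}")  # stand-in for a network call


def control_loop(bot: VacuumBot, poll_seconds: float = 0.1, polls: int = 3) -> None:
    """Poll the presence sensor and start cleaning once the house is empty."""
    for _ in range(polls):
        if not bot.occupants_detected() and not bot.cleaning:
            bot.start_cleaning()
            bot.notify("House is empty, starting to clean.")
        time.sleep(poll_seconds)


control_loop(VacuumBot())
```

The same loop structure generalizes to the other examples: swap the presence sensor for a traffic feed or a proximity radar, and the cleaning action for a notification or an automatic brake.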

Transcranial direct current stimulation can boost language comprehension: Stimulation of the brain’s left angular gyrus enhanced the comprehension of simple, two-word phrases

How the human brain processes the words we hear and constructs complex concepts is still somewhat of a mystery to the neuroscience community. Transcranial direct current stimulation (tDCS) can alter our language processing, allowing for faster comprehension of meaningful word combinations, according to new research from the department of Neurology at the Perelman School of Medicine at the University of Pennsylvania. The work is published in the Journal of Neuroscience.

“Integrating conceptual knowledge is one of the neural functions fundamental to human intelligence,” said the study’s first author Amy Price, a neuroscience graduate student at Penn. “For example, when we read or listen to a sentence, we need to combine, or integrate, the meaning of the words to understand the full idea of the sentence. We perform this process effortlessly on a daily basis but it is quite a complex process and little is known about the brain regions that support this ability.”

Semantic memory is our stored knowledge about the world, such as the meaning of words and objects. “We sought to understand how and in what part of the brain semantic representations are integrated into more complex ideas,” said senior author Roy Hamilton, MD, MS, an assistant professor in the departments of Neurology and Physical Medicine & Rehabilitation, and director of the Laboratory for Cognition and Neural Stimulation at Penn. Recent findings from functional MRI (fMRI) and magnetoencephalography (MEG) have suggested the angular gyrus, specifically the left angular gyrus, a region of the brain known to be involved in language, number processing, spatial cognition, memory retrieval and attention, as a potential hub for semantic memory integration.
