
A radically new view, articulated now by a number of digital philosophers, is that consciousness, quantum computational and non-local in nature, is resolutely computational, and yet has some “non-computable” properties. Consider this: the English language has 26 letters and about 1 million words, so how many books could possibly be written in English? If you were to build a hypothetical computer containing all the mass and energy of our Universe and ask it this question, that ultimate computer couldn’t count the exact number of all possible combinations of words into meaningful story-lines even in billions of years! Another example of the non-computability of combinatorics: if you were to be born and live your own life again and again in our Quantum Multiverse, you could live a googol (10^100) lives, but they would all be somewhat different: some drastically different from the life you’re living right now, some only slightly, never quite the same, and timeline-indeterminate.
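The combinatorial claim above can be checked with back-of-envelope arithmetic. The figures below are rough assumptions (a vocabulary of one million words, a 50,000-word novel, ~10^80 atoms in the observable Universe), but the conclusion is insensitive to them: the count of possible word sequences dwarfs any physical resource.

```python
# Rough assumptions, not exact figures:
VOCAB = 10**6            # roughly one million English words
BOOK_LENGTH = 50_000     # words in a typical novel
ATOMS_IN_UNIVERSE = 10**80

# Number of distinct word sequences of that length:
#   VOCAB**BOOK_LENGTH = (10**6)**50_000 = 10**300_000
exponent = 6 * BOOK_LENGTH
print(exponent)  # 300000 -- the answer has over 300,000 decimal digits

# Even raising the atom count to the thousandth power falls short:
print(VOCAB**BOOK_LENGTH > ATOMS_IN_UNIVERSE**1000)  # True
```

Almost all of those sequences are gibberish, of course; deciding which ones form "meaningful story-lines" is exactly the part no brute-force enumeration could ever finish.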

Another kind of non-computability is akin to fuzzy logic but based on pattern recognition. Deeper understanding arises when a conscious agent perceives numerous patterns in complex environments and analyzes that complexity from a multitude of perspectives. That is beautifully encapsulated by Isaiah Berlin’s quote: “To understand is to perceive patterns.” The ability to recognize patterns in chaos is not straightforwardly algorithmic but rather meta-algorithmic and yet, I’d argue, deeply computational. The types of non-computability I have just described may somehow relate to the non-computable element of quantum consciousness to which Penrose refers in his work.


Read more

Computer scientists at the University of Melbourne in Australia and the University of Toronto in Canada have developed an algorithm that is capable of writing poetry following the rules of rhyme and metre.

Using the rules of poetry and taking the metre into account, the AI algorithm weaves words together into meaningful sentences.

The AI was trained extensively on the rules it needed to follow to craft an acceptable poem; the dataset the researchers used to train it contains over 2,600 real sonnets.
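The paper's actual model is not reproduced here, but the core constraint (a line must fit a metrical template) can be sketched in a few lines. Everything below is a toy assumption: a hand-made lexicon with syllable counts, and a generator that assembles words until they sum to the ten syllables of an iambic-pentameter line.

```python
import random

# Hand-made toy lexicon: word -> syllable count (an assumption, not the
# researchers' dataset of 2,600 sonnets).
LEXICON = {
    "the": 1, "bright": 1, "moon": 1, "river": 2, "silent": 2,
    "wanders": 2, "beneath": 2, "eternal": 3, "morning": 2, "sky": 1,
}

def toy_line(target=10, rng=random):
    """Assemble words whose syllable counts sum exactly to `target`."""
    words, count = [], 0
    while count < target:
        # Only consider words that still fit in the remaining syllables.
        word = rng.choice([w for w, s in LEXICON.items()
                           if s <= target - count])
        words.append(word)
        count += LEXICON[word]
    return " ".join(words)

print(toy_line(rng=random.Random(0)))
```

A real system layers much more on top of this skeleton: stress patterns for the iambic beat, rhyme constraints across lines, and a language model to keep the output meaningful rather than a random word salad.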

Read more

Researchers trained the 165-pound humanoid robot to walk across narrow terrain using human-like control, perception, and planning algorithms. The video shows the robot, called Atlas, carefully moving across a balance beam using body control informed by LIDAR…


Researchers from the Institute for Human & Machine Cognition in Florida have created a robot that uses a planning algorithm to balance its way across an uneven path of cinder blocks.

Read more

3D printing is moving ever closer to gaining a true home in mainstream commercial applications, thanks to the impact the technology is having on consumer fashion products such as jewelry, footwear, and clothing. While 3D printed fashion was still considered to be more of a novelty a few years ago, efforts have been increasing to make it more common – even in the classroom. Additionally, the technology is helping to usher in a more sustainable and eco-friendly way of manufacturing garments…and designer Julia Daviy is helping to lead the charge.

In addition to designing clothes, Daviy is also an ecologist and clean technology industry manager, and uses 3D printing to make cruelty-free, zero-waste clothing. She believes that the technology will change how the world produces clothing, especially when it comes to some of the more problematic issues of garment manufacturing, such as animal exploitation, chemical pollution, energy consumption, and material waste.

“Our goal was never to demonstrate the viability of 3D printed clothing and leave things at that. We’ll have succeeded when beautiful, comfortable, ethically manufactured and environmentally friendly clothes are the standard,” Daviy stated. “The innovations we’ve made on the production and marketing side of the equation are just as important as the technological breakthroughs that have gotten us this far.”

Read more

We live in a world of wireless signals flowing around us and bouncing off our bodies. MIT researchers are now leveraging those signal reflections to provide scientists and caregivers with valuable insights into people’s behavior and health.

The system, called Marko, transmits a low-power radio-frequency (RF) signal into an environment. The signal will return to the system with certain changes if it has bounced off a moving human. Novel algorithms then analyze those changed reflections and associate them with specific individuals.

The system then traces each individual’s movement around a digital floor plan. Matching these movement patterns with other data can provide insights about how people interact with each other and the environment.
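One small piece of the pipeline described above, associating anonymous signal reflections with specific individuals, can be illustrated with a greedy nearest-neighbor tracker. This is only a sketch under invented data, not Marko's actual algorithm: each person's track is extended with whichever new (x, y) detection lies closest to the track's last known position.

```python
def extend_tracks(tracks, detections):
    """tracks: list of tracks, each a list of (x, y) positions.
    detections: unlabeled (x, y) points from the current time step.
    Greedily assigns each track the nearest unclaimed detection."""
    remaining = list(detections)
    for track in tracks:
        if not remaining:
            break
        last = track[-1]
        # Pick the detection closest to the track's last position.
        nearest = min(remaining,
                      key=lambda p: (p[0] - last[0])**2 + (p[1] - last[1])**2)
        track.append(nearest)
        remaining.remove(nearest)
    return tracks

# Two people on a floor plan; two new reflections arrive out of order:
tracks = [[(0.0, 0.0)], [(5.0, 5.0)]]
extend_tracks(tracks, [(5.2, 4.9), (0.1, 0.2)])
print(tracks)  # each track picks up the detection nearest to it
```

Real RF tracking must also handle people entering and leaving, crossing paths, and noisy reflections, which is where the "novel algorithms" mentioned above do their work.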

Read more

A machine learning algorithm can detect signs of anxiety and depression in the speech patterns of young children, potentially providing a fast and easy way of diagnosing conditions that are difficult to spot and often overlooked in young people, according to new research published in the Journal of Biomedical and Health Informatics.

Around one in five children suffers from anxiety or depression, collectively known as “internalizing disorders.” But because children under the age of eight can’t reliably articulate their emotional suffering, adults need to infer their mental state and recognise potential mental health problems. Waiting lists for appointments with psychologists, insurance issues, and parents’ failure to recognise the symptoms all contribute to children missing out on vital treatment.

“We need quick, objective tests to catch kids when they are suffering,” says Ellen McGinnis, a researcher at the University of Vermont Medical Center’s Vermont Center for Children, Youth and Families and lead author of the study. “The majority of kids under eight are undiagnosed.”
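The study's actual model and features are not reproduced here, but the general shape of such a screen can be sketched. The feature names and weights below are hypothetical stand-ins for the kinds of speech statistics a classifier might use (flattened pitch, long hesitations, slowed speech).

```python
def risk_score(features, weights=None):
    """Linear score over speech features; higher = more internalizing risk.
    The weights are illustrative assumptions, not fitted parameters."""
    weights = weights or {"pitch_variation": -1.2,   # flat affect raises risk
                          "pause_ratio": 0.8,        # long hesitations raise risk
                          "speech_rate": -0.5}       # slowed speech raises risk
    return sum(weights[name] * value for name, value in features.items())

# Hypothetical normalized features extracted from one child's recording:
child = {"pitch_variation": 0.2, "pause_ratio": 0.6, "speech_rate": 0.4}
flagged = risk_score(child) > 0.0
print(flagged)
```

A deployed screen would learn its weights from labeled recordings and report a calibrated probability rather than a raw threshold; the point of the sketch is only that a handful of objective acoustic measurements can feed a fast, automatic first-pass test.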

Read more

For most patients, a diagnosis of stage 4 non-small cell lung cancer comes with a dire prognosis. But for patients with specific mutations that cause the disease, there are potentially life-saving therapies.

The problem is that these mutations, known as ALK and EGFR, are not always identified in patients — meaning they never get the treatment.

A new study from the Fred Hutchinson Cancer Research Center in Seattle used machine learning to find these needle-in-a-haystack patients. The idea was to leverage cancer databases to see if patients were being tested for the mutations and receiving these personalized treatments.
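The study used machine learning over large cancer databases; the toy below is only a rule-based filter, included to show the kind of query being automated, with an invented record format. It flags stage-4 NSCLC patients whose records show no ALK or EGFR mutation test.

```python
def untested_patients(records):
    """Return ids of stage-4 NSCLC patients missing an ALK or EGFR test.
    The record schema here is a hypothetical illustration."""
    return [r["id"] for r in records
            if r["diagnosis"] == "NSCLC stage 4"
            and not ({"ALK", "EGFR"} <= set(r["tests"]))]

records = [
    {"id": 1, "diagnosis": "NSCLC stage 4", "tests": ["ALK", "EGFR"]},
    {"id": 2, "diagnosis": "NSCLC stage 4", "tests": []},
]
print(untested_patients(records))  # [2]
```

Real records are far messier (free-text notes, inconsistent coding), which is why the researchers needed machine learning rather than a simple filter like this one.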

Read more

So why not ask the neurons what they want to see?


That was the idea behind XDREAM, an algorithm dreamed up by a Harvard student named Will Xiao. Sets of those gray, formless images, 40 in all, were shown to watching monkeys, and the algorithm tweaked and shuffled those that provoked the strongest responses in chosen neurons to create a new generation of images. Xiao had previously trained XDREAM using 1.4 million real-world photos so that it would generate synthetic images with the properties of natural ones. Over 250 such generations, the synthetic images became more and more effective, until they were exciting their target neurons far more intensely than any natural image. “It was exciting to finally let a cell tell us what it’s encoding instead of having to guess,” says Ponce, who is now at Washington University in St. Louis.
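The generate-score-select-mutate loop described above is a classic evolutionary search, and its skeleton can be sketched in miniature. This is not the actual XDREAM algorithm: here the "images" are short vectors, and a simple stand-in function plays the role of the recorded neuron's firing rate, peaking when every entry is 0.7.

```python
import random

def neuron_response(img):
    """Stand-in scoring function for a recorded neuron's firing rate;
    responds most strongly when every entry equals 0.7."""
    return -sum((x - 0.7) ** 2 for x in img)

def evolve(pop_size=40, length=8, generations=250, rng=random):
    # Start from random candidates (cf. the 40 gray, formless images).
    pop = [[rng.random() for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=neuron_response, reverse=True)
        parents = pop[: pop_size // 4]          # keep the strongest responders
        # Next generation: mutated copies of the best candidates.
        pop = [[x + rng.gauss(0, 0.02) for x in rng.choice(parents)]
               for _ in range(pop_size)]
    return max(pop, key=neuron_response)

best = evolve(rng=random.Random(0))
# best's entries drift toward 0.7, the stand-in neuron's preferred stimulus
```

In the real experiment the scoring step is a live neural recording and the candidates are images from a trained generative network, but the selection pressure works the same way: the neuron itself steers the search toward whatever it encodes.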

Read more