
Going forward into our exponential future…


“By our very nature, we humans are linear thinkers. We evolved to estimate a distance from the predator or to the prey, and advanced mathematics is only a recent evolutionary addition. This is why it’s so difficult even for a modern man to grasp the power of exponentials. 40 steps in linear progression is just 40 steps away; 40 steps in exponential progression is a cool trillion (with a T) – it will take you 3 times from Earth to the Sun and back to Earth.” –Alex M. Vikoulov, The Syntellect Hypothesis: Five Paradigms of the Mind’s Evolution.

Today is a special day for me. My AI assistant Ava has set aside a few hours from my otherwise busy daily lineup to relive select childhood and adolescent memories, recreated in virtual reality with the help of a newly developed AI technique, ‘Re: Live’. Ava is my smart home assistant, too. I can rearrange the furniture in any room, for example, just by thinking, and Ava changes the digital landscape wallpaper by knowing my preferences and sensing my moods.

I still like to sleep the old-fashioned, natural way from time to time, even though sleep is now optional thanks to accelerated sleep simulation and other sleep-bypassing technologies. So, when I opt to sleep, I like falling asleep and waking up on a virtual cloud projected directly into my consciousness, as most VR experiences are streamed via optogenetics.

Read more

A startup called CogitAI has developed a platform that lets companies use reinforcement learning, the technique that gave AlphaGo mastery of the board game Go.

Gaining experience: AlphaGo, an AI program developed by DeepMind, taught itself to play Go by practicing. It’s practically impossible for a programmer to manually code in the best strategies for winning. Instead, reinforcement learning let the program figure out how to defeat the world’s best human players on its own.

Drug delivery: Reinforcement learning is still an experimental technology, but it is gaining a foothold in industry. DeepMind has talked of using it to optimize the performance of data centers and wind turbines. Amazon recently launched a reinforcement-learning platform, though it is aimed more at researchers and academics. CogitAI’s first commercial customers include companies working in robotics for drug manufacturing; its platform lets their robots figure out the optimal way to process drug orders.
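The article doesn’t show any of CogitAI’s or DeepMind’s code, so here is a minimal, hypothetical sketch of the idea behind reinforcement learning: tabular Q-learning on a toy corridor task. Every state, reward, and parameter below is an illustrative assumption, not anything from the companies’ platforms.

```python
import random

# Toy corridor: states 0..4, the agent starts at 0, reward 1.0 for reaching state 4.
# Actions: 0 = step left, 1 = step right. Nobody tells the agent which is better.
N_STATES, GOAL = 5, 4
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1  # learning rate, discount, exploration rate

# Q-table: the learned value of taking each action in each state.
Q = [[0.0, 0.0] for _ in range(N_STATES)]

for episode in range(500):
    state = 0
    while state != GOAL:
        # Epsilon-greedy: mostly exploit the best-known action, occasionally explore.
        if random.random() < EPSILON:
            action = random.randint(0, 1)
        else:
            action = 0 if Q[state][0] > Q[state][1] else 1

        next_state = max(0, state - 1) if action == 0 else min(GOAL, state + 1)
        reward = 1.0 if next_state == GOAL else 0.0

        # Q-learning update: nudge the estimate toward reward + discounted future value.
        Q[state][action] += ALPHA * (reward + GAMMA * max(Q[next_state]) - Q[state][action])
        state = next_state

print("Learned policy:", ["left" if q[0] > q[1] else "right" for q in Q[:GOAL]])
```

The agent is never told that stepping right is correct; the value table converges toward that policy purely through trial, error, and reward. The same principle, scaled up with deep neural networks and self-play, is what let AlphaGo improve beyond its programmers’ own strategies.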

Read more

“Monks don’t discuss the true meaning of the Heart Sutra to worshippers; they just read it like poetry,” Kohei Ogawa, a robotics professor at Osaka University who worked on the robot, told The Diplomat. “But this doesn’t work. The monks are like robots.”

Androgynous Android

The Mindar android also bends gender, according to The Diplomat, with its human-like face and chest designed to evoke both male and female characteristics.

Read more

Finding the best light-harvesting chemicals for use in solar cells can feel like searching for a needle in a haystack. Over the years, researchers have developed and tested thousands of different dyes and pigments to see how they absorb sunlight and convert it to electricity. Sorting through all of them requires an innovative approach.

Now, thanks to a study that combines the power of supercomputing with experimental methods, researchers at the U.S. Department of Energy’s (DOE) Argonne National Laboratory and the University of Cambridge in England have developed a novel “design to device” approach to identify promising materials for dye-sensitized solar cells (DSSCs). DSSCs can be manufactured with low-cost, scalable techniques, allowing them to reach competitive performance-to-price ratios.

The team, led by Argonne materials scientist Jacqueline Cole, who is also head of the Molecular Engineering group at the University of Cambridge’s Cavendish Laboratory, used the Theta supercomputer at the Argonne Leadership Computing Facility (ALCF) to pinpoint five high-performing, low-cost dye materials from a pool of nearly 10,000 candidates for fabrication and device testing. The ALCF is a DOE Office of Science User Facility.
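The study’s actual workflow pairs first-principles calculations on Theta with experimental validation, which can’t be reproduced here. But the screening pattern it describes, scoring a large candidate pool and keeping only the top few for lab testing, can be sketched in a few lines. The scoring function and dye identifiers below are placeholders, not Argonne’s code or data.

```python
import heapq
import random

def predicted_absorption(dye_id: int) -> float:
    """Placeholder for an expensive computed property (e.g. predicted
    light-harvesting efficiency) that the real workflow would obtain
    from first-principles calculations run on a supercomputer."""
    random.seed(dye_id)  # deterministic stand-in value per candidate
    return random.random()

# Hypothetical pool of ~10,000 candidate dyes, identified only by an index.
candidates = range(10_000)

# Score every candidate and keep the five best for fabrication and device testing.
top_five = heapq.nlargest(5, ((predicted_absorption(d), d) for d in candidates))

for score, dye in top_five:
    print(f"dye-{dye:05d}: predicted score {score:.3f}")
```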

Read more

Perceiving an object only visually (e.g., on a screen) or only by touching it can sometimes limit what we are able to infer about it. Human beings, however, have the innate ability to integrate visual and tactile stimuli, leveraging whatever sensory data is available to complete their daily tasks.

Researchers at the University of Liverpool have recently proposed a new framework to generate cross-modal sensory data, which could help to replicate both visual and tactile sensations in situations in which one of the two is not directly accessible. Their framework could, for instance, allow people to perceive objects on a screen (e.g. clothing items on e-commerce sites) both visually and tactually.

“In our daily experience, we can cognitively create a visualization of an object based on a tactile response, or a tactile response from viewing a surface’s texture,” Dr. Shan Luo, one of the researchers who carried out the study, told TechXplore. “This perceptual phenomenon, called synesthesia, in which the stimulation of one sense causes an involuntary reaction in one or more of the other senses, can be employed to make up an inaccessible sense. For instance, when one grasps an object, our vision will be obstructed by the hand, but a touch response will be generated to ‘see’ the corresponding features.”
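The article doesn’t describe the model itself; cross-modal generation of this kind is commonly framed as conditional image-to-image translation with an encoder-decoder network. The PyTorch sketch below, with assumed patch sizes, layer choices, and a plain reconstruction loss rather than whatever objective the paper actually uses, shows the general shape of a touch-to-vision generator.

```python
import torch
import torch.nn as nn

class TouchToVision(nn.Module):
    """Toy cross-modal generator: encode a 1-channel tactile patch,
    decode it into a 1-channel visual patch of the same size."""

    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(  # 1x32x32 -> 32x8x8
            nn.Conv2d(1, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 4, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(  # 32x8x8 -> 1x32x32
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, tactile):
        return self.decoder(self.encoder(tactile))

model = TouchToVision()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.L1Loss()  # pixel-wise reconstruction loss, a common baseline

# Random paired patches stand in for a real dataset of aligned touch/vision data.
tactile = torch.rand(8, 1, 32, 32)
visual = torch.rand(8, 1, 32, 32)

for step in range(100):
    pred = model(tactile)
    loss = loss_fn(pred, visual)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

A real system would train on paired tactile and visual recordings of the same surfaces, and would typically add an adversarial loss so that generated patches look realistic; the random tensors here only stand in for such a dataset.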

Read more