
Mind-body philosophy | solving the hard problem of consciousness.

Recent advances in science and technology have allowed us to reveal — and in some cases even alter — the innermost workings of the human body. With electron microscopes, we can see our DNA, the source code of life itself. With nanobots, we can send cameras throughout our bodies and deliver drugs directly into the areas where they are most needed. We are even using artificially intelligent robots to perform surgeries on ourselves with unprecedented precision and accuracy.

Materialism says that the cosmos, and all that it contains, is an objective physical reality. As a result, philosophers who subscribe to this school of thought assert that consciousness, and all that it entails, arises from material interactions. As such, the material world (our flesh, neurons, synapses, etc.) is what creates consciousness.

Idealism says that the universe is entirely subjective and that reality is something that is mentally constructed. In other words, consciousness is something that is immaterial and cannot be observed or measured empirically. Since consciousness is what creates the material world, according to this school of thought, it is unclear if we can ever truly know anything that is mind-independent and beyond our subjective experience.

Dualism essentially holds that mental phenomena are, in some respects, non-physical in nature. On this view, the mind and the body both exist, but they are distinct and separable.

Although most modern philosophers subscribe to the materialist view, determining, and ultimately understanding, the nature of human consciousness using an empirical methodology is a remarkably difficult task. The primary obstacle is that empirical science requires things to be measured objectively, and when it comes to consciousness, everything is subjective.

In recent decades, machine learning and deep learning algorithms have become increasingly advanced, so much so that they are now being introduced in a variety of real-world settings. In recent years, some computer scientists and electronics engineers have been exploring the development of an alternative type of artificial intelligence (AI) tool, known as the diffractive optical neural network.

Diffractive optical neural networks are deep neural networks based on diffractive optical technology (i.e., lenses or other components that can alter the phase of light propagating through them). While these networks have been found to achieve ultra-fast computing speeds and high energy efficiencies, typically they are very difficult to program and adapt to different use cases.
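To make the idea concrete, here is a minimal sketch of how a diffractive network's forward pass can be modeled in software: each layer applies a per-pixel phase delay to the optical field, and the light then propagates to the next layer. This is an illustration of the general principle, not the researchers' design; the grid size, wavelength, pixel pitch, and layer spacing below are illustrative assumptions.

```python
import numpy as np

# Sketch of a diffractive optical network forward pass on an N x N scalar field.
# The trainable parameters of such a network are the per-pixel phase delays of
# each layer; propagation between layers is approximated with the
# angular-spectrum method. All values below are illustrative assumptions.

N = 64                # pixels per side
wavelength = 532e-9   # metres
pixel_pitch = 8e-6    # metres
spacing = 0.03        # distance between layers, metres

def propagate(field, distance):
    """Free-space propagation of a complex field (angular-spectrum method)."""
    fx = np.fft.fftfreq(N, d=pixel_pitch)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    # propagating components pick up a phase factor; evanescent ones are dropped
    H = np.where(arg > 0, np.exp(2j * np.pi * distance * np.sqrt(np.abs(arg))), 0)
    return np.fft.ifft2(np.fft.fft2(field) * H)

def diffractive_layer(field, phase):
    """One layer: element-wise phase modulation, then propagation onward."""
    return propagate(field * np.exp(1j * phase), spacing)

# Forward pass through three layers with (here random, i.e. untrained) masks.
phases = [np.random.uniform(0, 2 * np.pi, (N, N)) for _ in range(3)]
field = np.ones((N, N), dtype=complex)        # plane-wave input
for phase in phases:
    field = diffractive_layer(field, phase)
intensity = np.abs(field) ** 2                # a detector measures intensity
```

In a trainable version of this sketch, the phase masks would be the learnable parameters, optimized so that the output intensity pattern encodes the desired result; the difficulty mentioned above is that, once fabricated in hardware, those masks are hard to change.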

Researchers at Southeast University, Peking University and Pazhou Laboratory in China have recently developed a diffractive deep neural network that can be easily programmed to complete different tasks. Their network, introduced in a paper published in Nature Electronics, is based on a flexible, multi-layer array.

Atomic clocks are the best sensors mankind has ever built. Today, they can be found in national standards institutes or satellites of navigation systems. Scientists all over the world are working to further optimize the precision of these clocks. Now, a research group led by Peter Zoller, a theorist from Innsbruck, Austria, has developed a new concept that can be used to operate sensors with even greater precision irrespective of which technical platform is used to make the sensor. “We answer the question of how precise a sensor can be with existing control capabilities, and give a recipe for how this can be achieved,” explain Denis Vasilyev and Raphael Kaubrügger from Peter Zoller’s group at the Institute of Quantum Optics and Quantum Information at the Austrian Academy of Sciences in Innsbruck.

For this purpose, the physicists use a method from quantum information processing: Variational quantum algorithms describe a circuit of quantum gates that depends on free parameters. Through optimization routines, the sensor autonomously finds the best settings for an optimal result. “We applied this technique to a problem from metrology—the science of measurement,” Vasilyev and Kaubrügger explain. “This is exciting because, historically, advances in quantum physics were motivated by metrology, and quantum information processing in turn emerged from that. So, we’ve come full circle here,” Peter Zoller says. With the new approach, scientists can optimize quantum sensors to the point where they achieve the best possible precision technically permissible.
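As a rough illustration of the variational idea only (a toy single-qubit example, not the Innsbruck protocol, which concerns entangled many-atom sensors), the sketch below prepares a probe state with two free rotation angles, imprints a phase to be sensed, and lets a classical optimizer tune the angles to minimize the estimated phase uncertainty.

```python
import numpy as np
from scipy.optimize import minimize

# Toy variational sensing loop: a single-qubit probe is prepared by two
# rotations with free angles, a phase phi is imprinted, and a classical
# optimizer tunes the angles to minimize the phase uncertainty predicted by
# standard error propagation on an X measurement. Purely illustrative.

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def rot(axis, angle):
    """Single-qubit rotation exp(-i * angle/2 * axis)."""
    return np.cos(angle / 2) * np.eye(2) - 1j * np.sin(angle / 2) * axis

def phase_uncertainty(params, phi=0.1):
    """(Delta phi)^2 = Var(X) / |d<X>/dphi|^2 for the prepared probe."""
    a, b = params
    probe = rot(Y, b) @ rot(X, a) @ np.array([1, 0], dtype=complex)

    def exp_x(p):
        psi = rot(Z, p) @ probe                  # the phase to be sensed
        return np.real(np.conj(psi) @ X @ psi)

    e = exp_x(phi)
    var_x = 1.0 - e**2                           # X has eigenvalues +/-1
    slope = (exp_x(phi + 1e-6) - e) / 1e-6       # numerical d<X>/dphi
    return var_x / (slope**2 + 1e-12)

result = minimize(phase_uncertainty, x0=[0.4, 0.3], method="Nelder-Mead")
print("optimal angles:", result.x)
print("predicted phase variance:", result.fun)
```

The real protocols run this kind of optimization over circuits acting on many entangled atoms, which is where the precision gains beyond this single-qubit toy example come from.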

Your appliances, car and home are designed to make your life easier and automate tasks you perform daily: switch lights on and off when you enter and exit a room, remind you that your tomatoes are about to go bad, personalize the temperature of the house depending on the weather and preferences of each person in the household.

To do their magic, these devices need an internet connection to reach cloud services and correlate data. Without internet access, your smart thermostat can collect data about you, but it doesn’t know what the weather forecast is, and it isn’t powerful enough to process all of that information and decide what to do.

But it’s not just the things in your home that are communicating over the internet. Workplaces, malls and cities are also becoming smarter, and the smart devices in those places have similar requirements. In fact, the Internet of Things (IoT) is already widely used in transport and logistics, agriculture and farming, and industry automation. There were around 22 billion internet-connected devices in use around the world in 2018, and the number is projected to grow to over 50 billion by 2030.

How to robotically build a human habitat in space…

Happening now.


Accelerate the accessibility and commercialization of cislunar space through cost-effective, habitable, scalable infrastructure.

A talk with Sebastian Asprella, CEO of ThinkOrbital, a commercial space-platform developer with a mission to accelerate the commercialization of cislunar space, focusing on on-orbit servicing, assembly and manufacturing technologies. Their flagship space platform, the Orb2, is designed for a single-launch, on-orbit assembly model, capable of delivering an internal spherical volume of up to 4,000 m³.

Kinova, a Canadian company that specializes in robotic arms, is launching Link 6, a new-generation industrial robot designed for all businesses looking to benefit from automation.

The Link 6 collaborative robot features automation solutions that enable greater daily efficiency while improving the quality and consistency of production results. Kinova’s newest robot helps you start producing faster thanks to a rich interface on its wrist, feed-through of power and data, optional Gigabit Ethernet adapter, and optional wrist vision module.

The company says its Link 6 controller provides the highest processing power and memory capacity on the market, making it ready to use with the AI solutions of the future while keeping the controller compact. The Link 6 robotic arm is designed with every user in mind, from an experienced industrial integrator to an operator with no particular robotics skills.

Translator, a Microsoft Azure Cognitive Service, is adopting Z-code Mixture of Experts models, a breakthrough AI technology that significantly improves the quality of production translation models. As a component of Microsoft’s larger XYZ-code initiative to combine AI models for text, vision, audio, and language, Z-code supports the creation of AI systems that can speak, see, hear, and understand. This effort is a part of Azure AI and Project Turing, focusing on building multilingual, large-scale language models that support various production teams. Translator is using NVIDIA GPUs and Triton Inference Server to deploy and scale these models efficiently for high-performance inference. Translator is the first machine translation provider to introduce this technology live for customers.

Z-code MoE boosts efficiency and quality

Z-code models utilize a new architecture called Mixture of Experts (MoE), where different parts of the models can learn different tasks. The models learn to translate between multiple languages at the same time. The Z-code MoE model utilizes more parameters while dynamically selecting which parameters to use for a given input. This enables the model to specialize a subset of the parameters (experts) during training. At runtime, the model uses the relevant experts for the task, which is more computationally efficient than utilizing all of the model’s parameters.
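The sketch below illustrates the general Mixture-of-Experts idea with top-k gating. It is not Microsoft's production Z-code architecture; the layer sizes, expert count, and routing scheme are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Minimal Mixture-of-Experts feed-forward layer with top-k gating: a small
# routing network scores the experts for each token, and only the top-k
# experts are evaluated for that token. Sizes are illustrative.

class MoELayer(nn.Module):
    def __init__(self, d_model=512, d_hidden=2048, num_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.gate = nn.Linear(d_model, num_experts)       # routing network
        self.experts = nn.ModuleList(
            nn.Sequential(
                nn.Linear(d_model, d_hidden),
                nn.ReLU(),
                nn.Linear(d_hidden, d_model),
            )
            for _ in range(num_experts)
        )

    def forward(self, x):                                  # x: (tokens, d_model)
        scores = F.softmax(self.gate(x), dim=-1)           # (tokens, num_experts)
        top_w, top_idx = scores.topk(self.top_k, dim=-1)   # pick k experts/token
        top_w = top_w / top_w.sum(dim=-1, keepdim=True)    # renormalize weights
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = top_idx[:, slot] == e               # tokens routed to e
                if mask.any():
                    out[mask] += top_w[mask, slot, None] * expert(x[mask])
        return out

tokens = torch.randn(16, 512)          # 16 token embeddings
layer = MoELayer()
print(layer(tokens).shape)             # torch.Size([16, 512])
```

Because each token only activates its top-k experts, a layer like this can hold many more parameters than a dense feed-forward block while keeping the per-token compute roughly constant, which is the efficiency argument made above.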

Black holes with masses equivalent to millions of suns do put a brake on the birth of new stars, say astronomers. Using machine learning and three state-of-the-art simulations to back up results from a large sky survey, researchers from the University of Cambridge have resolved a 20-year-long debate on the formation of stars.

Star formation in galaxies has long been a focal point of astronomy research. Decades of successful observations and theoretical modeling have given us a good understanding of how gas collapses to form new stars, both in and beyond our own Milky Way. However, thanks to all-sky observing programs like the Sloan Digital Sky Survey (SDSS), astronomers realized that not all galaxies in the local Universe are actively star-forming—there exists an abundant population of “quiescent” objects which form stars at significantly lower rates.

The question of what stops star formation in galaxies remains the biggest unknown in our understanding of galaxy evolution, debated over the past 20 years. Joanna Piotrowska and her team at the Kavli Institute for Cosmology set up an experiment to find out what might be responsible.

The six-foot drone is made by ZALA Aero, a subsidiary of the famed Russian arms manufacturer Kalashnikov. After being fired from a portable launcher, the KUB-BLA can loiter over a target area for up to half an hour, flying at speeds of around 80 mph.

Once it has recognised a suitable target, it deliberately crashes into it, detonating its seven-pound high-explosive payload.