
The manipulation of electromagnetic waves and information has become an important part of our everyday lives. Intelligent metasurfaces have emerged as smart platforms for automating the control of wave-information-matter interactions without manual intervention. They evolved from engineered composite materials, including metamaterials and metasurfaces. As a community, we have seen significant progress in the development of metamaterials and metasurfaces of various forms and properties.

In a paper published in the journal eLight on May 6, 2022, Professor Tie Jun Cui of Southeast University and Professor Lianlin Li of Peking University led a research team in a review of intelligent metasurfaces. “Intelligent metasurfaces: Control, Communication and Computing” surveyed the development of the field with an eye to the future.

This field has refreshed human insights into many fundamental laws and has unlocked many novel devices and systems, such as cloaking, tunneling, and holograms. Conventional structure-alone or passive metasurfaces have moved towards intelligent metasurfaces through the integration of algorithms and nonlinear materials (or active devices).

The latest “machine scientist” algorithms can take in data on dark matter, dividing cells, turbulence, and other situations too complicated for humans to grasp, and produce an equation capturing the essence of what’s going on.


Despite rediscovering Kepler’s third law and other textbook classics, BACON remained something of a curiosity in an era of limited computing power. Researchers still had to analyze most data sets by hand, or eventually with Excel-like software that found the best fit for a simple data set when given a specific class of equation. The notion that an algorithm could find the correct model for describing any data set lay dormant until 2009, when Lipson and Michael Schmidt, roboticists then at Cornell University, developed an algorithm called Eureqa.

Their main goal had been to build a machine that could boil down expansive data sets with column after column of variables to an equation involving the few variables that actually matter. “The equation might end up having four variables, but you don’t know in advance which ones,” Lipson said. “You throw at it everything and the kitchen sink. Maybe the weather is important. Maybe the number of dentists per square mile is important.”

One persistent hurdle to wrangling numerous variables has been finding an efficient way to guess new equations over and over. Researchers say you also need the flexibility to try out (and recover from) potential dead ends. When the algorithm can jump from a line to a parabola, or add a sinusoidal ripple, its ability to hit as many data points as possible might get worse before it gets better. To overcome this and other challenges, in 1992 the computer scientist John Koza proposed “genetic algorithms,” which introduce random “mutations” into equations and test the mutant equations against the data. Over many trials, initially useless features either evolve potent functionality or wither away.
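The mutate-and-select idea behind genetic algorithms can be sketched in a few lines. The snippet below is a deliberately minimal illustration, not Koza's method or Eureqa's actual search: it fits the coefficients of a candidate quadratic to data generated from a hidden law (y = 3x², an assumed example), mutating the candidate at random and keeping only mutants that fit the data better.

```python
import random

random.seed(0)

# data drawn from a hidden law the search must rediscover: y = 3*x**2
xs = [x / 10 for x in range(-20, 21)]
ys = [3 * x * x for x in xs]

def mse(coeffs):
    """Mean squared error of the candidate a*x**2 + b*x + c against the data."""
    a, b, c = coeffs
    return sum((a * x * x + b * x + c - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

# mutate-and-select: perturb the best candidate, keep the mutant only if it fits better
best = [0.0, 0.0, 0.0]
best_err = mse(best)
start_err = best_err
for _ in range(2000):
    mutant = [c + random.gauss(0, 0.1) for c in best]
    err = mse(mutant)
    if err < best_err:  # useless mutations "wither away"; useful ones survive
        best, best_err = mutant, err

print([round(c, 1) for c in best], round(best_err, 4))
```

Real symbolic-regression systems also mutate the *structure* of the equation (swapping operators, adding terms), which is what lets them jump from a line to a parabola; this sketch only mutates coefficients to keep the selection loop visible.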

It would start with scanning and reverse engineering the brains of rats, crows, pigs, and chimps, and end on the human brain. Aim for completion by 12/31/2025. Set up teams to run brain scans 24/7/365 if we need to, and partner with every major neuroscience lab in the world.


If artificial intelligence is intended to resemble a brain, with networks of artificial neurons substituting for real cells, then what would happen if you compared the activities in deep learning algorithms to those in a human brain? Last week, researchers from Meta AI announced that they would be partnering with neuroimaging center Neurospin (CEA) and INRIA to try to do just that.

Through this collaboration, they’re planning to analyze human brain activity and deep learning algorithms trained on language or speech tasks in response to the same written or spoken texts. In theory, it could decode how both human brains and artificial ones find meaning in language.

By comparing scans of human brains while a person is actively reading, speaking, or listening with deep learning algorithms given the same set of words and sentences to decipher, researchers hope to find similarities as well as key structural and behavioral differences between brain biology and artificial networks. The research could help explain why humans process language much more efficiently than machines.
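A common way to quantify "similarity" between a brain response and a model's activations is simple correlation across matched stimuli. The sketch below is an illustrative assumption, not Meta AI's actual analysis pipeline: it computes a Pearson correlation between two hypothetical per-word response profiles (e.g., an averaged fMRI signal versus a model-layer activation) for the same five words.

```python
import math

def pearson(x, y):
    """Pearson correlation between two equal-length response profiles."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# hypothetical responses to the same five words (illustrative numbers only)
brain = [0.8, 0.1, 0.5, 0.9, 0.3]   # e.g., averaged fMRI voxel signal
model = [0.7, 0.2, 0.4, 1.0, 0.2]   # e.g., one layer's activation magnitude

print(round(pearson(brain, model), 2))  # close to 1 means similar profiles
```

In practice such comparisons are done over many voxels, layers, and stimuli (representational similarity analysis), but the underlying question, "do the two systems respond in correlated ways to the same input?", is the same.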

Quantum machine learning is a field of study that investigates the interaction of concepts from quantum computing with machine learning.

For example, we might ask whether quantum computers can reduce the time it takes to train or evaluate a machine learning model. Conversely, we can use machine learning approaches to discover quantum error-correcting codes, estimate the properties of quantum systems, and design novel quantum algorithms.
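A concrete building block shared by many quantum machine learning schemes is a variational circuit: a parameterized gate whose parameter is tuned by classical gradient descent on a measured expectation value. The single-qubit sketch below is a toy illustration (not any specific algorithm from the text): an RY(θ) rotation on |0⟩ gives ⟨Z⟩ = cos θ, and the parameter-shift rule supplies the exact gradient.

```python
import math

def expval_z(theta):
    # RY(theta)|0> = [cos(theta/2), sin(theta/2)]; <Z> = cos^2 - sin^2 = cos(theta)
    a, b = math.cos(theta / 2), math.sin(theta / 2)
    return a * a - b * b

def grad(theta):
    # parameter-shift rule: dE/dtheta = (E(theta + pi/2) - E(theta - pi/2)) / 2
    s = math.pi / 2
    return 0.5 * (expval_z(theta + s) - expval_z(theta - s))

# classical optimizer loop: minimize the measured expectation value
theta = 0.5
for _ in range(100):
    theta -= 0.4 * grad(theta)

print(round(expval_z(theta), 3))  # converges toward the minimum, -1
```

On real hardware, `expval_z` would be estimated from repeated measurements rather than computed exactly, but the hybrid quantum-classical training loop has this same shape.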

Interactive tools that allow online media users to navigate, save and customize graphs and charts may help them make better sense of the deluge of data that is available online, according to a team of researchers. These tools may help users identify personally relevant information, and check on misinformation, they added.

In a study advancing the concept of “news informatics,” which provides news in the form of data rather than stories, the researchers reported that people found sites offering certain interactive tools—such as modality, message and source interactivity tools—to visualize and manipulate data more engaging than sites without them. Modality interactivity includes tools to interact with the content, such as hyperlinks and zoom-ins, while message interactivity focuses on how the users exchange messages with the site. Source interactivity allows users to tailor the information to their individual needs and contribute their own content to the site.

However, more was not always better, according to S. Shyam Sundar, James P. Jimirro Professor of Media Effects in the Donald P. Bellisario College of Communications and co-director of the Media Effects Research Laboratory at Penn State. The user’s experience depended on how these tools were combined and how involved the user was in the topic, he said.

Consciousness defines our existence. It is, in a sense, all we really have, all we really are. The nature of consciousness has been pondered in many ways, in many cultures, for many years. But we still can’t quite fathom it.


Consciousness is, some say, all-encompassing, comprising reality itself, the material world a mere illusion. Others say consciousness is the illusion, without any real sense of phenomenal experience, or conscious control. According to this view we are, as TH Huxley bleakly said, ‘merely helpless spectators, along for the ride’. Then, there are those who see the brain as a computer. Brain functions have historically been compared to contemporary information technologies, from the ancient Greek idea of memory as a ‘seal ring’ in wax, to telegraph switching circuits, holograms and computers. Neuroscientists, philosophers, and artificial intelligence (AI) proponents liken the brain to a complex computer of simple algorithmic neurons, connected by variable strength synapses. These processes may be suitable for non-conscious ‘auto-pilot’ functions, but can’t account for consciousness.

Finally, there are those who take consciousness as fundamental, as connected somehow to the fine-scale structure and physics of the universe. This includes, for example, Roger Penrose’s view that consciousness is linked to the Objective Reduction process (the ‘collapse of the quantum wavefunction’), an activity on the edge between quantum and classical realms. Some see such connections to fundamental physics as spiritual, as a connection to others and to the universe; others see them as proof that consciousness is a fundamental feature of reality, one that developed long before life itself.

Simon Portegies Zwart, an astrophysicist at Leiden University in the Netherlands, says more efficient coding is vital for making computing greener. For the mathematician and physicist Loïc Lannelongue, the first step is for computer modellers to become more aware of their environmental impacts, which vary significantly depending on the energy mix of the country hosting the supercomputer. Lannelongue, who is based at the University of Cambridge, UK, has developed Green Algorithms, an online tool that enables researchers to estimate the carbon footprint of their computing projects.
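The arithmetic behind such an estimate is straightforward: energy used (hardware power × runtime, scaled by the data centre's overhead) multiplied by the grid's carbon intensity. The sketch below is a toy calculation in the spirit of Green Algorithms; the numbers and the formula's simplifications are illustrative assumptions, not the tool's actual coefficients.

```python
def carbon_footprint_g(runtime_h, power_draw_w, pue, carbon_intensity_g_per_kwh):
    """Grams of CO2e for a computing job.

    runtime_h: wall-clock hours; power_draw_w: hardware power draw in watts;
    pue: data-centre power usage effectiveness (overhead multiplier, >= 1);
    carbon_intensity_g_per_kwh: grid emissions per kWh (varies by country).
    """
    energy_kwh = runtime_h * power_draw_w / 1000 * pue
    return energy_kwh * carbon_intensity_g_per_kwh

# illustrative: 48 h on a ~250 W node, PUE 1.6, a ~230 gCO2e/kWh grid mix
footprint = carbon_footprint_g(48, 250, 1.6, 230)
print(round(footprint), "g CO2e")
```

The same job run on a low-carbon grid (say, ~50 gCO2e/kWh) would emit several times less, which is exactly the country-dependence Lannelongue highlights.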

Decentralized finance is built on blockchain technology, an immutable system that organizes data into blocks that are chained together and stored in hundreds of thousands of nodes or computers belonging to other members of the network.

These nodes communicate with one another (peer-to-peer), exchanging information to ensure that they’re all up-to-date and validating transactions, usually through proof-of-work or proof-of-stake. The first term is used when a member of the network is required to solve an arbitrary mathematical puzzle to add a block to the blockchain, while proof-of-stake is when users set aside some cryptocurrency as collateral, giving them a chance to be selected at random as a validator.
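The "arbitrary mathematical puzzle" in proof-of-work is usually a hash-search: find a nonce that makes the block's hash start with a required number of zeros. The sketch below is a toy miner, not any real chain's consensus code; the block data and difficulty are illustrative.

```python
import hashlib

def mine(block_data: str, difficulty: int = 4):
    """Search for a nonce whose SHA-256 hash of (data + nonce)
    starts with `difficulty` hex zeros -- a toy proof-of-work puzzle."""
    prefix = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce, digest
        nonce += 1  # guessing is the only strategy: that's the "work"

nonce, digest = mine("tx: alice -> bob 1.0")
print(nonce, digest[:12])
```

Checking a proposed solution takes one hash, while finding it takes on average 16^difficulty tries; that asymmetry is what lets every node cheaply verify a block that was expensive to mine. Proof-of-stake drops the puzzle entirely and instead selects a validator at random, weighted by the cryptocurrency they have locked up as collateral.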

To encourage people to help keep the system running, those who are selected to be validators are given cryptocurrency as a reward for verifying transactions. This process is popularly known as mining, and it has not only helped remove central entities like banks from the equation but has also allowed DeFi to open up more opportunities: ways for ordinary members of the network to make a profit that, in traditional finance, are only offered to large organizations. And by using network validators, DeFi has also been able to cut the costs that intermediaries charge, so that management fees don’t eat away a significant part of investors’ returns.