
A new map of dark matter made using artificial intelligence reveals hidden filaments of the invisible stuff bridging galaxies.

The map focuses on the local universe — the neighborhood surrounding the Milky Way. Despite being close by, the local universe is difficult to map because it’s chock full of complex structures made of visible matter, said Donghui Jeong, an astrophysicist at Pennsylvania State University and the lead author of the new research.

“We have to reverse engineer to know where dark matter is by looking at galaxies,” Jeong told Live Science.
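To make that “reverse engineering” idea concrete, here is a toy sketch of the general approach: train a model on simulated data where the dark matter distribution is known, then apply it to observed galaxies. Every feature and number below is an illustrative placeholder, not the team’s actual pipeline.

```python
# Toy sketch: learn a mapping from observable galaxy properties to the local
# dark matter density using simulated data (where the truth is known), then
# apply it to "observed" galaxies. Features and coefficients are invented.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Fake simulation catalogue: per-galaxy observables -> known dark matter density.
n_sim = 5000
galaxy_features = rng.normal(size=(n_sim, 3))       # e.g. luminosity, velocity, neighbour count
true_dm_density = (1.5 * galaxy_features[:, 2]      # density correlates with clustering
                   + 0.5 * galaxy_features[:, 0]
                   + rng.normal(scale=0.1, size=n_sim))

model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
model.fit(galaxy_features, true_dm_density)

# Apply the trained model to a handful of "observed" galaxies.
observed = rng.normal(size=(5, 3))
print(model.predict(observed))  # inferred dark matter density at each galaxy
```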

I think we should. If it is corrupt or makes mistakes, it will at least be correctable.


LONDON — A study has found that most Europeans would like to see some of their members of parliament replaced by algorithms.

Researchers at IE University’s Center for the Governance of Change asked 2,769 people from 11 countries worldwide how they would feel about reducing the number of national parliamentarians in their country and giving those seats to an AI that would have access to their data.

The results, published Thursday, showed that despite AI’s obvious limitations, 51% of Europeans said they were in favor of such a move.

Weird now, but I do think most people will want humanoid robots that fake emotions to some degree, and at the far end there will be people who want them to mimic humans exactly.


Columbia Engineering researchers use AI to teach robots to make appropriate reactive human facial expressions, an ability that could build trust between humans and their robotic co-workers and care-givers. (See video below.)

While our facial expressions play a huge role in building trust, most robots still sport the blank and static visage of a professional poker player.

With the increasing use of robots in locations where robots and humans need to work closely together, from nursing homes to warehouses and factories, the need for a more responsive, facially realistic robot is growing more urgent.
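One rough way to picture how such a system might work, purely as a sketch and not Columbia’s actual method: learn a regression from tracked human facial landmarks to the robot’s own actuator angles, using paired examples. The landmark count, actuator count, and training data below are all assumptions.

```python
# Toy sketch: regress from tracked human face landmarks to robot servo angles
# so the robot can produce a reactive expression. All data here is synthetic.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)

# Fake paired training data: 68 (x, y) landmarks per frame -> 10 facial actuators.
n_examples = 2000
landmarks = rng.uniform(size=(n_examples, 68 * 2))
mixing = rng.normal(scale=0.05, size=(68 * 2, 10))   # placeholder landmark-to-servo mapping
servo_targets = landmarks @ mixing

mimic = MLPRegressor(hidden_layer_sizes=(128,), max_iter=300, random_state=1)
mimic.fit(landmarks, servo_targets)

def react(observed_landmarks):
    """Return servo angles approximating the observed human expression."""
    return mimic.predict(observed_landmarks.reshape(1, -1))[0]

print(react(rng.uniform(size=68 * 2)))
```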

Scientists say the system could be used to find ‘hidden gems’ of research and guide research funding allocations.


An artificial intelligence system trained on almost 40 years of the scientific literature correctly identified 19 out of 20 research papers that have had the greatest scientific impact on biotechnology – and has selected 50 recent papers it predicts will be among the ‘top 5%’ of biotechnology papers in the future.1

Scientists say the system could be used to find ‘hidden gems’ of research overlooked by other methods, and even to guide funding decisions so that money is most likely to flow to promising research.

But it’s sparked outrage among some members of the scientific community, who claim it will entrench existing biases.
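Setting the controversy aside, the general recipe is straightforward to sketch: train a classifier on historical papers whose long-term impact is now known, then rank new papers by their predicted probability of landing in the top 5%. The features and labels below are invented for illustration and are not the published system’s inputs.

```python
# Toy sketch: classify whether a paper will end up in the top 5% by impact,
# using early-career features. Features, weights, and labels are synthetic.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(2)

n_papers = 10000
# e.g. early citations, author track record, venue, reference recency (placeholders)
features = rng.normal(size=(n_papers, 4))
impact = features @ np.array([0.8, 0.5, 0.6, 0.2]) + rng.normal(scale=0.5, size=n_papers)
top5 = (impact > np.quantile(impact, 0.95)).astype(int)   # "high impact" label, known in hindsight

clf = GradientBoostingClassifier(random_state=2).fit(features, top5)

# Rank recent, still-unlabelled papers by predicted probability of being top-5%.
recent = rng.normal(size=(50, 4))
ranking = np.argsort(clf.predict_proba(recent)[:, 1])[::-1]
print(ranking[:10])  # indices of the 10 most promising recent papers
```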

Between the rolled eyes, shrugged shoulders, jazz hands and warbling vocal inflection, it’s not hard to tell when someone’s being sarcastic as they’re giving you the business face to face. Online, however, you’re going to need that SpongeBob meme and a liberal application of the shift key to get your contradictory point across. Lucky for us netizens, DARPA’s Information Innovation Office (I2O) has collaborated with researchers from the University of Central Florida to develop a deep learning AI capable of understanding written sarcasm with a startling degree of accuracy.

“With the high velocity and volume of social media data, companies rely on tools to analyze data and to provide customer service. These tools perform tasks such as content management, sentiment analysis, and extraction of relevant messages for the company’s customer service representatives to respond to,” UCF Adjunct Professor of Industrial Engineering and Management Systems, Ivan Garibay, told Engadget via email. “However, these tools lack the sophistication to decipher more nuanced forms of language such as sarcasm or humor, in which the meaning of a message is not always obvious and explicit. This imposes an extra burden on the social media team, which is already inundated with customer messages to identify these messages and respond appropriately.”

As they explain in a study published in the journal Entropy, Garibay and UCF PhD student Ramya Akula have built “an interpretable deep learning model using multi-head self-attention and gated recurrent units. The multi-head self-attention module aids in identifying crucial sarcastic cue-words from the input, and the recurrent units learn long-range dependencies between these cue-words to better classify the input text.”
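A minimal PyTorch sketch of that kind of architecture, with placeholder hyperparameters rather than the paper’s actual values: token embeddings feed a multi-head self-attention layer (whose attention weights can be inspected for cue-words), and a GRU over the attended sequence feeds a sarcastic-or-not classifier.

```python
# Sketch of a self-attention + GRU sarcasm classifier. Vocab size, dimensions,
# and head count are placeholders, not the published model's settings.
import torch
import torch.nn as nn

class SarcasmDetector(nn.Module):
    def __init__(self, vocab_size=30000, embed_dim=128, num_heads=4, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
        self.gru = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.classifier = nn.Linear(hidden_dim, 2)        # sarcastic vs. not

    def forward(self, token_ids):
        x = self.embed(token_ids)                         # (batch, seq, embed_dim)
        attended, weights = self.attn(x, x, x)            # weights highlight cue-words
        _, h_n = self.gru(attended)                       # long-range dependencies
        return self.classifier(h_n[-1]), weights

model = SarcasmDetector()
tokens = torch.randint(0, 30000, (2, 20))                 # two dummy 20-token posts
logits, attn_weights = model(tokens)
print(logits.shape, attn_weights.shape)                    # (2, 2) and (2, 20, 20)
```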

Amid the rapid digitalisation of China’s economy, the second-biggest in the world, Midea’s factory represents a snapshot of the future – one in which manufacturing processes and employees need to adapt to increased automation and machine-driven learning.


Machines are increasingly taking over China’s assembly lines as manufacturers upgrade and prepare for fewer, higher-skilled workers.

Researchers from Zurich have developed a compact, energy-efficient device made from artificial neurons that is capable of decoding brainwaves. The chip uses data recorded from the brainwaves of epilepsy patients to identify which regions of the brain cause epileptic seizures. This opens up new perspectives for treatment.

Current neural network algorithms produce impressive results that help solve an incredible number of problems. However, the computers used to run these algorithms still require too much processing power. These artificial intelligence (AI) systems simply cannot compete with an actual brain when it comes to processing sensory information or interactions with the environment in real time.
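To give a flavour of what a single “artificial neuron” on such a chip does, here is a toy leaky integrate-and-fire unit in plain Python that turns an input trace into sparse spikes. The signal and parameters below are made up, and the real chip implements this kind of dynamics directly in hardware rather than in software.

```python
# Toy leaky integrate-and-fire neuron applied to a fake brainwave-like trace.
# Leak, gain, and threshold values are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(3)
signal = np.sin(np.linspace(0, 20 * np.pi, 2000)) + 0.3 * rng.normal(size=2000)

def lif_spikes(x, leak=0.95, gain=0.2, threshold=1.0):
    """Accumulate input into a leaky membrane potential; spike and reset at threshold."""
    v, spikes = 0.0, []
    for sample in x:
        v = leak * v + gain * max(sample, 0.0)   # membrane potential update
        if v >= threshold:
            spikes.append(1)
            v = 0.0                              # reset after spiking
        else:
            spikes.append(0)
    return np.array(spikes)

spike_train = lif_spikes(signal)
print(spike_train.sum(), "spikes in", len(signal), "samples")
```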