
New research, published in the journal Patterns and led by the University of Glasgow’s School of Psychology and Neuroscience, uses 3D modeling to analyze how Deep Neural Networks (part of the broader family of machine learning) process visual information, and to visualize how their information processing matches that of humans.

It is hoped this new work will pave the way for the creation of more dependable AI technology that will process information like humans and make errors that we can understand and predict.

One of the challenges still facing AI development is how to better understand the process of machine thinking, and whether it matches how humans process information, in order to ensure accuracy. Deep Neural Networks are often presented as the current best models of decision-making behavior, achieving or even exceeding human performance in some tasks. However, even deceptively simple visual discrimination tasks can reveal clear inconsistencies and errors in the AI models when compared to humans.

Today, Microsoft announced that Microsoft Translator, its AI-powered text translation service, now supports more than 100 different languages and dialects. With the addition of 12 new languages including Georgian, Macedonian, Tibetan, and Uyghur, Microsoft claims that Translator can now make text and information in documents accessible to 5.66 billion people worldwide.

Microsoft’s Translator isn’t the first service to support more than 100 languages; Google Translate reached that milestone first, in February 2016. (Amazon Translate supports only 71.) But Microsoft says that the new languages are underpinned by unique advances in AI and will be available in the Translator apps, Office, and Translator for Bing, as well as Azure Cognitive Services Translator and Azure Cognitive Services Speech.

“One hundred languages is a good milestone for us to achieve our ambition for everyone to be able to communicate regardless of the language they speak,” Microsoft Azure AI chief technology officer Xuedong Huang said in a statement. “We can leverage [commonalities between languages] and use that … to improve whole language famil[ies].”
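For developers curious what using the service looks like, here is a minimal Python sketch that constructs (but does not send) a request to the Azure Cognitive Services Translator REST API, version 3.0. The key and region values are placeholders, and the language codes shown (`ka` for Georgian, `ug` for Uyghur) are assumptions based on standard ISO codes; this is an illustrative sketch, not an official sample.

```python
import json
import urllib.request

ENDPOINT = "https://api.cognitive.microsofttranslator.com/translate"

def build_request(text, to_langs, key="YOUR_KEY", region="YOUR_REGION"):
    """Construct (but do not send) a Translator v3.0 translate request."""
    # One "to" query parameter per target language, plus the API version.
    query = "api-version=3.0&" + "&".join(f"to={lang}" for lang in to_langs)
    url = f"{ENDPOINT}?{query}"
    # The v3.0 API expects a JSON array of objects with a "Text" field.
    body = json.dumps([{"Text": text}]).encode("utf-8")
    headers = {
        "Ocp-Apim-Subscription-Key": key,
        "Ocp-Apim-Subscription-Region": region,
        "Content-Type": "application/json",
    }
    return urllib.request.Request(url, data=body, headers=headers, method="POST")

req = build_request("Hello, world", ["ka", "ug"])  # Georgian, Uyghur
print(req.full_url)
```

Sending the request (with a real subscription key) via `urllib.request.urlopen(req)` would return a JSON array of translations, one entry per target language.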

2021 is only halfway complete, and the pandemic cannot yet be said to be defeated, but the travel and tourism industry appears poised for a rapid rebound. In many ways and places, the recovery has already begun.

A live GlobalData poll showed that people are desperate to enjoy travels and trips again, with a majority opting for longer trips than before; 26% of respondents expressed a desire for trips spanning a minimum of 10 nights. As lockdowns and travel restrictions continue to ease and countries continue to open up, we will likely see a surge of new tourists and travelers.

Jason Fong, a veteran of the industry, is the brain behind the Boss of Bali brand, which has garnered over 2 million followers on Instagram. Fong shared his knowledge of all things tourism and explained how he has used his platform to promote more sustainable travel and tourism.

The historic mission is still going strong.


A Chinese lander and rover are still up and running more than 1,000 Earth days after they made a historic first-ever landing on the far side of the moon.

The Chang’e 4 lander, carrying the Yutu 2 rover, touched down in Von Kármán Crater on Jan. 2, 2019, and the robotic mission has been exploring this unique region of our celestial neighbor ever since.

Elon Musk says that in the future you will be able to save and replay memories with Neuralink. This is sounding increasingly like a Black Mirror episode.

Elon Musk says you will even be able to store your memories as a backup, and then download them into a robot body. It sounds like science fiction but Neuralink believes it could make that happen one day with a chip.

Neuralink’s device is a brain-machine interface, or BMI, which connects your brain to a computer. But before chasing these futuristic goals, the startup is focusing on one thing: ending human suffering.

The Pentagon’s first chief software officer said he resigned in protest at the slow pace of technological transformation in the US military, and because he could not stand to watch China overtake America.

In his first interview since leaving the post at the Department of Defense a week ago, Nicolas Chaillan told the Financial Times that the failure of the US to respond to Chinese cyber and other threats was putting his children’s future at risk.


Consider the future of transportation and mobility, namely the advent of self-driving cars.

It would seem that self-driving cars will be a welcome boon to humanity. Predictions are that the regrettable 40,000 annual fatalities due to car crashes in the United States alone will be reduced enormously and, likewise, that the estimated 2.3 million car crash injuries will nearly disappear.

What’s not to like about the emergence of self-driving cars?

That brings up this intriguing question: Could the advent of AI-based true self-driving cars somehow get intermingled into the act of bothering a neighbor?

This seems a rather curious question, and it defies the aura of goodness that surrounds the self-driving car realm.

As the medical community’s understanding of the application of augmented intelligence (AI) in health care grows, there remains the question of how AI—often called artificial intelligence—should be incorporated into physician training. The term augmented intelligence is preferred because it recognizes the enhancement, rather than replacement, of human capabilities.

Understanding how AI can affect patients may help learners appreciate its relevance, he noted. The National Board of Medical Examiners exam now tests physicians-in-training on health systems science, and it includes questions specifically about health care AI.

But AI doesn’t just relate to systems issues. It also has a home within evidence-based medicine (EBM).



Imperial researchers have found that variability between brain cells might speed up learning and improve the performance of the brain and future artificial intelligence (AI).

The new study found that by tweaking the electrical properties of individual cells in simulations of brain networks, the networks learned faster than simulations with identical cells.

They also found that the networks needed fewer of the tweaked cells to get the same results and that the method is less energy-intensive than models with identical cells.
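The study itself trained spiking networks with learnable per-cell time constants. As a toy illustration of why cell-to-cell variability helps (a deliberate simplification, not the paper's actual model), the sketch below simulates two populations of leaky integrators, one with identical membrane time constants and one with time constants drawn from a range, and shows that only the heterogeneous population retains a spread of responses long after a brief input pulse, i.e., it covers multiple timescales at once.

```python
import numpy as np

rng = np.random.default_rng(0)
T, N = 200, 50   # timesteps, neurons
dt = 1.0

# Input: a brief pulse early in the trial that the population must "remember".
inp = np.zeros(T)
inp[10:20] = 1.0

def run_leaky(taus):
    """Simulate N leaky integrators: v' = (-v + input) / tau (Euler steps)."""
    v = np.zeros(N)
    trace = np.empty((T, N))
    for t in range(T):
        v += dt * (-v + inp[t]) / taus
        trace[t] = v
    return trace

homo = run_leaky(np.full(N, 20.0))                  # identical time constants
hetero = run_leaky(rng.uniform(5.0, 80.0, size=N))  # variable time constants

# Long after the pulse, identical cells all hold the same (decayed) value,
# while heterogeneous cells still differ: slow cells remember, fast cells forget.
late = 150
print(homo[late].std(), hetero[late].std())
```

The spread across the heterogeneous population is what a downstream readout could exploit: different cells encode the same event at different timescales, which is loosely analogous to the diversity the Imperial team found beneficial.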
