
To evaluate the performance of robotics algorithms and controllers, researchers typically use software simulations or real physical robots. While these may seem like two distinct evaluation strategies, there is a whole spectrum of possibilities in between that combine elements of both.

In a recent study, researchers at Texas A&M University and the University of South Carolina set out to examine evaluation and execution scenarios that lie at the intersection between simulations and real implementations. Their investigation, outlined in a paper pre-published on arXiv, focuses specifically on cases in which real robots perceive the world through their sensors, yet the environment they sense is, at least in part, an illusion.

“We consider problems in which robots conspire to present a view of the world that differs from reality,” Dylan Shell and Jason O’Kane, the researchers who carried out the study, wrote in their paper. “The inquiry is motivated by the problem of validating robot behavior physically despite there being a discrepancy between the robots we have at hand and those we wish to study, or the environment for testing that is available versus that which is desired, or other potential mismatches in this vein.”
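To make the idea concrete, here is a minimal sketch (in Python, and not the authors’ actual framework) of how a controller might be validated against an “illusory” world: the robot’s sensor interface stays the same, but its readings can be served either by real hardware or by a simulated environment that deliberately differs from reality. The class and function names here are hypothetical.

```python
# Illustrative sketch (not the authors' framework): a robot's range sensor whose
# readings can come either from real hardware or from a simulated "illusory"
# environment, so the same controller can be validated against either world.
from typing import Protocol
import random

class RangeSensor(Protocol):
    def read(self) -> float: ...          # distance to nearest obstacle, metres

class RealRangeSensor:
    def read(self) -> float:
        # Placeholder for a real driver call (e.g. reading one LIDAR beam).
        return 1.2

class IllusoryRangeSensor:
    """Serves distances from a simulated wall the physical robot cannot see."""
    def __init__(self, wall_distance: float):
        self.wall_distance = wall_distance

    def read(self) -> float:
        return self.wall_distance + random.gauss(0, 0.01)  # simulated sensor noise

def controller_step(sensor: RangeSensor) -> str:
    """Toy controller: stop if an (apparent) obstacle is closer than 0.5 m."""
    return "stop" if sensor.read() < 0.5 else "forward"

# The controller under test runs unchanged whether the world it perceives
# is real or a deliberately presented illusion.
print(controller_step(RealRangeSensor()))
print(controller_step(IllusoryRangeSensor(wall_distance=0.3)))
```

The point of the sketch is that the controller cannot tell the two worlds apart, which is exactly the kind of mismatch between perception and reality the study examines.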

Researchers at Hefei University of Technology in China and various universities in Japan have recently developed a unique emotion sensing system that can recognize people’s emotions based on their body gestures. They presented this new AI-powered system, called EmoSense, in a paper pre-published on arXiv.

In everyday interactions, Wang and his colleagues note, body gestures carry rich expressions of mood. “Meanwhile, we can also find out that human body gestures affect wireless signals via shadowing and multi-path effects when we use antennas to detect behavior,” Yantong Wang, one of the researchers who carried out the study, told TechXplore. “Such signal effects usually form unique patterns or fingerprints in the temporal-frequency domain for different gestures.”

Wang and his colleagues observed that human body gestures can affect wireless signals, producing characteristic patterns that could be used for emotion recognition. This inspired them to develop a system that can identify these patterns, recognizing people’s emotions based on their physical movements.
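As a rough illustration of this idea (not the EmoSense pipeline itself, whose details are in the paper), the sketch below turns a one-dimensional wireless-signal trace into a temporal-frequency fingerprint via a spectrogram, matches it to labelled gesture templates with a nearest-neighbour classifier, and maps the recognized gesture to a coarse emotion label. The sampling rate, the gesture set and the gesture-to-emotion mapping are all assumptions made for the sake of the example.

```python
# Minimal sketch (not the EmoSense pipeline): turn a 1-D wireless-signal trace
# into a temporal-frequency "fingerprint", match it to gesture templates,
# then map the recognized gesture to a coarse emotion label.
import numpy as np
from scipy.signal import spectrogram
from sklearn.neighbors import KNeighborsClassifier

FS = 1000  # Hz, assumed sampling rate of the received signal trace

def fingerprint(trace, fs=FS):
    """Time-frequency fingerprint: log-power spectrogram flattened to a vector."""
    _, _, sxx = spectrogram(trace, fs=fs, nperseg=128, noverlap=64)
    return np.log1p(sxx).ravel()

# Hypothetical training data: one signal trace per labelled gesture.
rng = np.random.default_rng(0)
gestures = ["arms_crossed", "head_in_hands", "open_arms", "slumped"]
traces = {g: rng.normal(size=4000) for g in gestures}  # stand-in for real captures

X = np.stack([fingerprint(t) for t in traces.values()])
y = list(traces.keys())

clf = KNeighborsClassifier(n_neighbors=1).fit(X, y)

# Hypothetical gesture -> emotion mapping; in practice this would be learned.
emotion_of = {"arms_crossed": "angry", "head_in_hands": "sad",
              "open_arms": "happy", "slumped": "tired"}

new_trace = rng.normal(size=4000)          # stand-in for a freshly captured trace
gesture = clf.predict([fingerprint(new_trace)])[0]
print(gesture, "->", emotion_of[gesture])
```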

Downloading your brain may seem like science fiction, but some neuroscientists think it’s not only possible, but that we’ve already started down a path to one day make it a reality. So, how close are we to downloading a human brain?

How Close Are We to Fusion Energy? — https://youtu.be/ZW_YCWLyv6A

We’ve Put a Worm’s Mind in a Lego Robot’s Body
https://www.smithsonianmag.com/smart-news/weve-put-worms-min…399/?no-is
“A wheeled Lego robot may not look like a worm, but it ‘thinks’ like one after programmers gave it the neuron connections in a C. elegans roundworm”

Crumb of Mouse Brain Reconstructed in Full Detail

Self-driving vehicles will lead to a rise in car sex, according to a new study.

People will be more likely to eat, sleep and engage in on-the-road hanky-panky when robot cars become the new normal, according to research published in the most recent issue of the journal Annals of Tourism Research.

“People will be sleeping in their vehicles, which has implications for roadside hotels. And people may be eating in vehicles that function as restaurant pods,” Scott Cohen, who led the study, told Fast Company magazine.

Hey Google, how many enemy combatants remain standing?


British soldiers could soon be offered a “military Alexa” digital assistant which would provide troops in the field with automated information on everything from combat tactics to repair instructions.

The Ministry of Defence has awarded a £700,000 contract to a British technology company to explore the feasibility of an artificial intelligence “chatbot” which will allow soldiers on deployment to obtain crucial information via computer link.

The specification for the AI system requires that it be accessible via military “tactical radios” and handheld devices, suggesting that it could even be used by troops on the frontline engaged in combat to access intelligence and vital information. Although initially text-based, the chatbot could also be further developed to give instructions by voice.

Prof. Steve Fuller is the author of 25 books including a trilogy relating to the idea of a ‘post-’ or ‘trans-’human future, and most recently, Nietzschean Meditations: Untimely Thoughts at the Dawn of the Transhuman Age.

During this 2h 15 min interview with Steve Fuller we cover a variety of interesting topics such as: the social foundations of knowledge and our shared love of books; Transhumanism as a scientistic way of understanding who we are; the proactionary vs the precautionary principle; Pierre Teilhard de Chardin and the Omega Point; Julian and Aldous Huxley’s diverging takes on Transhumanism; David Pearce’s Hedonistic Imperative as a concept straight out of Brave New World; the concept and meaning of being human, transhuman and posthuman; humanity’s special place in the cosmos; my Socratic Test of (Artificial) Intelligence; Transhumanism as a materialist theology – i.e. religion for geeks; Elon Musk, cosmism and populating Mars; de-extinction, genetics and the sociological elements of a given species; the greatest issues that humanity is facing today; AI, the Singularity and armed conflict; morphological freedom and becoming human; longevity and the “Death is Wrong” argument; Zoltan Istvan and the Transhumanist Wager; Transhumanism as a way of entrenching rather than transcending one’s original views…

As always, you can listen to or download the audio file above or scroll down and watch the video interview in full. To show your support you can write a review on iTunes, make a direct donation or become a patron on Patreon.

Particle physicists are planning the successor to CERN’s Large Hadron Collider – but how will they deal with the deluge of data from a future machine and the proliferation of theoretical models? Michela Massimi explains why a new scientific methodology called “model independence” could hold the answer.

It’s been an exciting few months for particle physicists. In May more than 600 researchers gathered in Granada, Spain, to discuss the European Particle Physics Strategy, while in June CERN held a meeting in Brussels, Belgium, to debate plans for the Future Circular Collider (FCC). This giant machine – 100 km in circumference and earmarked for the Geneva lab – is just one of several different projects (including those in astroparticle physics and machine learning) that particle physicists are working on to explore the frontiers of high-energy physics.

CERN’s Large Hadron Collider (LHC) has been collecting data from vast numbers of proton–proton collisions since 2010 – first at an energy of 8 TeV and then at 13 TeV during its second run. These data enabled scientists on the ATLAS and CMS experiments to discover the Higgs boson in 2012, and have since shed light on other vital aspects of the Standard Model of particle physics.