Commercial holograms may soon reach the hands of ordinary consumers with the help of Lightfield, the biggest hologram company. Holography is a technique that enables a wavefront to be recorded and later reconstructed. It is best known as a method of generating three-dimensional images, but it also has a wide range of other applications. In principle, a hologram can be made for any type of light field.
–
TIMESTAMPS:
00:00 No longer just Science Fiction.
00:45 What is a hologram?
02:28 How do these new Holograms work?
05:56 The Future of Entertainment?
08:17 Last Words.
–
#holograms #ai #technology
In this bonus interview for the series Science Uprising, computer scientist and AI expert Selmer Bringsjord provides a wide-ranging discussion of artificial intelligence (AI) and its capabilities. Bringsjord addresses three features humans possess that, in his view, AI machines won't be able to duplicate: consciousness, cognition, and genuine creativity.
Selmer Bringsjord is a Professor of Cognitive Science and Computer Science at Rensselaer Polytechnic Institute and Director of the Rensselaer AI and Reasoning Laboratory. He and his colleagues have developed the “Lovelace Test” to evaluate whether machine intelligence has resulted in mind or consciousness.
A team of researchers at the Korea Advanced Institute of Science and Technology, working with a colleague from the Institute for Basic Science, both in the Republic of Korea, has found an easy way to create an electronic network inside a polymer. They used an acoustic field to connect liquid metal dots.
In their paper published in the journal Science, the group describes their technique and its possible uses. Ruirui Qiao of the University of Queensland in Australia and Shi-Yang Tang of the University of Birmingham in the U.K. have published a Perspectives piece in the same journal outlining the work done by the team.
This describes the essential Musk — with the caveat that it leaves out his autism and emotional fragility. He has the right philosophy and the brains to make serious progress, but with some surprisingly naïve, schoolboy-level mistakes and misunderstandings about human nature. Regardless, he will learn from them.
We may have cracked the code.
For the past two centuries, humans have relied on fossil fuels for concentrated energy: hundreds of millions of years of photosynthesis packed into a convenient, energy-dense substance. But that supply is finite, and fossil fuel consumption has a tremendous negative impact on Earth's climate.
“The biggest challenge many people don’t realize is that even nature has no solution for the amount of energy we use,” said University of Chicago chemist Wenbin Lin. Not even photosynthesis is that good, he said: “We will have to do better than nature, and that’s scary.”
One possible option scientists are exploring is “artificial photosynthesis”—reworking a plant’s system to make our own kinds of fuels. However, the chemical equipment in a single leaf is incredibly complex, and not so easy to turn to our own purposes.
A new idea is helping to explain the puzzling success of today’s artificial-intelligence algorithms — and might also explain how human brains learn.
The researchers revealed that deep convolutional neural networks were insensitive to configural object properties.
Deep convolutional neural networks (DCNNs) do not see objects the way humans do — through configural shape perception — which could be dangerous in real-world AI applications. This is according to Professor James Elder, co-author of a York University study recently published in the journal iScience.
The study, conducted by Elder, who holds the York Research Chair in Human and Computer Vision and is Co-Director of York's Centre for AI & Society, and Nicholas Baker, an assistant psychology professor at Loyola College in Chicago and a former VISTA postdoctoral fellow at York, finds that deep learning models fail to capture the configural nature of human shape perception.