
Artificial Intelligence Predicts Eye Movements

Summary: A newly developed AI algorithm can directly predict eye position and movement during an MRI scan. The technology could provide new diagnostics for neurological disorders that manifest in changes in eye-movement patterns.

Source: Max Planck Institute.

A large amount of information constantly flows into our brain via the eyes. Scientists can measure the resulting brain activity using magnetic resonance imaging (MRI). The precise measurement of eye movements during an MRI scan can tell scientists a great deal about our thoughts, memories and current goals, but also about diseases of the brain.
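To make the idea concrete, here is a minimal, hypothetical sketch in PyTorch of the general approach: regress gaze coordinates directly from the MRI signal in the eye region. This is not the Max Planck team's published model; the voxel count, network architecture, and synthetic training data are all placeholder assumptions.

```python
# Minimal sketch (not the published model): regress gaze position from the
# MRI signal in a small voxel patch around the eyeballs. All shapes and
# names (N_VOXELS, GazeRegressor, the synthetic data) are assumptions.
import torch
import torch.nn as nn

N_VOXELS = 500      # voxels sampled around both eyes (assumed)
N_SAMPLES = 1024    # number of MRI volumes with known gaze targets (assumed)

class GazeRegressor(nn.Module):
    """Maps one volume's eye-region voxel intensities to an (x, y) gaze position."""
    def __init__(self, n_voxels: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_voxels, 256), nn.ReLU(),
            nn.Linear(256, 64), nn.ReLU(),
            nn.Linear(64, 2),          # horizontal and vertical gaze coordinates
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

# Synthetic stand-in data: in practice these would be co-registered MRI volumes
# paired with gaze targets from a calibration task.
voxels = torch.randn(N_SAMPLES, N_VOXELS)
gaze = torch.randn(N_SAMPLES, 2)

model = GazeRegressor(N_VOXELS)
optim = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(5):
    optim.zero_grad()
    loss = loss_fn(model(voxels), gaze)
    loss.backward()
    optim.step()
    print(f"epoch {epoch}: MSE = {loss.item():.4f}")
```

Once trained on scans with known gaze targets, such a regressor could, in principle, report eye position for new scans without any camera-based eye tracker in the scanner.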

AI behind Deepfakes may Power Materials Design Innovations

The person staring back from the computer screen may not actually exist, thanks to artificial intelligence (AI) capable of generating convincing but ultimately fake images of human faces. Now this same technology may power the next wave of innovations in materials design, according to Penn State scientists.

“We hear a lot about deepfakes in the news today – AI that can generate realistic images of human faces that don’t correspond to real people,” said Wesley Reinhart, assistant professor of materials science and engineering and Institute for Computational and Data Sciences faculty co-hire, at Penn State. “That’s exactly the same technology we used in our research. We’re basically just swapping out this example of images of human faces for elemental compositions of high-performance alloys.”

The scientists trained a generative adversarial network (GAN) to create novel refractory high-entropy alloys, materials that can withstand ultra-high temperatures while maintaining their strength and that are used in technology from turbine blades to rockets.
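For readers curious what "swapping images for compositions" looks like in practice, here is a hedged sketch of the GAN idea, not the Penn State model: a generator proposes composition vectors (element fractions that sum to 1) and a discriminator learns to tell them apart from real alloy recipes. The element count, network sizes, and the random training data are assumptions.

```python
# Toy GAN over alloy compositions (illustrative only, not the published model).
import torch
import torch.nn as nn

N_ELEMENTS = 8   # candidate refractory elements (assumed)
LATENT_DIM = 16

generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 64), nn.ReLU(),
    nn.Linear(64, N_ELEMENTS),
    nn.Softmax(dim=-1),          # fractions are non-negative and sum to 1
)
discriminator = nn.Sequential(
    nn.Linear(N_ELEMENTS, 64), nn.LeakyReLU(0.2),
    nn.Linear(64, 1), nn.Sigmoid(),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCELoss()

# Stand-in "real" compositions; in practice these would come from a database
# of known high-entropy alloys.
real = torch.distributions.Dirichlet(torch.ones(N_ELEMENTS)).sample((256,))

for step in range(200):
    # Discriminator update: real vs. generated compositions.
    z = torch.randn(real.size(0), LATENT_DIM)
    fake = generator(z).detach()
    d_loss = bce(discriminator(real), torch.ones(real.size(0), 1)) + \
             bce(discriminator(fake), torch.zeros(real.size(0), 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator update: try to fool the discriminator.
    z = torch.randn(real.size(0), LATENT_DIM)
    g_loss = bce(discriminator(generator(z)), torch.ones(real.size(0), 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

# Sample a few candidate compositions from the trained generator.
print(generator(torch.randn(3, LATENT_DIM)))
```

The structural trick is the same as with deepfake faces: the generator never copies an existing alloy, it learns the distribution of plausible ones and samples new candidates from it.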

Industrial computer vision is getting ready for growth

Landing AI, a California-based startup led by Google Brain co-founder Andrew Ng, has just nabbed $57 million in Series A funding for its computer vision platform.

Landing AI’s flagship product, LandingLens, doesn’t get the spotlight you see at Google I/O or Apple events, where tech giants show how the latest advances in AI are making your personal devices smarter and more useful. But its impact could be no less significant than that of the AI technology finding its way into consumer products and services.

Landing AI is one of several companies bringing computer vision to the industrial sector. As industrial computer vision platforms mature, they can bring greater productivity, cost efficiency, and safety to a wide range of domains.

North American companies rush to add robots as demand surges

Nov 11 (Reuters) — Companies in North America added a record number of robots in the first nine months of this year as they rushed to speed up assembly lines and struggled to add human workers.

Factories and other industrial users ordered 29,000 robots, 37% more than during the same period last year, valued at $1.48 billion, according to data compiled by the industry group the Association for Advancing Automation. That surpassed the previous peak set in the same time period in 2017, before the global pandemic upended economies.

The rush to add robots is part of a larger upswing in investment as companies seek to keep up with strong demand, which in some cases has contributed to shortages of key goods. At the same time, many firms have struggled to lure back workers displaced by the pandemic and view robots as an alternative to adding human muscle on their assembly lines.

Stanford Uses AI To Make Holographic Displays Look Even More Like Real Life

Virtual and augmented reality headsets are designed to place wearers directly into other environments, worlds, and experiences. While the technology is already popular among consumers for its immersive quality, there could be a future in which holographic displays look even more like real life. In its pursuit of such displays, the Stanford Computational Imaging Lab has combined its expertise in optics and artificial intelligence. Its most recent advances in this area are detailed in a paper published today (November 12, 2021) in Science Advances and in work that will be presented at SIGGRAPH ASIA 2021 in December.

At its core, this research confronts the fact that current augmented and virtual reality displays only show 2D images to each of the viewer’s eyes, instead of 3D – or holographic – images like we see in the real world.

“They are not perceptually realistic,” explained Gordon Wetzstein, associate professor of electrical engineering and leader of the Stanford Computational Imaging Lab. Wetzstein and his colleagues are working to come up with solutions to bridge this gap between simulation and reality while creating displays that are more visually appealing and easier on the eyes.
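As a rough illustration of the underlying problem, here is a classic baseline for computing a phase-only hologram that reproduces a target image after light propagation. This is plain Gerchberg–Saxton iteration with an FFT propagation model, not the neural, camera-calibrated method the Stanford group describes; the image size and test pattern are made up.

```python
# Gerchberg-Saxton phase retrieval for a phase-only hologram (baseline sketch,
# not the Stanford method). Target pattern and dimensions are assumptions.
import numpy as np

H, W = 128, 128
target = np.zeros((H, W))
target[32:96, 32:96] = 1.0                     # assumed test pattern: a bright square
target_amp = np.sqrt(target)

phase = np.random.uniform(0, 2 * np.pi, (H, W))  # random initial phase pattern

for _ in range(100):
    # Propagate a unit-amplitude field with the current phase to the image plane.
    image_plane = np.fft.fft2(np.exp(1j * phase))
    # Keep the propagated phase but impose the target amplitude, then propagate back.
    constrained = target_amp * np.exp(1j * np.angle(image_plane))
    back = np.fft.ifft2(constrained)
    phase = np.angle(back)                     # phase-only constraint at the display plane

recon = np.abs(np.fft.fft2(np.exp(1j * phase))) ** 2
print("reconstruction / target correlation:",
      np.corrcoef(recon.ravel(), target.ravel())[0, 1])
```

Approaches like the one above assume an idealized optical model; part of the Stanford work, as reported, is using AI to close the gap between such models and what real optical hardware actually produces.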

Is Elon Musk’s Neuralink Already Obsolete? | The Future of Brain-Computer Interfaces

Elon Musk’s revolutionary company Neuralink plans to insert computer chips into people’s brains, but what if there is a safer and even higher-performing way of merging humans and machines in the future?
Enter DARPA’s push for non-invasive brain-computer interfaces, which led the research organization Battelle to create a kind of neural dust to interface with our brains, possibly the first step toward having nanobots inside the human body.

How will Neuralink deal with this potential rival and its cutting-edge technology? The possibilities for full-dive virtual reality games, medical applications, merging humans with artificial intelligence, and scaling worldwide are enormous.


