
The Rise of Artificial Intelligence | Wondrium Perspectives

For almost a century, we’ve been intrigued and sometimes terrified by the big questions of artificial intelligence. Will computers ever become truly intelligent? Will the time come when machines can operate without human intervention? What would happen if a machine developed a conscience?

In this episode of Perspectives, six experts in the fields of robotics, sci-fi, and philosophy discuss breakthroughs in the development of AI that are both promising and a bit worrisome.

Clips in this video are from the following series on Wondrium:

Mind-Body Philosophy, presented by Patrick Grim.
https://www.wondrium.com/mind-body-philosophy

Introduction to Machine Learning, presented by Michael L. Littman.
https://www.wondrium.com/introduction-to-machine-learning

Redefining Reality, presented by Steven Gimbel.

Mathematical paradoxes demonstrate the limits of AI

Humans are usually pretty good at recognizing when they get things wrong, but artificial intelligence systems are not. According to a new study, AI generally suffers from inherent limitations due to a century-old mathematical paradox.

Like some people, AI systems often have a degree of confidence that far exceeds their actual abilities. And like an overconfident person, many AI systems don’t know when they’re making mistakes. Sometimes it’s even more difficult for an AI system to realize when it’s making a mistake than to produce a correct result.

Researchers from the University of Cambridge and the University of Oslo say that instability is the Achilles’ heel of modern AI and that a mathematical paradox shows AI’s limitations. Neural networks, the state-of-the-art tool in AI, roughly mimic the links between neurons in the brain. The researchers show that there are problems where stable and accurate neural networks exist, yet no algorithm can produce such a network. Only in specific cases can algorithms compute stable and accurate neural networks.
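The instability at issue can be illustrated with a deliberately toy example (everything below is invented for illustration and is not the study’s construction): a one-neuron “network” whose very steep weight makes a tiny input perturbation flip a near-certain prediction.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# A one-neuron "network": y = sigmoid(W * (x - B)).
# A very steep weight W creates a sharp decision boundary at x = B,
# so inputs just either side of it get opposite, highly confident answers.
W, B = 10000.0, 0.5

def net(x):
    return sigmoid(W * (x - B))

print(net(0.499))  # near 0 -> confidently "class 0"
print(net(0.501))  # near 1 -> confidently "class 1"
# A perturbation of just 0.002 in the input flipped a near-certain prediction,
# and the network itself gives no signal that anything went wrong.
```

The toy model overstates the steepness for clarity, but trained networks can develop similarly sharp boundaries, which is why small, carefully chosen perturbations can fool them.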

Scientists tap AI betting agents to help solve research reproducibility concerns

Scientists are increasingly concerned that the lack of reproducibility in research may lead to, among other things, inaccuracies that slow scientific output and diminished public trust in science. Now, a team of researchers reports that creating a prediction market, where artificially intelligent—AI—agents make predictions—or bet—on hypothetical replication studies, could lead to an explainable, scalable approach to estimate confidence in published scholarly work.

Replication of experiments and studies, a critical step in the scientific process, helps provide confidence in the results and indicates whether they can generalize across contexts, according to Sarah Rajtmajer, assistant professor of information sciences and technology at Penn State. As experiments have become more complex, costly, and time-consuming, scientists increasingly lack the resources for robust replication efforts—a shortfall often referred to as the “replication crisis.”

“As scientists, we want to do work, and we want to know that our work is good,” said Rajtmajer. “Our approach to help address the replication crisis is to use AI to help predict whether a finding would replicate if repeated and why.”
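The market mechanism can be sketched in a few lines. This is a minimal illustration of the general prediction-market idea, not the team’s system; the agent names, probabilities, and stakes are all invented. Each agent bets a stake on its estimated probability that a finding will replicate, and the market’s confidence is the stake-weighted average of those estimates.

```python
# Toy prediction market (all numbers invented for illustration):
# each AI agent stakes an amount on its estimated probability that a
# published finding will replicate if the study were repeated.
bets = [
    {"agent": "citation-features", "prob": 0.80, "stake": 10.0},
    {"agent": "sample-size",       "prob": 0.55, "stake": 25.0},
    {"agent": "p-value-text",      "prob": 0.40, "stake": 15.0},
]

def market_confidence(bets):
    """Stake-weighted average of the agents' replication probabilities."""
    total_stake = sum(b["stake"] for b in bets)
    return sum(b["prob"] * b["stake"] for b in bets) / total_stake

print(round(market_confidence(bets), 3))  # -> 0.555
```

Because each agent bets on distinct evidence, the aggregate score comes with a natural explanation: which agents bet how much, and on what grounds.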

Will your digital twin make you healthier? | Jacqueline Alderson | TEDxPerth

Would you share your data for the common good? Biomechanist Jacqueline Alderson shows how sophisticated simulations based on real data can help prevent disease, illness and injury. Jacqueline Alderson is an Associate Professor of Biomechanics at the University of Western Australia and Adjunct Professor of Human Performance, Innovation and Technology at the Auckland University of Technology. She has always been curious about movement — whether it’s helping surgeons make best practice decisions or helping AFL players avoid knee injuries. She now travels the world to share her knowledge in human movement, wearable tech and artificial intelligence and its role in tracking, analysing and intervening in the human condition. This talk was given at a TEDx event using the TED conference format but independently organized by a local community.

What’s Inside a Black Hole? Physicist Probes Holographic Duality With Quantum Computing To Find Out

Dude, what if everything around us was just … a hologram?

The thing is, it could be—and a University of Michigan physicist is using quantum computing and machine learning to better understand the idea, called holographic duality.

Holographic duality is a mathematical conjecture that connects theories of particles and their interactions with the theory of gravity. This conjecture suggests that the theory of gravity and the theory of particles are mathematically equivalent: what happens mathematically in the theory of gravity happens in the theory of particles, and vice versa.
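In its best-known form (the AdS/CFT correspondence), this equivalence is often summarized schematically as an equality of partition functions, where a boundary value \(\phi_0\) on the gravity side acts as a source for an operator \(\mathcal{O}\) in the particle theory:

```latex
Z_{\text{gravity}}[\phi_0]
  \;=\;
\left\langle \exp\!\left( \int \phi_0 \,\mathcal{O} \right) \right\rangle_{\text{particles}}
```

Anything computed on one side of this equality can, in principle, be translated into a computation on the other side, which is what lets quantum simulations of the particle theory probe questions about gravity and black holes.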

Ford uses mobile robots to operate 3D printers without human help

Engineers at Ford’s Advanced Manufacturing Center have tasked the innovative robot on wheels – called Javier – with operating the 3D printers completely on its own. The autonomous process enables the 3D printer to run continuously with no human interaction needed, increasing throughput and reducing the cost of custom-printed products.

Ford says Javier is always on time and precise in its movements, spending its days running the 3D printer and taking only a “short break” to recharge its batteries. The company has achieved high accuracy with Javier, using its feedback to significantly reduce margins of error. The same method can also be applied to the many robots already working at the company to increase efficiency and reduce cost.

Ford has filed several patents for the technology in its drive to innovate. Javier can communicate with Ford’s 3D printer, something that isn’t necessarily as easy to pull off as it sounds. The robot does not require the use of a camera vision system to “see.”
