Making Mind Reading Possible: Invention Allows Amputees To Control a Robotic Arm With Their Mind
A University of Minnesota research team has made mind-reading possible through the use of electronics and AI.
Researchers at the University of Minnesota Twin Cities have created a system that enables amputees to operate a robotic arm using their brain signals rather than their muscles. This new technology is more accurate and less invasive than previous methods.
The majority of commercial prosthetic limbs now on the market are controlled by the shoulders or chest using a wire-and-harness system. More sophisticated models use sensors to detect small muscle movements in the patient's natural limb above the prosthetic. Both options, however, can be difficult for amputees to learn to use and are often of limited help.

5 Predictions from Old Sci-Fi Movies About the 21st Century That Actually Came True
Science and technology have advanced incredibly in the 21st century. It's easier than ever to travel to, or talk with, people who live halfway across the world, and we are more connected to advanced technology than anyone could have thought possible. Science fiction of the 20th and 21st centuries has strived to anticipate just how far this technological advancement would go, and what its consequences would be.
Of course, a lot of old sci-fi movies included tropes about the 21st century that proved to be wrong. In hindsight, it was probably too optimistic to assume we would get flying cars before the end of the '90s, or that the 2000s would have lifelike androids running around. Despite these misses, some movies were eerily accurate, or even predicted that certain technology would arrive later than it actually did. In some cases, sci-fi has even inspired invention, with people wanting to emulate what they saw on screen. These are some predictions from older sci-fi movies that turned out to be on the money.

Google engineer says Christianity helped him understand AI is ‘sentient’
A Google engineer who was suspended after he said the company's artificial intelligence chatbot had become sentient says he based the claim on his Christian faith.
Blake Lemoine, 41, was placed on paid leave by Google earlier in June after he published excerpts of a conversation with the company’s LaMDA chatbot that he claimed showed the AI tool had become sentient.
Now, Lemoine says that his claims about LaMDA come from his experience as a “Christian priest” — and is accusing Google of religious discrimination.

Top 10 Experiments with GPT-3 Every Tech Enthusiast Should Try
GPT-3 is a neural network machine learning model trained on internet data to generate text of any kind. Developed by OpenAI, it applies machine learning to generate many types of content, including stories, code, legal documents, and even translations, from just a few input words. GPT-3 has been getting a lot of attention for the seemingly unlimited range of possibilities it offers, and it is also being used for automated conversational tasks, responding to whatever text a user types. Here are 10 experiments with GPT-3 worth trying.
Interviewing AI: Using the Chat preset in the GPT-3 Playground, you can ask the model about its personality, and over the course of your dialog that personality emerges. Note that the context is hard-capped at 2048 tokens; once the transcript is cut, you will never encounter the same personality again. In this experiment, the model imitated a person worried about data privacy. (A minimal API sketch of this experiment follows the list.)
Doctor's Assistant: The AI is fed patient files describing each patient's profile and symptoms in a few lines, and it spontaneously suggests what the disease could be. GPT-3 managed an impressive 8 out of 10 correct guesses. This could become remarkable support for doctors, and a great tool to investigate.
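The Playground's Chat preset wraps the same completion endpoint that developers can call directly. As a rough illustration of the interviewing experiment above, here is a minimal sketch using OpenAI's original (pre-1.0) Python client; the engine name, prompt framing, and sampling parameters are illustrative assumptions, not details from the article.

```python
# Minimal sketch of the "Interviewing AI" experiment. Uses OpenAI's
# legacy (pre-1.0) Python client; the engine, prompt, and parameters
# are illustrative assumptions rather than the article's exact setup.
import openai

openai.api_key = "YOUR_API_KEY"  # assumption: supply your own key

transcript = "The following is a conversation with an AI assistant.\n"

def ask(question: str) -> str:
    """Append a question to the running transcript and return the reply."""
    global transcript
    transcript += f"Human: {question}\nAI:"
    response = openai.Completion.create(
        engine="davinci",        # base GPT-3 engine (an assumption)
        prompt=transcript,
        max_tokens=150,
        temperature=0.9,
        stop=["Human:", "AI:"],  # keep the model from writing both sides
    )
    answer = response.choices[0].text.strip()
    transcript += f" {answer}\n"
    return answer

print(ask("How would you describe your personality?"))
# The context window is 2048 tokens, so long transcripts must be truncated;
# once that happens, the "personality" accumulated in context is lost.
```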

A model for the automatic extraction of content from websites and apps
Content management systems or CMSs are the most popular tool for creating content on the internet. In recent years, they have evolved to become the backbone of an increasingly complex ecosystem of websites, mobile apps and platforms. In order to simplify processes, a team of researchers from the Internet Interdisciplinary Institute (IN3) at the Universitat Oberta de Catalunya (UOC) has developed an open-source model to automate the extraction of content from CMSs. Their associated research is published in Research Challenges in Information Science.
The open-source model is a fully functional scientific prototype that makes it possible to extract the data structure and libraries of each CMS and to create a piece of software that acts as an intermediary between the content and the so-called front end (the final application the user interacts with). The entire process is automatic, which makes the solution error-free and scalable, since it can be repeated many times without additional cost.
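The paper's prototype is not reproduced here, but the "intermediary" idea can be illustrated with a minimal sketch that pulls posts from a WordPress site's REST API (one of the CMSs such a tool would target) and re-exposes only the fields a front end needs. The site URL below is a placeholder, not from the paper.

```python
# Illustrative sketch of a content intermediary between a CMS and a
# front end: fetch raw records from WordPress's REST API and map them
# to a minimal shape. The URL is a placeholder.
import requests

CMS_URL = "https://example.com/wp-json/wp/v2/posts"  # placeholder site

def fetch_content(limit: int = 5) -> list[dict]:
    """Fetch raw CMS posts and map them to a front-end-friendly shape."""
    response = requests.get(CMS_URL, params={"per_page": limit}, timeout=10)
    response.raise_for_status()
    return [
        {
            "id": post["id"],
            "title": post["title"]["rendered"],
            "body": post["content"]["rendered"],
        }
        for post in response.json()
    ]

if __name__ == "__main__":
    for item in fetch_content():
        print(item["id"], item["title"])
```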


I, Chatbot: The perception of consciousness in conversational AI
So how can LaMDA provide responses that a human user might perceive as conscious thought or introspection? Ironically, this is due to the corpus of training data used to train LaMDA and the associativity between potential human questions and possible machine responses. It all boils down to probabilities. The question is how those probabilities evolve such that a rational human interrogator can be confused about what the machine is actually doing.
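To make the "probabilities" point concrete, here is a toy sketch (my construction, nothing like LaMDA's actual architecture or numbers): a language model scores every candidate continuation of the prompt and samples from the resulting distribution, so a fluent, seemingly introspective answer is simply a highly probable continuation.

```python
# Toy illustration of next-token probabilities; the candidates and
# scores are invented for illustration and have nothing to do with LaMDA.
import math
import random

def softmax(logits):
    """Turn raw model scores into a probability distribution."""
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical scores for continuations of "Are you conscious?"
candidates = ["Yes,", "No,", "I", "Sometimes"]
logits = [2.1, 0.3, 1.7, -0.5]

probs = softmax(logits)
answer = random.choices(candidates, weights=probs, k=1)[0]
print({c: round(p, 3) for c, p in zip(candidates, probs)}, "->", answer)
```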
This brings us to the need for improved “explainability” in AI. Complex artificial neural networks, the basis for a variety of useful AI systems, are capable of computing functions that are beyond the capabilities of a human being. In many cases, the neural network incorporates learning functions that enable adaptation to tasks outside the initial application for which the network was developed. However, the reasons why a neural network provides a specific output in response to a given input are often unclear, even indiscernible, leading to criticism of human dependence upon machines whose intrinsic logic is not properly understood. The size and scope of training data also introduce bias into complex AI systems, yielding unexpected, erroneous, or confusing outputs to real-world input data. This has come to be referred to as the “black box” problem, where a human user, or the AI developer, cannot determine why the AI system behaves as it does.
The case of LaMDA’s perceived consciousness appears no different from the case of Tay’s learned racism. Without sufficient scrutiny and understanding of how AI systems are trained, and without sufficient knowledge of why AI systems generate their outputs from the provided input data, it is possible for even an expert user to be uncertain as to why a machine responds as it does. Unless the need for an explanation of AI behavior is embedded throughout the design, development, testing, and deployment of the systems we will depend upon tomorrow, we will continue to be deceived by our inventions, like the blind interrogator in Turing’s game of deception.

Teaching Physics to AI Can Allow It To Make New Discoveries All on Its Own
Incorporating established physics into neural network algorithms helps them to uncover new insights into material properties
According to researchers at Duke University, incorporating known physics into machine learning algorithms can help the enigmatic black boxes attain new levels of transparency and insight into the characteristics of materials.
In one of the first efforts of its kind, the researchers used a sophisticated machine learning algorithm to identify the characteristics of a class of engineered materials known as metamaterials and to predict how they interact with electromagnetic fields.
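The article does not include the Duke team's model, so as a generic sketch of the physics-informed idea it describes, the training loss below penalizes both data mismatch and violation of a known physical law. The toy equation dy/dt = -y and all parameters are illustrative assumptions, not the team's metamaterial physics.

```python
# Generic sketch of physics-informed training (not the Duke model):
# the loss combines a data-fitting term with the residual of a known
# physical law. The toy law here is dy/dt = -y, whose solution is
# y = exp(-t); both the law and all parameters are assumptions.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(1, 64), nn.Tanh(), nn.Linear(64, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Toy training data sampled from the true solution y = exp(-t).
t_data = torch.rand(32, 1)
y_data = torch.exp(-t_data)

for step in range(2000):
    optimizer.zero_grad()

    # Ordinary data-fitting term.
    data_loss = ((model(t_data) - y_data) ** 2).mean()

    # Physics term: residual of dy/dt + y = 0 at random collocation points.
    t_phys = torch.rand(64, 1, requires_grad=True)
    y_phys = model(t_phys)
    dy_dt = torch.autograd.grad(y_phys.sum(), t_phys, create_graph=True)[0]
    physics_loss = ((dy_dt + y_phys) ** 2).mean()

    (data_loss + physics_loss).backward()
    optimizer.step()
```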