
Microsoft Researchers Are Using ChatGPT to Control Robots, Drones

ChatGPT is best known as an AI program capable of writing essays and answering questions, but now Microsoft is using the chatbot to control robots.

On Monday, the company’s researchers published a paper on how ChatGPT can streamline the process of programming software commands to control various robots, such as mechanical arms and drones.

“We still rely heavily on hand-written code to control robots,” the researchers wrote. Microsoft’s approach, on the other hand, taps ChatGPT to write some of the computer code.
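
To make the idea concrete, here is a minimal sketch of the general pattern rather than Microsoft's actual code: the prompt describes a small, human-defined function library (the takeoff, fly_to, and land names here are hypothetical placeholders), the model is asked to write Python against that library only, and a human reviews the output before it ever runs on hardware. The sketch assumes the openai Python package and an API key in the OPENAI_API_KEY environment variable.

```python
# Sketch: ask a chat model to write robot-control code against a small,
# human-defined function library. The drone functions named below are
# hypothetical placeholders, not a real SDK.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

API_DESCRIPTION = """
You control a drone through these Python functions only:
  takeoff() -> None
  fly_to(x: float, y: float, z: float) -> None
  land() -> None
Reply with Python code only, no explanations.
"""

task = "Take off, inspect the shelves at (2, 0, 1) and (2, 3, 1), then land."

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": API_DESCRIPTION},
        {"role": "user", "content": task},
    ],
)

# A person reviews the generated code before it is executed on a real robot.
print(response["choices"][0]["message"]["content"])
```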

AI loses to human being at Go after seven years of victories

The man beat the machine by using a flaw uncovered by another computer system.

A human beat a top-ranked AI system in the board game Go, suggesting that the rise of the machines may not be as imminent as previously believed.

This is according to a report by the Financial Times published on Sunday.



The player was Kellin Pelrine, an American one level below the top amateur ranking. He achieved this victory by taking advantage of a previously unknown weakness that another computer had identified.

New AI system to help save lives of earthquake survivors in Turkey

An AI system called “xView2” is helping ground rescue efforts in regions of Turkey devastated by this month’s earthquakes.

The U.S. Department of Defense is using a visual computing artificial intelligence system to aid ongoing disaster response efforts in Turkey and Syria following the devastating earthquake on February 6 that has claimed tens of thousands of lives.

AI system helps disaster response teams in Turkey.



The AI system, called xView2, is still in the early development phase, but it has already been deployed to help ground rescue missions in Turkey.

This autonomous ground robot helps firefighters in enclosed spaces

It’s mini yet mighty.

Researchers at Universidad Rey Juan Carlos and Universidad Autónoma de Madrid have developed an autonomous ground robot that could help firefighters deal with situations in enclosed spaces.

Firefighters would undoubtedly benefit from the assistance of reliable mobile robots in their high-risk duties. To that end, in 2021 researchers launched a study called “HelpResponder,” which aims to reduce the accident rates and mission times of intervention teams, as reported by Tech Xplore.



This approach could help firefighters plan interventions more effectively by identifying safe access routes to the affected areas and assisting them during evacuations.

Best Prompt Engineering Tips for Beginners in 2023

What is Prompt Engineering?

Prompt engineering is a concept in artificial intelligence, particularly natural language processing (NLP). In prompt engineering, the description of the task is embedded explicitly in the input, for example as a question, instead of being given implicitly. Typically, prompt engineering involves converting one or more tasks into a prompt-based dataset and training a language model with “prompt-based learning,” also known as “prompt learning.” A related family of methods, known as “prefix-tuning” or “prompt tuning,” uses a large, “frozen” pretrained language model and learns only the representation of the prompt.
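
As an illustration of recasting a task as a prompt, the sketch below rephrases sentiment classification as a fill-in-the-blank query to a frozen masked language model. It assumes the Hugging Face transformers package; the template and the “good”/“bad” verbalizer words are illustrative choices, not a fixed recipe.

```python
# Sketch: a classification task expressed as a cloze-style prompt for a
# frozen pretrained language model (no task-specific fine-tuning).
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

review = "The plot was predictable and the acting felt wooden."
prompt = f"{review} Overall, the movie was [MASK]."

# Score only the candidate "verbalizer" words instead of the whole vocabulary.
for candidate in fill_mask(prompt, targets=["good", "bad"]):
    print(candidate["token_str"], round(candidate["score"], 4))
```

Whichever verbalizer word receives the higher score is taken as the predicted label.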

The development of the GPT-2 and GPT-3 language models and the ChatGPT tool was crucial for prompt engineering. In 2021, multitask prompt engineering using several NLP datasets showed strong performance on novel tasks. Few-shot prompts whose examples include a chain of thought elicit stronger reasoning from language models. Adding text that supports a chain of reasoning, such as “Let’s think step by step,” to a zero-shot prompt may enhance a language model’s performance on multi-step reasoning tasks. The release of various open-source notebooks and community-led image synthesis efforts helped make these tools widely accessible.
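
The “Let’s think step by step” cue mentioned above is easy to try directly. The sketch below adds that phrase to a word problem and sends it to a chat model; it assumes the openai Python package, an API key in OPENAI_API_KEY, and uses gpt-3.5-turbo purely as an example model name.

```python
# Sketch: zero-shot chain-of-thought prompting. Adding a cue such as
# "Let's think step by step." encourages the model to write out intermediate
# reasoning before giving the final answer.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

question = (
    "A juggler has 16 balls. Half of the balls are golf balls, "
    "and half of the golf balls are blue. How many blue golf balls are there?"
)

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": question + "\nLet's think step by step."}],
)
print(response["choices"][0]["message"]["content"])
```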

Meet LAMPP: A New AI Approach From MIT To Integrate Background Knowledge From Language Into Decision-Making Problems

Common-sense priors are essential for making decisions under uncertainty in real-world settings. Suppose we want to assign labels to the scene in Fig. 1. Once a few key elements are recognized, it becomes evident that the image shows a restroom, which helps resolve the labels of some of the more difficult objects: the curtain in the scene is a shower curtain rather than a window curtain, and the object on the wall is a mirror rather than a portrait. Beyond visual tasks, prior knowledge of which objects or events tend to co-occur is crucial for navigating new environments and understanding the actions of other agents. Such expectations are also essential for object categorization and reading comprehension.

Unlike robot demonstrations or segmented images, vast text corpora are readily available and cover practically every aspect of human experience. For most problem domains, current machine learning models learn the prior distribution of labels and decisions from task-specific datasets. When training data is skewed or sparse, this can lead to systematic errors, particularly on rare or out-of-distribution inputs. How might models be given broader, more flexible prior knowledge? The researchers suggest using language models, which are learned distributions over natural language strings, as task-general probabilistic priors.
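
The sketch below illustrates the general recipe (not the LAMPP implementation itself): an uncertain perception model's scores for two candidate labels are combined with a prior read off a masked language model, and the product is renormalized. The detector scores are invented for illustration, the prompt template is an arbitrary choice, and the example assumes the Hugging Face transformers package.

```python
# Sketch: combine a vision model's uncertain label scores with a label prior
# taken from a language model. Posterior is proportional to likelihood * prior.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# Hypothetical detector output: on its own it slightly prefers "portrait".
detector_likelihood = {"portrait": 0.55, "mirror": 0.45}

# Language-model prior: how plausible is each label given the scene context?
prompt = "In the bathroom there is a [MASK] on the wall."
prior = {
    result["token_str"]: result["score"]
    for result in fill_mask(prompt, targets=list(detector_likelihood))
}

# Combine and renormalize over the two candidate labels.
unnormalized = {label: detector_likelihood[label] * prior[label] for label in detector_likelihood}
total = sum(unnormalized.values())
posterior = {label: score / total for label, score in unnormalized.items()}
print(posterior)  # the scene context should pull the decision toward "mirror"
```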

LMs have been employed as sources of prior knowledge for tasks ranging from common-sense question answering to modeling scripts and stories to synthesizing probabilistic algorithms, in language processing and other text generation settings. For encoding much of this information, such as the fact that plates are found in kitchens and dining rooms and that breaking eggs comes before whisking them, they frequently offer greater diversity and fidelity than small, task-specific datasets. It has also been proposed that such language supervision contributes to common-sense human knowledge in areas that are difficult to learn from first-hand experience.

A Physical Theory For When the Brain Performs Best

Early critiques pointed out that proving a network was near the critical point required improved statistical tests. The field responded constructively, and this type of objection is rarely heard these days. More recently, some work has shown that what was previously considered a signature of criticality might also be the result of random processes. Researchers are still investigating that possibility, but many of them have already proposed new criteria for distinguishing between the apparent criticality of random noise and the true criticality of collective interactions among neurons.

Meanwhile, over the past 20 years, research in this area has steadily become more visible. The breadth of methods being used to assess it has also grown. The biggest questions now focus on how operating near the critical point affects cognition, and how external inputs can drive a network to move around the critical point. Ideas about criticality have also begun to spread beyond neuroscience. Citing some of the original papers on criticality in living neural networks, engineers have shown that self-organized networks of atomic switches can be made to operate near the critical point so that they compute many functions optimally. The deep learning community has also begun to study whether operating near the critical point improves artificial neural networks.

The critical brain hypothesis may yet prove to be wrong, or incomplete, although current evidence does support it. Either way, the understanding it provides is generating an avalanche of questions and answers that tell us much more about the brain — and computing generally — than we knew before.

Neural Network Models of Mathematical Cognition | Silvester Sabathiel | Numerosity Workshop 2021

Session kindly contributed by Silvester Sabathiel in SEMF’s 2021 Numerous Numerosity Workshop: https://semf.org.es/numerosity/

ABSTRACT
With the rise and advances of artificial intelligence, opportunities to understand the finer-grained mechanisms involved in mathematical cognition have increased. A vast scope of related research has been conducted on machine learning systems that learn to solve differential equations, algebraic equations and integrals, or to prove complex theorems, in all of which preprocessed symbolic representations form the inputs and outputs. However, in the search for cognitive mechanisms that match humans in the generalizability and applicability of mathematical concepts in the external world, a more grounded approach might be required. This involves starting with the fundamental mathematical concepts that are acquired earliest in human development and learning them within an interactive and multimodal environment. In this talk we examine how artificial neural network systems within such a framework provide a controlled setup for discovering possible cognitive mechanisms behind intuitive numerosity perception and culturally acquired numerical concepts, such as counting. First we review impactful research results from the past, before I present the contributions of work I was involved in. Finally we discuss the upcoming challenges for the field of numerical cognition and where this research journey could lead.

SILVESTER SABATHIEL
NTNU Trondheim.
Personal website: https://silsab.com/
NTNU profile: https://www.ntnu.edu/employees/silvester.sabathiel
ResearchGate: https://www.researchgate.net/profile/Silvester-Sabathiel-3
LinkedIn: https://www.linkedin.com/in/silvester-sabathiel-03368b117

SEMF NETWORKS
Website: https://semf.org.es
Twitter: https://twitter.com/semf_nexus
LinkedIn: https://www.linkedin.com/company/semf-nexus
Instagram: https://www.instagram.com/semf.nexus
Facebook: https://www.facebook.com/semf.nexus
