
A Deep-learning Search For Technosignatures From 820 Nearby Stars

The goal of the Search for Extraterrestrial Intelligence (SETI) is to quantify the prevalence of technological life beyond Earth via its “technosignatures”. One theorized technosignature is a narrowband, Doppler-drifting radio signal.

The principal challenge in conducting SETI in the radio domain is developing a generalized technique to reject human radio frequency interference (RFI). Here, we present the most comprehensive deep-learning-based technosignature search to date, returning 8 promising ETI signals of interest for re-observation as part of the Breakthrough Listen initiative.

The search comprises 820 unique targets observed with the Robert C. Byrd Green Bank Telescope, totaling over 480 hours of on-sky data. We implement a novel beta-Convolutional Variational Autoencoder to identify technosignature candidates in a semi-unsupervised manner while keeping the false-positive rate manageably low. This new approach presents itself as a leading solution for accelerating SETI and other transient research into the age of data-driven astronomy.
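For readers unfamiliar with the architecture named above, the sketch below shows the core of a beta-Convolutional Variational Autoencoder in PyTorch: a convolutional encoder/decoder trained with a reconstruction term plus a beta-weighted KL divergence. This is not the authors' model; the input shape, layer sizes, and beta value are illustrative assumptions.

```python
# Minimal beta-VAE sketch in PyTorch. NOT the model from the paper; it only
# illustrates the beta-weighted VAE objective the abstract refers to.
# Assumes inputs are 1x64x64 patches normalized to [0, 1].
import torch
import torch.nn as nn
import torch.nn.functional as F

class BetaConvVAE(nn.Module):
    def __init__(self, latent_dim=8, beta=4.0):
        super().__init__()
        self.beta = beta
        # Encoder: two conv layers downsample the input patch.
        self.enc = nn.Sequential(
            nn.Conv2d(1, 16, 4, stride=2, padding=1), nn.ReLU(),   # -> 16x32x32
            nn.Conv2d(16, 32, 4, stride=2, padding=1), nn.ReLU(),  # -> 32x16x16
            nn.Flatten(),
        )
        self.fc_mu = nn.Linear(32 * 16 * 16, latent_dim)
        self.fc_logvar = nn.Linear(32 * 16 * 16, latent_dim)
        # Decoder mirrors the encoder.
        self.fc_dec = nn.Linear(latent_dim, 32 * 16 * 16)
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        # Reparameterization trick: z = mu + sigma * eps.
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        recon = self.dec(self.fc_dec(z).view(-1, 32, 16, 16))
        return recon, mu, logvar

    def loss(self, x, recon, mu, logvar):
        # Reconstruction term + beta-weighted KL divergence to N(0, I).
        rec = F.binary_cross_entropy(recon, x, reduction="sum")
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
        return rec + self.beta * kl
```

Setting beta above 1 pressures the latent space toward disentangled, well-structured representations, which is what makes reconstruction error usable as an anomaly score for flagging unusual signals.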

Microsoft Researchers Are Using ChatGPT to Control Robots, Drones

ChatGPT is best known as an AI program capable of writing essays and answering questions, but now Microsoft is using the chatbot to control robots.

On Monday, the company’s researchers published a paper on how ChatGPT can streamline the process of programming software commands to control various robots, such as mechanical arms and drones.

“We still rely heavily on hand-written code to control robots,” the researchers wrote. Microsoft’s approach, on the other hand, taps ChatGPT to write some of the computer code.
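The pattern, as the paper describes it, is to expose a small, named robot API in the prompt and let ChatGPT generate code against it. Below is a hedged sketch of that pattern; the function names (fly_to, land, get_position) are hypothetical rather than Microsoft's actual library, and the call assumes the pre-1.0 openai Python package with an API key configured.

```python
# Sketch of prompting an LLM to write robot-control code against a small,
# explicitly described API. Function names here are hypothetical, not
# Microsoft's library; requires the pre-1.0 `openai` package and an API key.
import openai

ROBOT_API_DESCRIPTION = """
You control a drone through these Python functions (do not invent others):
  fly_to(x, y, z)    # move to coordinates in meters
  land()             # land at the current position
  get_position()     # returns the current (x, y, z)
Respond with Python code only, using these functions.
"""

task = "Fly in a 5 m square at 10 m altitude, then land."

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": ROBOT_API_DESCRIPTION},
        {"role": "user", "content": task},
    ],
)
generated_code = response["choices"][0]["message"]["content"]
print(generated_code)  # review before executing on real hardware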

AI loses to human being at Go after seven years of victories

The man beat the machine by using a flaw uncovered by another computer system.

A human beat a top-ranked AI system at the board game Go, suggesting that the rise of the machines may not be as imminent as previously believed.

This is according to a report by the Financial Times published on Sunday.



The player was Kellin Pelrine, an American rated one level below the top amateur ranking. He achieved the victory by exploiting a previously unknown weakness that another computer had identified.

New AI system to help save lives of earthquake survivors in Turkey

An AI system called “xView2” is helping ground rescue efforts in regions of Turkey devastated by this month’s earthquakes.

The U.S. Department of Defense is using a visual computing artificial intelligence system to aid ongoing disaster response efforts in Turkey and Syria following the devastating earthquake on February 6 that has claimed tens of thousands of lives.


The AI system, called xView2, is still in the early development phase, but it has already been deployed to help ground rescue missions in Turkey.

This autonomous ground robot helps firefighters in enclosed spaces

It’s mini yet mighty.

Researchers at Universidad Rey Juan Carlos and Universidad Autónoma de Madrid have developed an autonomous ground robot that could help firefighters deal with situations in enclosed spaces.

Firefighters would undoubtedly benefit from the assistance of reliable mobile robots in their high-risk duties. To that end, researchers launched a project called “HelpResponder” in 2021, which aims to reduce the accident rates and mission times of intervention teams, as reported by Tech Xplore.



The system could help firefighters plan interventions more effectively by plotting safe access routes to the affected areas and assisting them during evacuations.
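The article does not detail the planner, but plotting safe access routes of this kind is commonly framed as graph search over an occupancy grid. The following is a minimal sketch under that assumption, not the HelpResponder team's code; the grid and hazard encoding are illustrative.

```python
# Minimal safe-route planning on an occupancy grid via breadth-first search.
# Illustrative only: 0 = clear cell, 1 = blocked/hazardous cell.
from collections import deque

def safe_route(grid, start, goal):
    """Return the shortest path of grid cells from start to goal, or None."""
    rows, cols = len(grid), len(grid[0])
    queue = deque([start])
    came_from = {start: None}
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:       # walk backwards to reconstruct
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in came_from):
                came_from[(nr, nc)] = cell
                queue.append((nr, nc))
    return None  # no safe route exists

grid = [[0, 0, 0],
        [1, 1, 0],   # a blocked corridor forces a detour
        [0, 0, 0]]
print(safe_route(grid, (0, 0), (2, 0)))
```

A real system would replace the static grid with sensor-updated hazard maps and weight cells by temperature or smoke density, but the route-finding core is the same.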

Best Prompt Engineering Tips for Beginners in 2023

What is Prompt Engineering?

Prompt engineering is a concept in artificial intelligence, particularly natural language processing (NLP). In prompt engineering, the description of the task is included explicitly in the input, such as a question, instead of being provided implicitly. Typically, prompt engineering involves transforming one or more tasks into a prompt-based dataset and training a language model with “prompt-based learning”—also known as “prompt learning.” In a variant known as “prefix-tuning” or “prompt tuning,” a large, “frozen” pretrained language model is used, and only the representation of the prompt is learned.

The GPT-2 and GPT-3 language models, followed by the ChatGPT tool, were important steps in the development of prompt engineering. In 2021, multitask prompt engineering using several NLP datasets showed strong performance on novel tasks. In few-shot learning, prompts that include examples with a chain of thought give a stronger indication of the language model’s reasoning. Prepending text that supports a chain of reasoning, such as “Let’s think step by step,” to a zero-shot prompt may enhance a language model’s performance on multi-step reasoning tasks. The release of various open-source notebooks and community-led image-synthesis efforts helped make these tools widely accessible.
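As a concrete illustration of the zero-shot chain-of-thought trick mentioned above, the snippet below compares a plain prompt with one that appends “Let’s think step by step.” The model call uses the pre-1.0 openai package purely as an example; any chat or completion API works the same way.

```python
# Zero-shot chain-of-thought prompting: the only change between the two
# prompts is the appended cue sentence. Assumes the pre-1.0 `openai`
# package with an API key configured.
import openai

question = "A farm has 15 cows. All but 8 are sold. How many cows remain?"

plain_prompt = question
cot_prompt = question + "\nLet's think step by step."

for prompt in (plain_prompt, cot_prompt):
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    print(response["choices"][0]["message"]["content"])
```

On trick questions like this one (the answer is 8, not 7), the step-by-step cue tends to make the model lay out its reasoning before answering, which is exactly the behavior the technique is meant to elicit.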

Meet LAMPP: A New AI Approach From MIT To Integrate Background Knowledge From Language Into Decision-Making Problems

Common-sense priors are essential for making decisions under uncertainty in real-world settings. Suppose we want to label the scene in Fig. 1 of the paper: once a few key objects are recognized, it becomes evident that the image shows a restroom. That, in turn, helps resolve the labels of some of the more difficult objects, such as the shower curtain (rather than a window curtain) and the mirror (rather than a portrait on the wall). Beyond visual tasks, prior knowledge of expected object or event co-occurrences is crucial for navigating new environments and understanding the actions of other agents. Such expectations are also essential to object categorization and reading comprehension.

Unlike robot demonstrations or segmented images, vast text corpora are easily accessible and cover practically every aspect of human experience. For most problem domains, current machine-learning models learn the prior distribution of labels and judgments from task-specific datasets. When training data is skewed or sparse, this can lead to systematic errors, particularly on rare or out-of-distribution inputs. How might models be given broader, more flexible prior knowledge? The authors suggest using language models—learned distributions over natural-language strings—as task-general probabilistic priors.

LMs have been employed as sources of prior knowledge for tasks ranging from common-sense question answering to modeling scripts and stories to synthesizing probabilistic programs in language processing and other text-generation settings. For encoding much of this information—such as the fact that plates are found in kitchens and dining rooms, and that breaking eggs comes before whisking them—they frequently offer greater diversity and fidelity than small, task-specific datasets. It has also been proposed that such language supervision contributes to common-sense human knowledge in areas that are challenging to learn from first-hand experience.
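The core recipe—using an LM's string probabilities as a prior over labels—can be sketched in a few lines. The following is not the LAMPP implementation; it uses GPT-2 via Hugging Face transformers as a stand-in scorer, and the prompt template is an illustrative assumption.

```python
# Sketch of using a pretrained LM as a probabilistic prior over labels.
# NOT the LAMPP code: GPT-2 stands in for whatever LM the method uses,
# and the verbalization template is a made-up example.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def lm_log_prob(text):
    """Average per-token log-probability of a string under the LM."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean cross-entropy in nats
    return -loss.item()

# Which label is more plausible for a curtain seen in a bathroom scene?
candidates = ["shower curtain", "window curtain"]
scores = {c: lm_log_prob(f"In the bathroom there is a {c}.")
          for c in candidates}
print(max(scores, key=scores.get))  # the prior should favor "shower curtain"
```

In practice this prior would be combined with a perceptual model's likelihoods rather than used alone, but the snippet shows where the "plates are found in kitchens" style of knowledge enters the computation.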

A Physical Theory For When the Brain Performs Best

Early critiques pointed out that proving a network was near the critical point required improved statistical tests. The field responded constructively, and this type of objection is rarely heard these days. More recently, some work has shown that what was previously considered a signature of criticality might also be the result of random processes. Researchers are still investigating that possibility, but many of them have already proposed new criteria for distinguishing between the apparent criticality of random noise and the true criticality of collective interactions among neurons.
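As a toy illustration of the signature at issue, the sketch below simulates avalanches in a simple branching process: at branching ratio 1 (the critical point) avalanche sizes are heavy-tailed, roughly power-law, while subcritical dynamics produce few large events. All parameters are illustrative, not drawn from any cited study.

```python
# Branching-process toy model of neuronal avalanches. Each active unit
# triggers offspring with mean count sigma; sigma = 1 is the critical point.
import random
from collections import Counter

def avalanche_size(sigma, max_size=10_000):
    """Total number of events in one avalanche of the branching process."""
    active, size = 1, 0
    while active and size < max_size:
        size += active
        # Two Bernoulli(sigma/2) offspring per unit -> mean offspring = sigma.
        active = sum(1 for _ in range(active) for _ in range(2)
                     if random.random() < sigma / 2)
    return size

for sigma in (0.8, 1.0):  # subcritical vs. critical
    sizes = Counter(avalanche_size(sigma) for _ in range(10_000))
    big = sum(count for s, count in sizes.items() if s >= 100)
    print(f"sigma={sigma}: avalanches with >=100 events: {big}")
```

The catch the critiques point to is that heavy-tailed size distributions alone are a weak test: plotting the full distribution and checking scaling relations, as the newer criteria demand, separates genuine collective criticality from look-alike random processes.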

Meanwhile, over the past 20 years, research in this area has steadily become more visible, and the breadth of methods being used to assess criticality has also grown. The biggest questions now focus on how operating near the critical point affects cognition, and how external inputs can drive a network to move around the critical point. Ideas about criticality have also begun to spread beyond neuroscience. Citing some of the original papers on criticality in living neural networks, engineers have shown that self-organized networks of atomic switches can be made to operate near the critical point so that they compute many functions optimally. The deep learning community has also begun to study whether operating near the critical point improves artificial neural networks.

The critical brain hypothesis may yet prove to be wrong, or incomplete, although current evidence does support it. Either way, the understanding it provides is generating an avalanche of questions and answers that tell us much more about the brain — and computing generally — than we knew before.
