First supernova detected, confirmed, classified and shared by AI

A fully automated process, including a brand-new artificial intelligence (AI) tool, has successfully detected, identified and classified its first supernova.

Developed by a team led by Northwestern University, the new system automates the entire search for new supernovae across the night sky, effectively removing humans from the process. Not only does this rapidly accelerate the process of analyzing and classifying new supernova candidates, it also bypasses human error.

The team alerted the astronomical community to the launch and success of the new tool, called the Bright Transient Survey Bot (BTSbot), this week. In the past six years, humans have spent an estimated total of 2,200 hours visually inspecting and classifying supernova candidates. With the new tool now officially online, researchers can redirect this precious time toward other responsibilities in order to accelerate the pace of discovery.

Could AI communicate with aliens better than we could?

Consider the potential problems. Number one would be that any potential aliens we encounter won’t be speaking a human language. Number two would be the lack of knowledge about the aliens’ culture or sociology — even if we could translate, we might not understand what relevance it has to their cultural touchstones.

Eamonn Kerins, an astrophysicist from the Jodrell Bank Centre for Astrophysics at the University of Manchester in the U.K., thinks that the aliens themselves might recognize these limitations and opt to do some of the heavy lifting for us by making their message as simple as possible.

“One might hope that aliens who want to establish contact might be attempting to make their signal as universally understandable as possible,” said Kerins in a Zoom interview. “Maybe it’s something as basic as a mathematical sequence, and already that conveys the one message that perhaps they hoped to send in the first place, which is that we’re here, you’re not alone.”

Elon Musk Unveils X.ai: A Game-Changer For Investors

The richest man in the world is building a super-intelligent AI to understand the true nature of the universe. This is what the project means for investors.

Elon Musk held a Twitter Spaces event in early July to reveal X.ai, his newest AI business. X.ai researchers will focus on science, while also building applications for enterprises and consumers.

To participate, investors should continue to buy Arista Networks (ANET).

Visual Question Answering with Frozen Large Language Models

In this article we’ll use a Q-Former, a technique for bridging computer vision and natural language models, to create a visual question answering system. We’ll go over the necessary theory, following the BLIP-2 paper, then implement a system which can be used to talk with a large language model about an image.

Who is this useful for? Data scientists interested in computer vision, natural language processing, and multimodal modeling.

How advanced is this post? Intermediate. You might struggle if you don’t have some experience in both computer vision and natural language processing.
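
Before diving into the theory, here's a minimal sketch of the kind of system the article builds, using the Hugging Face transformers implementation of BLIP-2. Note that this loads a pretrained Salesforce checkpoint rather than assembling the Q-Former from scratch as the article goes on to do, and the image URL is just a placeholder:

```python
# A minimal sketch of BLIP-2 visual question answering via Hugging Face
# transformers (pip install transformers pillow requests). This uses a
# pretrained checkpoint rather than building the Q-Former from scratch.
import requests
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration

checkpoint = "Salesforce/blip2-opt-2.7b"  # one published BLIP-2 checkpoint
processor = Blip2Processor.from_pretrained(checkpoint)
model = Blip2ForConditionalGeneration.from_pretrained(checkpoint)

# Fetch an example image (placeholder URL; substitute any image).
url = "https://storage.googleapis.com/sfr-vision-language-research/LAVIS/assets/merlion.png"
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")

# BLIP-2 uses a "Question: ... Answer:" prompt format for VQA.
prompt = "Question: What is in this picture? Answer:"
inputs = processor(images=image, text=prompt, return_tensors="pt")

# The frozen image encoder and frozen LLM are bridged by the trained Q-Former.
out = model.generate(**inputs, max_new_tokens=20)
print(processor.decode(out[0], skip_special_tokens=True).strip())
```

The key design point, which the theory section below unpacks, is that only the Q-Former is trained; both the vision encoder and the large language model stay frozen.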

Multimodality and Large Multimodal Models (LMMs)

For a long time, each ML model operated in one data mode – text (translation, language modeling), image (object detection, image classification), or audio (speech recognition).

However, natural intelligence is not limited to just a single modality. Humans can read and write text. We can see images and watch videos. We listen to music to relax and watch out for strange noises to detect danger. Being able to work with multimodal data is essential for us or any AI to operate in the real world.

OpenAI noted in their GPT-4V system card that “incorporating additional modalities (such as image inputs) into LLMs is viewed by some as a key frontier in AI research and development.”

New cyber algorithm shuts down malicious robotic attack

Australian researchers have designed an algorithm that can intercept a man-in-the-middle (MitM) cyberattack on an unmanned military robot and shut it down in seconds.

In an experiment using deep learning to simulate the behavior of the human brain, artificial intelligence experts from Charles Sturt University and the University of South Australia (UniSA) trained the robot's operating system to learn the signature of a MitM eavesdropping cyberattack. This is where attackers interrupt an existing conversation or data transfer.
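
The article doesn't publish the team's architecture, so the sketch below is only a generic illustration of the underlying idea: train a classifier to separate normal traffic from MitM-like traffic using per-packet features. The features, the small scikit-learn neural network, and the synthetic data are all illustrative stand-ins, not the study's actual system:

```python
# A generic illustration (not the study's system): train a small neural
# network to flag MitM-like traffic from per-packet features. The features,
# model, and data below are synthetic stand-ins for demonstration only.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)

# Toy features: [inter-arrival time (ms), packet size (bytes), round-trip delay (ms)].
# An interception proxy tends to add latency and jitter, shifting these distributions.
normal = rng.normal(loc=[10.0, 512.0, 20.0], scale=[2.0, 64.0, 3.0], size=(5000, 3))
attack = rng.normal(loc=[14.0, 512.0, 35.0], scale=[4.0, 64.0, 8.0], size=(5000, 3))

X = np.vstack([normal, attack])
y = np.array([0] * len(normal) + [1] * len(attack))  # 1 = suspected MitM

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y
)

# Scale features, then fit a small multilayer perceptron.
clf = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0),
)
clf.fit(X_train, y_train)

print(classification_report(y_test, clf.predict(X_test), target_names=["normal", "mitm"]))
```

In a deployed system, detection would of course run on live traffic and trigger a shutdown response, as described below, rather than printing an offline evaluation report.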

The algorithm, tested in real time on a replica of a United States Army combat ground vehicle, was 99% successful in preventing a malicious attack, with a false-positive rate of less than 2%.
