Could AI communicate with aliens better than we could?

Consider the potential problems. First, any aliens we encounter won't be speaking a human language. Second, we lack knowledge of the aliens' culture or sociology; even if we could translate their message, we might not understand what relevance it has to their cultural touchstones.

Eamonn Kerins, an astrophysicist from the Jodrell Bank Centre for Astrophysics at the University of Manchester in the U.K., thinks that the aliens themselves might recognize these limitations and opt to do some of the heavy lifting for us by making their message as simple as possible.

“One might hope that aliens who want to establish contact might be attempting to make their signal as universally understandable as possible,” said Kerins in a Zoom interview. “Maybe it’s something as basic as a mathematical sequence, and already that conveys the one message that perhaps they hoped to send in the first place, which is that we’re here, you’re not alone.”
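Kerins's "mathematical sequence" idea can be made concrete with a toy sketch: a transmission whose pulses are spaced by successive prime numbers is easy for any mathematically literate receiver to distinguish from natural noise, with no shared language required. The code below is purely illustrative; the article doesn't propose any specific sequence or encoding.

```python
# Toy illustration: encode a "we're here" beacon as pulses at prime-numbered
# intervals, then check that the received gaps reproduce the primes.
# Purely illustrative; not from the article.

def primes(n):
    """Return the first n prime numbers by trial division."""
    found, candidate = [], 2
    while len(found) < n:
        if all(candidate % p for p in found):
            found.append(candidate)
        candidate += 1
    return found

def encode_beacon(n_pulses=10):
    """Pulse times spaced by successive primes: 2, 3, 5, 7, ..."""
    times, t = [], 0
    for gap in primes(n_pulses):
        t += gap
        times.append(t)
    return times

def looks_artificial(pulse_times):
    """A receiver's check: do the gaps between pulses match the primes?"""
    gaps = [b - a for a, b in zip([0] + pulse_times[:-1], pulse_times)]
    return gaps == primes(len(gaps))

signal = encode_beacon()
print(signal)                    # [2, 5, 10, 17, 28, 41, 58, 77, 100, 129]
print(looks_artificial(signal))  # True
```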

Elon Musk Unveils X.ai: A Game-Changer For Investors

The richest man in the world is building a super-intelligent AI to understand the true nature of the universe. This is what the project means for investors.

Elon Musk held a Twitter Spaces event in early July to reveal X.ai, his newest AI business. X.ai researchers will focus on science, while also building applications for enterprises and consumers.

To participate, investors should continue to buy Arista Networks (ANET).

Visual Question Answering with Frozen Large Language Models

In this article we’ll use a Q-Former, a technique for bridging computer vision and natural language models, to create a visual question answering system. We’ll go over the necessary theory, following the BLIP-2 paper, then implement a system which can be used to talk with a large language model about an image.

Who is this useful for? Data scientists interested in computer vision, natural language processing, and multimodal modeling.

How advanced is this post? Intermediate. You might struggle if you don’t have some experience in both computer vision and natural language processing.
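As a preview of where the article ends up, here is a minimal sketch of the same idea using the pretrained BLIP-2 checkpoint that Salesforce publishes on Hugging Face rather than a from-scratch Q-Former; the checkpoint name, example image, and prompt are illustrative assumptions, not details from the article.

```python
# Minimal visual question answering with a pretrained BLIP-2 model via
# Hugging Face transformers. Checkpoint, image URL, and prompt are
# illustrative choices.
import requests
import torch
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration

device = "cuda" if torch.cuda.is_available() else "cpu"
processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b").to(device)

# Any RGB image works; this COCO validation image is just an example.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")

# The Q-Former compresses the frozen vision encoder's output into a small set
# of query tokens that the frozen language model reads alongside the prompt.
prompt = "Question: what animals are in the picture? Answer:"
inputs = processor(images=image, text=prompt, return_tensors="pt").to(device)

generated_ids = model.generate(**inputs, max_new_tokens=20)
print(processor.decode(generated_ids[0], skip_special_tokens=True).strip())
```

Architecturally, the point of BLIP-2 is that only the Q-Former is trained; the vision encoder and the language model stay frozen, which is what makes bridging the two modalities cheap.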

Multimodality and Large Multimodal Models (LMMs)

For a long time, each machine-learning model operated on a single data modality: text (translation, language modeling), images (object detection, image classification), or audio (speech recognition).

However, natural intelligence is not limited to just a single modality. Humans can read and write text. We can see images and watch videos. We listen to music to relax and watch out for strange noises to detect danger. Being able to work with multimodal data is essential for us or any AI to operate in the real world.

OpenAI noted in their GPT-4V system card that “incorporating additional modalities (such as image inputs) into LLMs is viewed by some as a key frontier in AI research and development.”

New cyber algorithm shuts down malicious robotic attack

Australian researchers have designed an algorithm that can intercept a man-in-the-middle (MitM) cyberattack on an unmanned military robot and shut it down in seconds.

In an experiment using deep learning to simulate the behavior of the human brain, artificial intelligence experts from Charles Sturt University and the University of South Australia (UniSA) trained the robot’s operating system to learn the signature of a MitM eavesdropping cyberattack, in which attackers interrupt an existing conversation or data transfer.

The algorithm, tested in real time on a replica of a United States army combat ground vehicle, was 99% successful in preventing a malicious attack, with false-positive rates of less than 2% further validating the system’s effectiveness.
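The article doesn't spell out the researchers' architecture, but the general recipe, learning what normal robot telemetry traffic looks like and flagging the statistical deviations that an interception introduces, can be sketched as a small supervised classifier over traffic features. Everything below (the features, the synthetic data, the model size) is a hypothetical illustration, not the published system.

```python
# Hypothetical illustration: train a small neural network to separate normal
# robot telemetry traffic from traffic routed through a man-in-the-middle
# relay. Features, data, and model are invented for the sketch.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
n = 2000

def sample_traffic(mitm: bool, size: int) -> np.ndarray:
    """Synthetic per-window features: [mean inter-packet gap (ms),
    gap jitter (ms), mean round-trip time (ms), retransmission rate]."""
    if mitm:
        # A relay in the path adds latency, jitter, and retransmissions.
        gap, jitter = rng.normal(22, 4, size), rng.normal(6, 2, size)
        rtt, retx = rng.normal(55, 10, size), rng.normal(0.04, 0.01, size)
    else:
        gap, jitter = rng.normal(20, 2, size), rng.normal(2, 0.5, size)
        rtt, retx = rng.normal(30, 5, size), rng.normal(0.005, 0.002, size)
    return np.column_stack([gap, jitter, rtt, retx])

X = np.vstack([sample_traffic(False, n), sample_traffic(True, n)])
y = np.concatenate([np.zeros(n), np.ones(n)])  # 1 = attack window

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```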

Unmanned and unbothered: Autonomous intelligent oceanic exploration is upon us

The ocean has always been a force to be reckoned with, its seemingly limitless blue waters difficult to understand and to traverse. Past innovations such as deep-sea submersibles and ocean-observing satellites have helped illuminate some of its wonders, though many questions remain.

These questions are closer to being answered thanks to the development of the Intelligent Swift Ocean Observing System (ISOOS). Using this system, targeted regions of the ocean can be mapped in three dimensions, allowing more data to be gathered more safely, quickly, and efficiently than existing technologies allow.

Researchers published their results in Ocean-Land-Atmosphere Research.

AI just got 100-fold more energy efficient

Forget the cloud.

Northwestern University engineers have developed a new nanoelectronic device that can perform accurate machine-learning classification tasks in the most energy-efficient manner yet. Using 100-fold less energy than current technologies, the device can crunch large amounts of data and perform artificial intelligence (AI) tasks in real time without beaming data to the cloud for analysis.

With its tiny footprint, ultra-low power consumption, and no lag in delivering analyses, the device is ideal for direct incorporation into wearable electronics (like smart watches and fitness trackers) for real-time data processing and near-instant diagnostics.