
An international team of scientists, including researchers from the University of Cambridge, has launched a new research collaboration that will leverage the same technology behind ChatGPT to build an AI-powered tool for scientific discovery.

While ChatGPT deals in words and sentences, the team’s AI will learn from numerical data and physics simulations from across scientific fields to aid scientists in modeling everything from supergiant stars to the Earth’s climate.

The team launched the initiative, called Polymathic AI, earlier this week, alongside the publication of a series of related papers on the arXiv open access repository.

Researchers at the Department of Energy’s SLAC National Accelerator Laboratory have demonstrated a new approach for peering deeper into the complex behavior of materials. The team harnessed the power of machine learning to interpret coherent excitations, the collective swinging of atomic spins within a system.

This groundbreaking research, published recently in Nature Communications, could make experiments more efficient by providing real-time guidance to researchers during experiments. It is part of a project led by Howard University, including researchers at SLAC and Northeastern University, to use machine learning to accelerate materials research.

The team created this new data-driven tool using “neural implicit representations,” a machine learning development used in computer vision and across different scientific fields such as medical imaging, particle physics and cryo-electron microscopy. This tool can swiftly and accurately derive unknown parameters from data, automating a procedure that, until now, required significant human intervention.
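For intuition, a neural implicit representation is simply a small network trained to map coordinates to signal values, so the signal is stored in the network’s weights and can be queried at any point. The sketch below (purely illustrative, not the team’s actual tool; all sizes and the 1-D sine “signal” are assumptions) fits a tiny NumPy MLP to a 1-D signal by gradient descent.

```python
import numpy as np

rng = np.random.default_rng(42)

# "Implicit representation": learn f(coordinate) -> signal value.
# Illustrative 1-D signal; the real use case is experimental data.
x = np.linspace(-1.0, 1.0, 64).reshape(-1, 1)
y = np.sin(np.pi * x)

# Tiny one-hidden-layer MLP (sizes chosen arbitrarily for the demo)
W1 = rng.normal(scale=0.5, size=(1, 32)); b1 = np.zeros(32)
W2 = rng.normal(scale=0.5, size=(32, 1)); b2 = np.zeros(1)

lr = 0.1
for step in range(3000):
    h = np.tanh(x @ W1 + b1)        # hidden activations
    pred = h @ W2 + b2              # network's value at each coordinate
    err = pred - y
    loss = float((err ** 2).mean())  # mean squared error

    # Manual backpropagation through the two layers
    dpred = 2 * err / len(x)
    dW2 = h.T @ dpred; db2 = dpred.sum(axis=0)
    dh = (dpred @ W2.T) * (1 - h ** 2)   # tanh derivative
    dW1 = x.T @ dh; db1 = dh.sum(axis=0)

    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

final_loss = loss
print(f"final MSE: {final_loss:.4f}")
```

Once trained, the weights *are* the representation: evaluating the network at new coordinates reconstructs the signal, which is what lets such models stand in for expensive simulations or fill in unmeasured parameters.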

Large Language Models (LLMs) such as ChatGPT and Bard have taken the world by storm this year, with companies investing millions to develop these AI tools and some leading AI chatbot makers now valued in the billions.

These LLMs, which are increasingly used within AI chatbots, scrape vast amounts of information from across the Internet to learn and to inform the answers they provide to user-specified requests, known as “prompts.”

However, computer scientists from the AI security start-up Mindgard and Lancaster University in the UK have demonstrated that chunks of these LLMs can be copied in less than a week for as little as $50, and the information gained can be used to launch targeted attacks.

Beware of the new Xenomorph Android banking trojan variant.

Its Automatic Transfer System can initiate transactions, access balances, and even transfer funds – all without your knowledge.

A new variant of the Xenomorph Banking Trojan has been uncovered, targeting 35+ U.S. financial institutions.

A fully automated process, including a brand-new artificial intelligence (AI) tool, has successfully detected, identified and classified its first supernova.

Developed by a team led by Northwestern University, the new system automates the entire search for new supernovae across the night sky, effectively removing humans from the process. Not only does this rapidly accelerate the analysis and classification of new supernova candidates, it also bypasses the need for human verification.

The team alerted the astronomical community to the launch and success of the new tool, called the Bright Transient Survey Bot (BTSbot), this week. In the past six years, humans have spent an estimated total of 2,200 hours visually inspecting and classifying supernova candidates. With the new tool now officially online, researchers can redirect this precious time toward other responsibilities in order to accelerate the pace of discovery.

Consider the potential problems. Number one would be that any potential aliens we encounter won’t be speaking a human language. Number two would be the lack of knowledge about the aliens’ culture or sociology — even if we could translate, we might not understand what relevance it has to their cultural touchstones.

Eamonn Kerins, an astrophysicist from the Jodrell Bank Centre for Astrophysics at the University of Manchester in the U.K., thinks that the aliens themselves might recognize these limitations and opt to do some of the heavy lifting for us by making their message as simple as possible.

“One might hope that aliens who want to establish contact might be attempting to make their signal as universally understandable as possible,” said Kerins in a Zoom interview. “Maybe it’s something as basic as a mathematical sequence, and already that conveys the one message that perhaps they hoped to send in the first place, which is that we’re here, you’re not alone.”

The richest man in the world is building a super-intelligent AI to understand the true nature of the universe. This is what the project means for investors.

Elon Musk held a Twitter Spaces event in early July to reveal X.ai, his newest AI business. X.ai researchers will focus on science, while also building applications for enterprises and consumers.

To participate, investors should continue to buy Arista Networks (ANET).

In this article, we’ll use a Q-Former, a technique for bridging computer vision and natural language models, to create a visual question answering system. We’ll cover the necessary theory, following the BLIP-2 paper, then implement a system that can be used to talk with a large language model about an image.
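As a taste of the core mechanism before the full implementation: the Q-Former uses a small set of learned query vectors that cross-attend to frozen image patch features, producing a fixed-length visual summary regardless of how many patches the image encoder emits. The NumPy sketch below shows just that cross-attention step; all dimensions and weight matrices are illustrative stand-ins, not the BLIP-2 weights.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def qformer_cross_attention(queries, image_feats, Wq, Wk, Wv):
    """Learned queries attend over frozen image features.

    queries:     (num_queries, d_q)   -- learned, image-independent
    image_feats: (num_patches, d_img) -- from a frozen vision encoder
    Returns a (num_queries, d_attn) summary: fixed length no matter
    how many patches the image produced.
    """
    Q = queries @ Wq          # (num_queries, d_attn)
    K = image_feats @ Wk      # (num_patches, d_attn)
    V = image_feats @ Wv      # (num_patches, d_attn)
    scores = Q @ K.T / np.sqrt(Q.shape[-1])   # (num_queries, num_patches)
    attn = softmax(scores, axis=-1)
    return attn @ V

# Toy sizes (assumptions for the demo): 8 queries, 196 image patches
rng = np.random.default_rng(0)
d_q, d_img, d_attn = 32, 64, 32
queries = rng.normal(size=(8, d_q))
image_feats = rng.normal(size=(196, d_img))
Wq = rng.normal(size=(d_q, d_attn))
Wk = rng.normal(size=(d_img, d_attn))
Wv = rng.normal(size=(d_img, d_attn))

out = qformer_cross_attention(queries, image_feats, Wq, Wk, Wv)
print(out.shape)  # (8, 32)
```

In BLIP-2 the resulting fixed-length query outputs are projected into the language model’s embedding space and prepended as a visual prefix, which is what lets a frozen LLM “see” the image.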

Who is this useful for? Data scientists interested in computer vision, natural language processing, and multimodal modeling.

How advanced is this post? Intermediate. You might struggle if you don’t have some experience in both computer vision and natural language processing.