
What if “looking your age” refers not to your face, but to your chest? Osaka Metropolitan University scientists have developed an advanced artificial intelligence (AI) model that uses chest radiographs to accurately estimate a patient’s chronological age. More importantly, when the estimated age diverges from the patient’s actual age, that gap can signal an association with chronic disease.

These findings mark a leap forward, paving the way for improved early disease detection and intervention. The results are published in The Lancet Healthy Longevity.

The research team, led by graduate student Yasuhito Mitsuyama and Dr. Daiju Ueda from the Department of Diagnostic and Interventional Radiology at the Graduate School of Medicine, Osaka Metropolitan University, first constructed a deep learning-based AI model to estimate age from chest radiographs of healthy individuals.
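As a rough illustration of how such a model is typically set up, the sketch below fine-tunes a standard convolutional backbone to regress age directly from a radiograph. The backbone choice, loss, and training settings are assumptions for illustration only, not the team’s published architecture.

```python
# Illustrative sketch only -- not the authors' published model.
# Assumes a ResNet backbone adapted to regress chronological age
# from single-channel chest radiographs.
import torch
import torch.nn as nn
from torchvision import models

class ChestXrayAgeRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = models.resnet50(weights=None)
        # Chest radiographs are grayscale, so accept a single input channel.
        self.backbone.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2,
                                        padding=3, bias=False)
        # Replace the classification head with one regression output (age in years).
        self.backbone.fc = nn.Linear(self.backbone.fc.in_features, 1)

    def forward(self, x):
        return self.backbone(x).squeeze(-1)

model = ChestXrayAgeRegressor()
loss_fn = nn.L1Loss()  # mean absolute error between predicted and true age
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One hypothetical training step on a placeholder batch of radiographs.
images = torch.randn(8, 1, 224, 224)   # stand-in images
true_ages = torch.rand(8) * 80 + 10    # stand-in ages between 10 and 90
optimizer.zero_grad()
pred_ages = model(images)
loss = loss_fn(pred_ages, true_ages)
loss.backward()
optimizer.step()
```

The difference between a model’s predicted age and a patient’s actual age is the kind of “age gap” the study then examines for links to chronic disease.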

One of Google’s AI units is using generative AI to develop at least 21 different tools for life advice, planning and tutoring, The New York Times reported Wednesday.

Google’s DeepMind has become the “nimble, fast-paced” standard-bearer for the company’s AI efforts, as CNBC previously reported, and is behind the development of the tools, the Times reported.

News of the tools’ development comes after Google’s own AI safety experts reportedly presented a slide deck to executives in December warning that users who take life advice from AI tools could experience “diminished health and well-being” and a “loss of agency,” per the Times.

Neuroscientists have worked for decades to decode what people are seeing, hearing or thinking from brain activity alone. In 2012 a team that included the new study’s senior author—cognitive neuroscientist Robert Knight of the University of California, Berkeley—became the first to successfully reconstruct audio recordings of words participants heard while wearing implanted electrodes. Others have since used similar techniques to reproduce recently viewed or imagined pictures from participants’ brain scans, including human faces and landscape photographs. But the recent PLOS Biology paper by Knight and his colleagues is the first to suggest that scientists can eavesdrop on the brain to synthesize music.

“These exciting findings build on previous work to reconstruct plain speech from brain activity,” says Shailee Jain, a neuroscientist at the University of California, San Francisco, who was not involved in the new study. “Now we’re able to really dig into the brain to unearth the sustenance of sound.”

To turn brain activity data into musical sound, the researchers in the study trained an artificial intelligence model to decipher data captured from thousands of electrodes attached to the participants as they listened to a Pink Floyd song during surgery.
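The underlying recipe, regressing a time-aligned audio spectrogram from electrode features, can be sketched as follows. The ridge-regression decoder, the synthetic placeholder data, and the correlation metric are illustrative assumptions, not the study’s exact pipeline.

```python
# Minimal decoding sketch: regress an audio spectrogram from electrode activity.
# The data here are random placeholders standing in for real recordings.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_timepoints, n_electrodes, n_freq_bins = 5000, 128, 32
neural_features = rng.standard_normal((n_timepoints, n_electrodes))  # e.g., per-electrode high-gamma power
spectrogram = rng.standard_normal((n_timepoints, n_freq_bins))       # song spectrogram at the same timepoints

X_train, X_test, y_train, y_test = train_test_split(
    neural_features, spectrogram, test_size=0.2, shuffle=False)

# One regularized linear model maps electrode activity to all spectrogram bins at once.
decoder = Ridge(alpha=1.0)
decoder.fit(X_train, y_train)
reconstructed = decoder.predict(X_test)

# Bin-wise correlation gives a rough measure of decoding quality; an inverse
# spectrogram transform (e.g., Griffin-Lim) would be needed to turn the
# prediction back into audible sound.
corr = np.mean([np.corrcoef(reconstructed[:, k], y_test[:, k])[0, 1]
                for k in range(n_freq_bins)])
print(f"mean bin-wise correlation: {corr:.3f}")
```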

The robot’s memory is so large that it can memorise all Jeppesen navigation charts, a task that is impossible for human pilots.

Both artificial intelligence (AI) and robotics have made significant strides in recent years, meaning most human jobs could soon be overtaken by technology — on the ground and even in the skies above us.

A team of engineers and researchers from the Korea Advanced Institute of Science & Technology (KAIST) is currently developing a humanoid robot that can fly aircraft without needing to modify the cockpit.

UC San Diego’s Q-MEEN-C is developing brain-like computers by mimicking neurons and synapses in quantum materials. Recent discoveries about non-local interactions represent a critical step toward more efficient AI hardware that could revolutionize artificial intelligence technology.

We often believe that computers are more efficient than humans. After all, computers can solve complex math equations in an instant and recall names that we might forget. However, human brains can process intricate layers of information rapidly, accurately, and with almost no energy input. Recognizing a face after seeing it only once or distinguishing a mountain from an ocean are examples of such tasks. These seemingly simple human functions require considerable processing and energy from computers, and even then, the results may vary in accuracy.

Smaller language models can be built on a billion parameters or less, still quite large but much smaller than the foundational LLMs behind ChatGPT and Bard. They are pre-trained to understand vocabulary and human language, so the incremental cost of customizing them with corporate and industry-specific data is vastly lower. There are several pre-trained options that can be customized internally, including models from AI21 and Reka, as well as open-source LLMs like Alpaca and Vicuna.

Smaller language models aren’t just more cost-efficient; they’re often far more accurate, because instead of being trained on all publicly available data, good and bad alike, they are trained and optimized on carefully vetted data that addresses the exact use cases a business cares about.

That doesn’t mean they’re limited to internal corporate data. Smaller language models can incorporate third-party data about the economy, commodity pricing, the weather, or whatever data sets are needed, and combine it with a company’s own proprietary data. These data sources are widely available from data service providers who ensure the information is current, accurate, and clean.
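As a loose sketch of what customizing such a model in-house can look like, the example below fine-tunes a small, openly available causal language model on a handful of vetted internal documents using the Hugging Face transformers library. The model checkpoint, the tiny corpus, and the training settings are placeholders, not anything prescribed here.

```python
# Hedged sketch of adapting a small pre-trained language model to domain data.
# The checkpoint name and the in-memory "corpus" below are placeholders.
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "EleutherAI/pythia-160m"   # any small causal LM checkpoint works here
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Carefully vetted, business-specific documents stand in for the training corpus.
corpus = Dataset.from_dict({"text": [
    "Q3 commodity pricing summary for the procurement team ...",
    "Internal policy: how to escalate a supplier quality incident ...",
]})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = corpus.map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="slm-finetune",
                           per_device_train_batch_size=2,
                           num_train_epochs=1),
    train_dataset=tokenized,
    # Causal LM objective: the collator builds labels from the inputs.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```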

Google today is rolling out a few new updates to its nearly three-month-old Search Generative Experience (SGE), the company’s AI-powered conversational mode in Search, with the goal of helping users better learn and make sense of the information they discover on the web. The features include tools for viewing definitions of unfamiliar terms, improvements that help you understand code across programming languages, and a feature that lets you tap into the AI power of SGE while you’re browsing.

The company explains that these improvements aim to help people better understand complicated concepts or complex topics, boost their coding skills and more.

One of the new features will let you hover over certain words to preview their definitions and see related images or diagrams, which you can then tap to learn more. This feature will be available across Google’s AI-generated responses to questions on subjects like STEM, economics and history, where you may encounter terms you don’t understand or concepts you want to explore more deeply.