
We’re just at the beginning of an AI revolution that will redefine how we live and work. In particular, deep neural networks (DNNs) have revolutionized the field of AI and are increasingly gaining prominence with the advent of foundation models and generative AI. But running these models on traditional digital computing architectures limits their achievable performance and energy efficiency. There has been progress in developing hardware specifically for AI inference, but many of these architectures physically split the memory and processing units. This means the AI models are typically stored in a discrete memory location, and computational tasks require constantly shuffling data between the memory and processing units. This process slows down computation and limits the maximum achievable energy efficiency.

Dunno if anyone has already posted this.


The chip showcases critical building blocks of a scalable mixed-signal architecture.

The researchers also found a spot in the brain’s temporal lobe that reacted when volunteers heard the 16th notes of the song’s guitar groove. They proposed that this particular area might be involved in our perception of rhythm.

The findings offer a first step toward creating more expressive devices to assist people who can’t speak. Over the past few years, scientists have made major breakthroughs in extracting words from the electrical signals produced by the brains of people with muscle paralysis when they attempt to speak.

But a significant amount of the information conveyed through speech comes from what linguists call “prosodic” elements, like tone — “the things that make us a lively speaker and not a robot,” Dr. Schalk said.

Venture capital firm a16z has rebuilt a research paper's simulation as "AI Town" and released the code. AI Town uses a language model to simulate a Sims-like virtual world in which all characters can flexibly pursue their motives and make decisions based on prompts.

In April, a team of researchers from Google and Stanford published the "Smallville" research paper, in which OpenAI's GPT-3.5 simulates AI agents in a small digital town based solely on prompts.

Each character has an occupation, personality, and relationships with other characters, which are specified in an initial description. With further prompts, the AI agents begin to observe, plan, and make decisions.
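The observe–plan–decide cycle described above can be sketched as a simple prompt loop. This is an illustrative sketch, not the paper's code: the `llm` function below is a hypothetical stand-in for a call to a model such as GPT-3.5, and the class and prompt wording are invented for illustration.

```python
# Minimal sketch of a Smallville-style agent loop (illustrative only).
def llm(prompt: str) -> str:
    # Placeholder: a real agent would send this prompt to a language model.
    return f"decision based on: {prompt[:40]}..."

class Agent:
    def __init__(self, name, occupation, personality):
        # The initial description specifies occupation, personality, etc.
        self.description = f"{name} is a {occupation}. Personality: {personality}."
        self.memory = []  # observations accumulate here

    def observe(self, event: str):
        self.memory.append(event)

    def plan(self) -> str:
        # Feed the character description plus recent observations back as a prompt.
        context = " ".join(self.memory[-5:])
        return llm(f"{self.description} Recent events: {context} What next?")

agent = Agent("Isabella", "cafe owner", "friendly and organized")
agent.observe("A customer enters the cafe.")
print(agent.plan())
```

The key design point is that all state lives in natural-language memory and flows back into the next prompt, so no behavior is hard-coded.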

Nvidia (NVDA) will report its second quarter earnings after the closing bell next Wednesday, setting up what will be the AI hype cycle’s biggest test yet. During this AI gold rush, companies around the world looking to profit have turned to Nvidia’s graphics processors to power new AI software and platforms.

Currently, tech firms of all sizes are doing everything they can to get their hands on Nvidia chips. During Tesla’s (TSLA) Q2 earnings call, CEO Elon Musk told analysts that the automaker will take as many Nvidia graphics processors as the company can produce.


Nvidia is widely expected to have a blowout earnings report. A miss could derail the AI hype train.

Called “Pibot,” this humanoid robot integrates large language models to help it fly any aircraft as well as, if not better than, a human pilot.

Researchers at the Korea Advanced Institute of Science & Technology (KAIST) are working to develop a humanoid pilot that can fly an aircraft without modifying the cockpit. Called “Pibot,” the robot has articulated arms and fingers that can interact with flight controls with great precision and dexterity. It also comes with camera “eyes” that help the robot monitor the internal and external conditions of the aircraft while in control.


Source: Korea Herald.

Real robot-pilot.

Researchers in Japan have created an AI model that can estimate your age from a chest X-ray, which could help doctors detect chronic disorders early.

Have you ever wondered why some people look much older than their chronological age? A new study from Japan’s Osaka Metropolitan University (OMU) suggests this could be a sign of a disease they don’t yet know they have.

The study authors developed an AI program that can accurately estimate an individual’s age from a chest X-ray. Unlike previously reported AI programs, which examine radiographs to detect lung anomalies, this model estimates age; the researchers then use that estimate to flag possible underlying ailments.


Many artificial intelligence tools use public data to train their large language models, but now large social media sites are looking for ways to defend against data scraping. The problem is that scraping isn’t currently illegal.

According to Cybernews, data scraping is the automated extraction of data from the output of another program, and it is becoming a big problem for large social media sites like Twitter and Reddit.
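As a minimal illustration of what scraping means in practice, the sketch below pulls paragraph text out of an HTML document using only Python's standard-library parser. The HTML snippet is made up; a real scraper would first download pages over HTTP, which is exactly the behavior sites are trying to block.

```python
from html.parser import HTMLParser

# Toy scraper: extract the text of every <p> tag from an HTML document.
class ParagraphScraper(HTMLParser):
    def __init__(self):
        super().__init__()
        self.in_p = False
        self.paragraphs = []

    def handle_starttag(self, tag, attrs):
        if tag == "p":
            self.in_p = True

    def handle_endtag(self, tag):
        if tag == "p":
            self.in_p = False

    def handle_data(self, data):
        # Keep only non-empty text that appears inside a <p> element.
        if self.in_p and data.strip():
            self.paragraphs.append(data.strip())

html = "<html><body><p>First post.</p><div>nav</div><p>Second post.</p></body></html>"
scraper = ParagraphScraper()
scraper.feed(html)
print(scraper.paragraphs)  # ['First post.', 'Second post.']
```

Note that nothing in this loop asks the site for permission, which is why scraping sits in a legal gray area rather than being outright illegal.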

What if “looking your age” refers not to your face, but to your chest? Osaka Metropolitan University scientists have developed an advanced artificial intelligence (AI) model that utilizes chest radiographs to accurately estimate a patient’s chronological age. More importantly, when there is a disparity, it can signal a correlation with chronic disease.

These findings mark a leap forward, paving the way for improved early disease detection and intervention. The results are published in The Lancet Healthy Longevity.

The research team, led by graduate student Yasuhito Mitsuyama and Dr. Daiju Ueda from the Department of Diagnostic and Interventional Radiology at the Graduate School of Medicine, Osaka Metropolitan University, first constructed a deep learning-based AI model to estimate age from chest radiographs of healthy individuals.
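The study's model is a deep network trained on radiographs of healthy individuals. As a toy illustration of the underlying idea, image-based age regression and the predicted-minus-actual "age gap", the sketch below fits a linear least-squares model on synthetic data with NumPy. This is not the authors' method, and all data here is randomly generated.

```python
import numpy as np

# Toy stand-in for image-based age regression (NOT the study's deep model):
# fit a linear least-squares map from flattened "pixel" features to age.
rng = np.random.default_rng(0)

n_images, n_pixels = 200, 64                 # synthetic 8x8 "radiographs"
true_w = rng.normal(size=n_pixels)

X = rng.normal(size=(n_images, n_pixels))    # flattened pixel intensities
ages = X @ true_w + rng.normal(scale=0.1, size=n_images)  # synthetic ages

# Fit the regression; a deep network replaces this linear map in practice.
w, *_ = np.linalg.lstsq(X, ages, rcond=None)

predicted = X @ w
# The gap between predicted and actual age is the signal of interest:
# a large gap may correlate with chronic disease.
age_gap = predicted - ages
print("mean absolute error:", np.abs(age_gap).mean())
```

The clinical logic mirrors the last two lines: train on healthy subjects so predictions track chronological age, then treat a large age gap in a new patient as a flag worth investigating.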