Anthropic’s Claude 3 Opus has knocked OpenAI’s GPT-4 off the top of the chatbot leaderboard for the first time.

According to the Chatbot Arena leaderboard, Claude 3 Opus now ranks first based on how real people rate chatbot responses, pushing GPT-4 into second place.

The Chatbot Arena is a benchmark platform created by the Large Model System Organization (LMSYS) to compare the performance of large language models. The Arena pits different models against each other in anonymous, randomized battles: users see two responses, vote for the one they prefer, and rankings are computed from those votes. This makes the leaderboard especially informative, because it reflects real user preferences rather than automated test scores.
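To make the ranking mechanism concrete, here is a minimal sketch of an Elo-style rating update computed from pairwise votes, the general approach behind arena-style leaderboards. The model names, the K-factor of 32, and the vote list are illustrative assumptions, not LMSYS's actual data or implementation.

```python
def expected_score(r_a, r_b):
    """Probability that model A beats model B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

def update(ratings, winner, loser, k=32):
    """Shift both ratings toward the observed outcome of one battle."""
    e_win = expected_score(ratings[winner], ratings[loser])
    ratings[winner] += k * (1 - e_win)
    ratings[loser] -= k * (1 - e_win)

# Hypothetical vote log: each tuple is (winner, loser) from one user vote.
ratings = {"model-a": 1000.0, "model-b": 1000.0}
votes = [("model-a", "model-b"), ("model-a", "model-b"), ("model-b", "model-a")]
for winner, loser in votes:
    update(ratings, winner, loser)

print(ratings)  # model-a ends above model-b after winning 2 of 3 battles
```

Because the expected-score term shrinks as a model's lead grows, beating an already lower-rated opponent moves the ratings less than an upset does, which is what lets rankings stabilize from many noisy individual votes.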

A recent study published in the journal PNAS Nexus sheds light on the impact of text-to-image generative AI tools like Midjourney, Stable Diffusion, and DALL-E on the artistic process. The researchers discovered that these AI systems tend to enhance artists’ productivity and lead to more favorable evaluations of their work by peers.

Interestingly, while the average novelty in AI-assisted artwork content declined over time, peak content novelty increased. The findings point to the possibility of “generative synesthesia,” a blending of human creativity and AI capabilities to unlock heightened levels of artistic expression.

The advent of generative AI has sparked a heated debate within artistic communities. While some view these technologies as a threat to the intrinsic human ability to create, others recognize their potential to augment human creativity. The research aimed to unravel whether AI adoption enables artists to produce more creative content and under what conditions it leads to more valuable artistic creations.



Summary: Neural networks, regardless of their complexity or training method, follow a surprisingly uniform path from ignorance to expertise in image classification tasks. Researchers found that neural networks classify images by identifying the same low-dimensional features, such as ears or eyes, debunking the assumption that network learning methods are vastly different.

This finding could pave the way for developing more efficient AI training algorithms, potentially reducing the significant computational resources currently required. The research, grounded in information geometry, hints at a more streamlined future for AI development, where understanding the common learning path of neural networks could lead to cheaper and faster training methods.

Some teens with autism use a different set of eye-movement patterns than their peers without autism while recognizing faces, say researchers at Yale Child Study Center (YCSC).

In a new data analysis, YCSC researchers including James McPartland, PhD, and Jason Griffin, PhD, found teens with autism…


Eye movements are part of the process of telling people apart and could provide information to clinicians about how people with autism process social information differently from non-autistic persons.

This article includes computer-generated images that map internet communities by topic, without specifically naming each one. The research was funded by the US government, which anticipates massive interference in the 2024 elections by "bad actors" using relatively simple AI chatbots.


In an era of super-accelerated technological advancement, the specter of malevolent artificial intelligence (AI) looms large. While AI holds promise for transforming industries and enhancing human life, the potential for abuse poses significant societal risks. Threats include avalanches of misinformation, deepfake videos, voice mimicry, sophisticated phishing scams, inflammatory ethnic and religious rhetoric, and autonomous weapons that make life-and-death decisions without human intervention.

During this election year in the United States, some are worried that bad actor AI will sway the outcomes of hotly contested races. We spoke with Neil Johnson, a professor of physics at George Washington University, about his research that maps out where AI threats originate and how to help keep ourselves safe.

Artificial intelligence (AI) is bringing a new era to healthcare. A large part of its value is the ability to collect and analyze data sets to streamline administrative processes, improve diagnosis accuracy, and optimize treatment regimens.

Now researchers have added antibiotic discovery to that list.

A recent study published in Nature Machine Intelligence by McMaster University and Stanford University researchers introduces SyntheMol, a generative AI model capable of designing new antibiotics to combat drug-resistant bacteria.