
Alice Parker is working on how to mimic disorders such as schizophrenia.

People have expected great things from Alice Parker, who was raised in a family of distinguished scientists and engineers. And Parker, emerita professor of electrical and computer engineering at the University of Southern California, has delivered. She helped develop high-level (behavioral) synthesis, an automated computer design process that transforms a behavioral description of hardware into a model of its logic and memory circuits.
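To make the idea concrete, here is a minimal, illustrative sketch of one core step in behavioral synthesis: ASAP (as-soon-as-possible) scheduling, which assigns each operation in a behavioral dataflow description to a clocked control step. All names are hypothetical, every operation is assumed to take one cycle, and real synthesis tools do far more (resource binding, pipelining, and so on).

```python
# Illustrative ASAP scheduling sketch: one step of high-level synthesis,
# mapping a behavioral dataflow graph onto control steps. Names are
# hypothetical; this is not any particular tool's algorithm.

def asap_schedule(ops):
    """ops: dict mapping op name -> list of predecessor op names.
    Returns dict mapping op name -> control step (1-based), assuming
    every operation takes exactly one clock cycle."""
    step = {}
    remaining = dict(ops)
    while remaining:
        for name, preds in list(remaining.items()):
            if all(p in step for p in preds):
                # Schedule as soon as all of the operation's inputs exist.
                step[name] = 1 + max((step[p] for p in preds), default=0)
                del remaining[name]
    return step

# Behavioral spec for y = (a + b) * (c + d) as a dataflow graph.
dataflow = {"add1": [], "add2": [], "mul": ["add1", "add2"]}
print(asap_schedule(dataflow))  # both adds in step 1, the multiply in step 2
```

The two independent additions land in the same control step and can share a cycle; the multiply must wait for both, which is exactly the kind of decision behavioral synthesis automates.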

Her father, a chemist, was on the team that first synthesized vitamin B1 at the pharmaceutical company Merck in New Jersey. In 1941, her uncle, Edward Wenk Jr., was…

Chief scientist Bill Dally explains the 4 ingredients that brought Nvidia so far.

Nvidia is riding high at the moment. The company has managed to increase the performance of its chips on AI tasks a thousandfold over the past 10 years, it’s raking in money, and it’s reportedly very hard to get your hands on its newest AI-accelerating GPU, the H100.

How did Nvidia get here? The company’s chief scientist, Bill Dally, managed to sum it all up in a single slide during his keynote address last week at Hot Chips 2023, the IEEE symposium on high-performance microprocessors in Silicon Valley. Moore’s Law was a surprisingly small part of Nvidia’s magic, and new number formats a very large part. Put it…
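The point about number formats is easy to illustrate: representing each value in fewer bits cuts the memory and bandwidth cost per value. The sketch below simulates symmetric 8-bit integer quantization of a handful of weights; it is a generic illustration of reduced-precision representation, not Nvidia's actual formats or methods.

```python
# Illustrative sketch of a smaller number format: symmetric linear
# quantization of 32-bit floats to 8-bit integers (1 byte per value
# instead of 4), at the cost of some precision. Not Nvidia's method.

def quantize_int8(values):
    """Map floats into the int8 range [-127, 127] with a shared scale."""
    scale = max(abs(v) for v in values) / 127 or 1.0
    q = [round(v / scale) for v in values]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float values from the quantized integers."""
    return [v * scale for v in q]

weights = [0.5, -1.27, 0.03, 1.0]
q, scale = quantize_int8(weights)
approx = dequantize(q, scale)
print(q)  # each entry now fits in one byte
print([round(v, 3) for v in approx])  # close to the original weights
```

Dropping from 32-bit floats to 8-bit integers quarters the storage and data movement per value, which is why shrinking number formats has been such a large lever for AI accelerators.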

But don’t think about replacing your doctor with a chatbot now, or ever.

Could ChatGPT someday assist doctors in diagnosing patients? It might one day be possible. In a recent study, researchers fed ChatGPT information from fictional patients found in an online medical reference manual to find out how well the chatbot could make clinical decisions such as diagnosing patients and prescribing treatments. The researchers found that ChatGPT was 72 percent accurate in its decisions, although the bot was better at some kinds of clinical tasks than others. It also showed no evidence of bias based on age or gender. Though the study was small and did not use real patient data, the findings point to the…


China has already released over 70 artificial intelligence large language models (LLMs), with more applications being filed every day.

Robin Li, CEO of Baidu, said at an industry event in Beijing that more than 70 LLMs have been released in China, including chatbots from the facial recognition company SenseTime and the AI startups Baichuan Intelligent Technology, Zhipu AI, and MiniMax.

While text-based AI models have been found coordinating amongst themselves and developing a language of their own, communication between image-based models remained an unexplored territory, until now. A group of researchers set out to investigate how well Google DeepMind’s Flamingo and OpenAI’s DALL-E understand each other, and their synergy is impressive.

Despite the closeness of the image captioning and text-to-image generation tasks, they are often studied in isolation from each other; the information exchange between these models had remained an open question no one had looked into. Researchers from LMU Munich, Siemens AG, and the University of Oxford wrote a paper titled ‘Do Flamingo and DALL-E Understand Each Other?’, investigating the communication between image captioning and text-to-image models.

The team proposes a reconstruction task in which Flamingo generates a description for a given image and DALL-E uses this description as input to synthesise a new image. They argue that these models understand each other if the generated image is similar to the given image. Specifically, they studied the relationship between the quality of the image reconstruction and that of the text generation. They found that the captions that lead to better image reconstructions are also the better text descriptions, and vice versa.
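The reconstruction loop above can be sketched in a few lines. The version below uses toy stand-ins for the two models and scores the reconstruction with cosine similarity between feature vectors; the real study calls Flamingo and DALL-E and uses learned image embeddings, so everything here is an illustrative assumption, not the paper's actual pipeline.

```python
# Hedged sketch of the image -> caption -> image reconstruction loop.
# Flamingo and DALL-E are replaced by toy functions; similarity between
# the original and reconstructed image features is scored with cosine
# similarity (the paper uses learned embeddings of real images).
import math

def cosine_similarity(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def toy_captioner(image_features):
    # Stand-in for Flamingo: a real captioner conditions on the image.
    return "a toy caption"

def toy_generator(caption, image_features):
    # Stand-in for DALL-E: pretend it reconstructs the image imperfectly,
    # here by uniformly scaling the original features.
    return [0.9 * x for x in image_features]

original = [0.2, 0.7, 0.1]
caption = toy_captioner(original)
reconstruction = toy_generator(caption, original)
score = cosine_similarity(original, reconstruction)
print(round(score, 3))  # uniform scaling preserves direction, so 1.0
```

In the actual study, a higher reconstruction score for a caption indicates that the caption carried more of the image's information across the model boundary, which is how the authors link caption quality to reconstruction quality.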

Even linguistics experts are largely unable to spot the difference between writing created by artificial intelligence or humans, according to a new study co-authored by a University of South Florida assistant professor.

Research just published in the journal Research Methods in Applied Linguistics revealed that experts from the world’s top linguistic journals could differentiate between AI- and human-generated abstracts less than 39 percent of the time.

“We thought if anybody is going to be able to identify human-produced writing, it should be people in linguistics who’ve spent their careers studying patterns in language and other aspects of human communication,” said Matthew Kessler, a scholar in the USF Department of World Languages.

Would you like to hear more news stories like this one? If so, head over to LifespanNews for more longevity news, science, and advocacy episodes! Visit https://www.youtube.com/lifespannews.

