
The robot guide dog can respond to tugs on a leash.

Researchers have created a robot guide dog that can respond to tugs on a leash, aiming to make life easier for the visually impaired. The team of engineers at Binghamton University’s Computer Science Department in New York State has been developing a robotic seeing-eye dog to improve accessibility for people with visual impairments. Last year, they performed a trick-or-treating exercise with their quadruped robotic dog.

Now, they have demonstrated the robot dog leading a person down a lab hallway, confidently and carefully reacting to directive instructions. Engineers were surprised that throughout the visually impaired…
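To make the leash interaction concrete, here is a minimal, purely illustrative sketch of how a tug on a leash might be mapped to motion commands for a quadruped robot. The force inputs, thresholds, and gains are invented for illustration and are not details of the Binghamton team’s system.

```python
import math

def leash_to_velocity(tug_force_x, tug_force_y, deadband=2.0, max_speed=0.6):
    """Map a leash tug (force components in newtons, robot frame) to a
    simple (forward_speed, turn_rate) command. Values are illustrative."""
    magnitude = math.hypot(tug_force_x, tug_force_y)
    if magnitude < deadband:            # ignore small, unintentional tugs
        return 0.0, 0.0
    heading = math.atan2(tug_force_y, tug_force_x)   # direction of the tug
    forward = min(max_speed, 0.1 * magnitude) * math.cos(heading)
    turn = 1.5 * math.sin(heading)      # stronger sideways tug -> sharper turn
    return forward, turn

# Example: a firm tug to the left and slightly forward
print(leash_to_velocity(4.0, 3.0))
```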



Human decision-making has been the focus of countless neuroscience studies, which try to identify the neural circuits and brain regions that support different types of decisions. Some of these research efforts focus on the choices humans make while gambling and taking risks, yet the neural underpinnings of these choices have not yet been fully elucidated.

Researchers at the University of Louisville carried out a study aimed at better understanding the patterns in neural network communication associated with ‘bad’ decisions made while gambling. Their paper, published in Frontiers in Neuroscience, shows that different types of ‘bad’ decisions made while gambling, namely avoidant and approach decisions, are associated with distinct neural communication patterns.

“Our recent work follows a line of research that examines how humans approach rewarding and punishing situations in the environment,” Brendan Depue and Siraj Lyons, the researchers who carried out the study, told Medical Xpress.

They also recognised that AI itself may exhibit certain biases, and that the settings it was deployed with could dramatically change its output, in extreme cases rendering it unusable. In other words, setting the bots up correctly is a prerequisite to success. At least today.

So, for the time being, I think we’re going to see a rapid rise in human-AI cooperation rather than outright replacement.

However, it’s also difficult to escape the impression that through this cooperation we will be raising our successors and that, in the not-so-distant future, humans will be limited to setting goals for AI to accomplish, while mastering programming languages will be akin to learning Latin.

In a study of more than 2,000 chest X-rays published in Radiology, a journal of the Radiological Society of North America (RSNA), radiologists outperformed AI in accurately identifying the presence and absence of three common lung diseases.

“Chest radiography is a common diagnostic tool, but significant training and experience is required to interpret exams correctly,” said lead researcher Louis L. Plesner, M.D., resident radiologist and Ph.D. fellow in the Department of Radiology at Herlev and Gentofte Hospital in Copenhagen, Denmark.

While commercially available, FDA-approved AI tools exist to assist radiologists, Dr. Plesner said the clinical use of deep-learning-based AI tools for radiological diagnosis is in its infancy.
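Comparisons like this typically come down to per-finding sensitivity (catching disease when it is present) and specificity (ruling it out when it is absent). Below is a minimal sketch of that calculation with invented labels; the numbers are placeholders, not data from the Radiology study.

```python
# Hypothetical illustration of a sensitivity/specificity comparison;
# the labels below are invented, not data from the Radiology study.
def sensitivity_specificity(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp / (tp + fn), tn / (tn + fp)

truth  = [1, 0, 1, 1, 0, 0, 1, 0]   # 1 = disease present on the X-ray
ai     = [1, 1, 1, 0, 0, 1, 1, 0]   # AI flags some healthy cases as positive
reader = [1, 0, 1, 1, 0, 0, 0, 0]   # radiologist misses one positive, no false alarms

print("AI:          sens=%.2f spec=%.2f" % sensitivity_specificity(truth, ai))
print("Radiologist: sens=%.2f spec=%.2f" % sensitivity_specificity(truth, reader))
```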

Machine learning is essential to designing the polymers, Murthy emphasizes, because they must be tailored to the specific gene therapy.

“There’s a tight interplay between the payload and in vivo mechanism of action, and the delivery vehicle needed to bring [the therapy] to that location,” he says. “You can’t have one without the other, so they have to be integrated at an early stage.”

The company hopes to use machine learning to explore the polymer design space, giving it a starting point for polymer design. Subsequently, as the gene therapy moves from the preclinical to the clinical stage, it can use artificial intelligence to tweak the polymer so the therapy works better.
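As a rough illustration of that workflow, the sketch below fits a surrogate model to a small set of “measured” polymer candidates and then ranks a large pool of unexplored candidates by predicted performance. The features, data, and model choice are assumptions made for illustration, not the company’s actual descriptors or pipeline.

```python
# Hypothetical sketch: use a surrogate model to explore a polymer design space.
# Features (e.g., monomer ratio, chain length, charge density) and the scores
# are invented placeholders, not real descriptors or measurements.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# A small set of already-synthesized polymers with "measured" delivery efficiency
measured_features = rng.uniform(0, 1, size=(20, 3))
measured_efficiency = measured_features @ np.array([0.5, 0.3, 0.2]) + rng.normal(0, 0.05, 20)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(measured_features, measured_efficiency)

# Score a large pool of un-synthesized candidates and pick starting points
candidate_pool = rng.uniform(0, 1, size=(5000, 3))
predicted = model.predict(candidate_pool)
top = np.argsort(predicted)[-5:][::-1]
print("Candidate polymers to synthesize first:", top, predicted[top])
```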

Progress update: Our latest AlphaFold model shows significantly improved accuracy and expands coverage beyond proteins to other biological molecules, including ligands.

Since its release in 2020, AlphaFold has revolutionized how proteins and their interactions are understood. Google DeepMind and Isomorphic Labs have been working together to build the foundations of a more powerful AI model that expands coverage beyond just proteins to the full range of biologically-relevant molecules.

Today we’re sharing an update on progress towards the next generation of AlphaFold. Our latest model can now generate predictions for nearly all molecules in the Protein Data Bank (PDB), frequently reaching atomic accuracy.

In a new study, Google DeepMind and colleagues at Isomorphic Labs show early results from a new version of AlphaFold that brings fully automated structure prediction of biological molecules closer to reality.

The Google DeepMind AlphaFold and Isomorphic Labs team today unveiled the latest AlphaFold model. According to the companies, the updated model can now predict the structure of almost any molecule in the Protein Data Bank (PDB), often with atomic accuracy. This development, they say, is an important step towards a better understanding of the complex biological mechanisms within cells.

Since its launch in 2020, AlphaFold has influenced protein structure prediction worldwide. The latest version of the model goes beyond proteins to include a wide range of biologically relevant molecules such as ligands, nucleic acids and post-translational modifications. These structures are critical to understanding biological mechanisms in cells and have been difficult to predict with high accuracy, according to DeepMind.

Even as toddlers, we have an uncanny ability to turn what we learn about the world into concepts. With just a few examples, we form an idea of what makes a “dog” or what it means to “jump” or “skip.” These concepts are effortlessly mixed and matched inside our heads, resulting in a toddler pointing at a prairie dog and screaming, “But that’s not a dog!”

Last week, a team from New York University created an AI model that mimics a toddler’s ability to generalize language learning. In a nutshell, generalization is a sort of flexible thinking that lets us use newly learned words in new contexts—like an older millennial struggling to catch up with Gen Z lingo.

When pitted against adult humans in a language generalization task, the model matched their performance. It also beat GPT-4, the AI model behind ChatGPT.
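The generalization being tested is easiest to see with a toy version of the task: a learner studies a few pseudo-word-to-output mappings plus function words like “repeat,” then must apply a known function word to a brand-new primitive it has only just seen. The pseudo-words and rules below are invented for illustration and are not the stimuli from the NYU study; the snippet shows the task structure, not the meta-learning model itself.

```python
# Toy illustration of a compositional generalization task; the pseudo-words
# and rules are invented, not taken from the NYU study.
PRIMITIVES = {"dax": "RED", "wif": "GREEN", "lug": "BLUE"}   # study examples
MODIFIERS  = {"fep": 3, "blicket": 2}                        # "repeat n times"

def interpret(phrase):
    """Interpret 'PRIMITIVE [MODIFIER ...]' phrases compositionally."""
    words = phrase.split()
    output = [PRIMITIVES[words[0]]]
    for w in words[1:]:
        output = output * MODIFIERS[w]
    return output

# The generalization test: "zup" is a brand-new primitive shown only once...
PRIMITIVES["zup"] = "YELLOW"
# ...yet combining it with a known modifier should work immediately.
print(interpret("zup fep"))   # ['YELLOW', 'YELLOW', 'YELLOW']
```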