ChipNeMo: NVIDIA’s ChatGPT-like AI chatbot for semiconductors

NVIDIA is looking to ramp up chip production in the face of a supply shortage.

NVIDIA, the most profitable chip-making company in the world, has unveiled a custom large language model, built on the same technology that underlies artificial intelligence tools like ChatGPT, which the company has developed for its own internal use.

Trained on NVIDIA’s proprietary data, “ChipNeMo” will generate and optimize software and assist human designers in building semiconductors. Developed by NVIDIA researchers, ChipNeMo is expected to be highly beneficial to the company’s work in graphics processing, artificial intelligence, and other technologies.
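The core technique here is domain adaptation: taking a general-purpose language model and further training it on proprietary, domain-specific text. The sketch below illustrates that idea with the open-source Hugging Face stack; the base model ("gpt2") and the toy chip-design corpus are stand-ins, not ChipNeMo's actual base model, data, or NeMo-based pipeline.

```python
# Minimal sketch of domain-adaptive fine-tuning, the general technique behind
# ChipNeMo. "gpt2" and the toy corpus are hypothetical stand-ins for NVIDIA's
# internal base model and proprietary design data.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Attach low-rank adapters so only a small set of weights is trained.
model = get_peft_model(model, LoraConfig(r=8, lora_alpha=16, task_type="CAUSAL_LM"))

# Toy "proprietary" domain corpus (illustrative only).
corpus = [
    "The clock tree must be balanced before timing closure.",
    "Route the scan chain to minimize congestion in the floorplan.",
]
batch = tokenizer(corpus, return_tensors="pt", padding=True)
labels = batch["input_ids"].clone()
labels[batch["attention_mask"] == 0] = -100  # don't compute loss on padding

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
model.train()
for _ in range(3):  # a few illustrative steps, not a real training run
    out = model(**batch, labels=labels)
    out.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
print(f"final loss: {out.loss.item():.3f}")
```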

Engineers create a robotic seeing-eye dog to aid the visually impaired

The robot guide dog can respond to tugs on a leash.

Researchers have created a robot guide dog that responds to tugs on a leash, making life easier for the visually impaired. The team of engineers at Binghamton University’s Computer Science Department in New York State has been developing the robotic seeing-eye dog to improve accessibility for those who are visually impaired. Last year, they performed a trick-or-treating exercise with their quadruped robotic dog.

Now, they have demonstrated the robot dog leading a person down a lab hallway, confidently and carefully responding to directional instructions. Engineers were surprised that throughout the visually impaired…
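The novelty of the system is mapping leash forces to motion commands. The control loop below is a hedged illustration of that idea, not the Binghamton team's code: the sensor-reading function and the force thresholds are invented for the sketch.

```python
# Hypothetical sketch of a leash-tug control loop for a quadruped guide robot.
# read_leash_force() and the thresholds are invented for illustration; the
# Binghamton system's actual sensing and gait interface are not public here.
import time
from dataclasses import dataclass

@dataclass
class LeashForce:
    x: float  # newtons, positive = handler pulls right
    y: float  # newtons, positive = handler pulls backward

def read_leash_force() -> LeashForce:
    """Stub for a force sensor at the leash attachment point."""
    return LeashForce(x=0.0, y=0.0)

TURN_THRESHOLD = 5.0  # N of sideways pull before turning
STOP_THRESHOLD = 8.0  # N of backward pull before halting

def command_from_force(f: LeashForce) -> str:
    if f.y > STOP_THRESHOLD:
        return "stop"
    if f.x > TURN_THRESHOLD:
        return "turn_right"
    if f.x < -TURN_THRESHOLD:
        return "turn_left"
    return "walk_forward"

if __name__ == "__main__":
    for _ in range(5):  # would run continuously on the robot
        cmd = command_from_force(read_leash_force())
        print(cmd)      # a real robot would dispatch this to its gait controller
        time.sleep(0.1)
```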


Approaching and avoiding ‘bad’ decisions are linked with different neural communication patterns

Human decision-making has been the focus of countless neuroscience studies, which try to identify the neural circuits and brain regions that support different types of decisions. Some of these research efforts focus on the choices humans make while gambling and taking risks, yet the neural underpinnings of these choices have not yet been fully elucidated.

Researchers at the University of Louisville carried out a study aimed at better understanding the patterns in neural network communication associated with ‘bad’ decisions made while gambling. Their paper, published in Frontiers in Neuroscience, shows that different types of ‘bad’ gambling decisions, namely avoidant and approach decisions, are associated with distinct neural communication patterns.

“Our recent work follows a line of research that examines how humans approach rewarding and punishing situations in the environment,” Brendan Depue and Siraj Lyons, the researchers who carried out the study, told Medical Xpress.

Team of AI bots develops software in 7 minutes instead of 4 weeks

They also recognized that AI itself may exhibit certain biases, and that the settings it was deployed with could dramatically change its output, in extreme cases rendering it unusable. In other words, setting the bots up correctly is a prerequisite to success. At least today.

So, for the time being, I think we’re going to see a rapid rise in human-AI cooperation rather than outright replacement.

However, it’s also difficult to escape the impression that through it we will be raising our successors, and that, in the not-so-distant future, humans will be limited to setting goals for AI to accomplish, while mastering programming languages will be akin to learning Latin.
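The system described above works by passing an artifact through a chain of role-playing agents. The sketch below is a hypothetical illustration of that setup, with a stubbed chat() function standing in for a real LLM call; the role prompts, pipeline order, and temperature setting are assumptions, not the actual configuration, though the explicit settings show exactly where the "set it up correctly" problem lives.

```python
# Hypothetical sketch of a role-based multi-agent pipeline in the spirit of
# the "AI software company" described above. chat() is a stub; a real system
# would call an LLM API, and the temperature/role settings shown are the kind
# of configuration the article says can make or break the output.

def chat(role_prompt: str, message: str, temperature: float = 0.2) -> str:
    """Stub for an LLM call; replace with a real client (assumption)."""
    return f"[{role_prompt} @ T={temperature}] response to: {message}"

ROLES = {
    "CEO":        "You turn a user idea into a one-line product goal.",
    "CTO":        "You turn the product goal into a technical design.",
    "Programmer": "You implement the design as code.",
    "Tester":     "You review the code and report defects.",
}

def run_pipeline(idea: str) -> str:
    """Pass the artifact through each role in order, like an assembly line."""
    artifact = idea
    for role, prompt in ROLES.items():
        artifact = chat(prompt, artifact)
        print(f"{role}: {artifact[:70]}...")
    return artifact

if __name__ == "__main__":
    run_pipeline("a command-line Gomoku game")
```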

Radiologists outperformed AI in identifying lung diseases on chest X-ray

In a study of more than 2,000 chest X-rays, radiologists outperformed AI in accurately identifying the presence and absence of three common lung diseases, according to results published in Radiology, a journal of the Radiological Society of North America (RSNA).

“Chest radiography is a common diagnostic tool, but significant training and experience is required to interpret exams correctly,” said lead researcher Louis L. Plesner, M.D., resident radiologist and Ph.D. fellow in the Department of Radiology at Herlev and Gentofte Hospital in Copenhagen, Denmark.

While commercially available, FDA-approved AI tools exist to assist radiologists, Dr. Plesner said the clinical use of deep-learning-based AI tools for radiological diagnosis is still in its infancy.
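Identifying the "presence and absence" of disease corresponds to the two standard metrics, sensitivity and specificity. A minimal sketch of computing both from binary reads follows; the label arrays are invented for illustration and are not the study's data.

```python
# Sensitivity (true-positive rate) and specificity (true-negative rate), the
# metrics behind "identifying the presence and absence" of disease.
# The label arrays below are invented for illustration, not study data.
from typing import Sequence

def sensitivity_specificity(y_true: Sequence[int], y_pred: Sequence[int]):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical reads of 10 chest X-rays (1 = disease present).
truth    = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
ai_reads = [1, 1, 1, 1, 1, 1, 0, 0, 0, 0]  # catches all disease, 2 false alarms

sens, spec = sensitivity_specificity(truth, ai_reads)
print(f"sensitivity={sens:.2f} specificity={spec:.2f}")  # 1.00, 0.67
```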

Revolutionizing Gene Therapy Delivery

Machine learning is essential to designing the polymers, Murthy emphasizes, because they must be tailored to the specific gene therapy.

“There’s a tight interplay between the payload and in vivo mechanism of action, and the delivery vehicle needed to bring [the therapy] to that location,” he says. “You can’t have one without the other, so they have to be integrated at an early stage.”

The company hopes to use machine learning to explore the polymer design space, giving it a starting point for designing a polymer. Subsequently, as the gene therapy moves from the preclinical to the clinical stage, it can use artificial intelligence to tweak the polymer to make the therapy work better.
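One common pattern for "exploring a design space" with machine learning is to fit a surrogate model on the candidates already tested, then use it to rank a large pool of untested designs. The sketch below illustrates that pattern under stated assumptions: the descriptors, data, and model choice are all invented, as the company's actual pipeline is not described here.

```python
# Hypothetical sketch of ML-guided exploration of a polymer design space:
# fit a surrogate model on measured candidates, then rank untested ones.
# Features, data, and the model choice are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Synthetic "tested" polymers: rows = candidates, cols = descriptors
# (e.g., molecular weight, charge density, hydrophobicity -- assumed).
X_tested = rng.uniform(size=(50, 3))
delivery_efficiency = X_tested @ np.array([0.5, 1.2, -0.3]) + rng.normal(0, 0.05, 50)

surrogate = RandomForestRegressor(n_estimators=200, random_state=0)
surrogate.fit(X_tested, delivery_efficiency)

# Score a large pool of untested designs and surface the most promising.
X_candidates = rng.uniform(size=(10_000, 3))
scores = surrogate.predict(X_candidates)
top = np.argsort(scores)[-5:][::-1]
print("top candidate descriptors:\n", X_candidates[top])
```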

A glimpse of the next generation of AlphaFold

Progress update: Our latest AlphaFold model shows significantly improved accuracy and expands coverage beyond proteins to other biological molecules, including ligands.

Since its release in 2020, AlphaFold has revolutionized how proteins and their interactions are understood. Google DeepMind and Isomorphic Labs have been working together to build the foundations of a more powerful AI model that expands coverage beyond just proteins to the full range of biologically relevant molecules.

Today we’re sharing an update on progress towards the next generation of AlphaFold. Our latest model can now generate predictions for nearly all molecules in the Protein Data Bank (PDB), frequently reaching atomic accuracy.
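"Atomic accuracy" is typically quantified as the RMSD between predicted and experimental coordinates after optimal rigid superposition. Below is a minimal sketch of that calculation using the standard Kabsch algorithm; the coordinate arrays are random placeholders, not AlphaFold output.

```python
# Minimal RMSD-after-superposition (Kabsch) sketch: the usual way "atomic
# accuracy" of a predicted structure is scored against an experimental one.
# The coordinates below are random placeholders, not AlphaFold predictions.
import numpy as np

def kabsch_rmsd(P: np.ndarray, Q: np.ndarray) -> float:
    """RMSD between N x 3 point sets P and Q after optimal rigid alignment."""
    P = P - P.mean(axis=0)          # remove translation
    Q = Q - Q.mean(axis=0)
    H = P.T @ Q                     # 3 x 3 covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])      # guard against reflections
    R = Vt.T @ D @ U.T              # optimal rotation mapping P onto Q
    return float(np.sqrt(np.mean(np.sum((P @ R.T - Q) ** 2, axis=1))))

rng = np.random.default_rng(1)
pred = rng.normal(size=(100, 3))    # placeholder "predicted" atom positions
theta = 0.3                         # rotate + jitter to fake an "experiment"
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0, 0, 1]])
exp = pred @ Rz.T + rng.normal(0, 0.01, size=(100, 3))
print(f"RMSD: {kabsch_rmsd(pred, exp):.3f}")  # ~0.01-0.02 after alignment
```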

Google DeepMind shows the next generation of AlphaFold

In a new study, DeepMind and colleagues at Isomorphic Labs show early results from a new version of AlphaFold that brings fully automated structure prediction of biological molecules closer to reality.

The Google DeepMind AlphaFold and Isomorphic Labs team today unveiled the latest AlphaFold model. According to the companies, the updated model can now predict the structure of almost any molecule in the Protein Data Bank (PDB), often with atomic accuracy. This development, they say, is an important step towards a better understanding of the complex biological mechanisms within cells.

Since its launch in 2020, AlphaFold has influenced protein structure prediction worldwide. The latest version of the model goes beyond proteins to include a wide range of biologically relevant molecules such as ligands, nucleic acids and post-translational modifications. These structures are critical to understanding biological mechanisms in cells and have been difficult to predict with high accuracy, according to DeepMind.
Since its launch in 2020, AlphaFold has influenced protein structure prediction worldwide. The latest version of the model goes beyond proteins to include a wide range of biologically relevant molecules such as ligands, nucleic acids and post-translational modifications. These structures are critical to understanding biological mechanisms in cells and have been difficult to predict with high accuracy, according to Deepmind.