
These are strange times we’re living in, when a blockchain billionaire, not the usual Big Tech suspects, is the one supplying the compute capacity needed to develop generative AI.

It was revealed yesterday that Jed McCaleb, the co-founder of blockchain startups Stellar, Ripple and Mt. Gox and aerospace company Vast, launched a 501(c)(3) nonprofit that purchased 24,000 Nvidia H100 GPUs to build data centers that’ll lease capacity to AI projects.

Already, the cluster of GPUs — worth an estimated half a billion dollars and among the largest in the world — is being used by startups Imbue and Character.ai for AI model experimentation, claims Eric Park, CEO of Voltage Park, the newly formed organization charged with running and managing the data centers.

Siemens and Microsoft team up to introduce the Siemens Industrial Copilot and enhance human-machine collaboration.

Microsoft and Siemens have unveiled a groundbreaking partnership aimed at leveraging generative AI to revolutionize industries on a global scale, Microsoft announced.

This collaboration introduces the Siemens Industrial Copilot, an AI-powered assistant designed to significantly enhance human-machine cooperation, particularly in the manufacturing sector. The joint initiative represents a substantial leap forward in AI technology, intended to accelerate innovation and streamline operations across diverse industries.

NVIDIA looking to ramp up chip production in the face of supply shortage.

NVIDIA, the most profitable chip-making company in the world, has unveiled a custom large language model, the technology underlying artificial intelligence tools like ChatGPT, that it has developed for internal use.

Trained on NVIDIA’s proprietary data, “ChipNeMo” will generate and optimize software and assist human designers in building semiconductors. Developed by NVIDIA researchers, ChipNeMo is expected to be highly beneficial to the company’s work in graphics processing, artificial intelligence, and other technologies.
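NVIDIA hasn’t detailed ChipNeMo’s training pipeline here, but domain-adapting a language model on proprietary text follows a recognizable recipe: continue training a pretrained causal LM on the in-house corpus. Below is a minimal sketch using the Hugging Face libraries; the base model, the file internal_docs.txt, and the hyperparameters are illustrative assumptions, not NVIDIA’s setup.

    # Minimal sketch of domain-adaptive fine-tuning of a causal language model
    # on proprietary text. Model choice, data file, and hyperparameters are
    # illustrative assumptions, not NVIDIA's actual ChipNeMo configuration.
    from datasets import load_dataset
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              DataCollatorForLanguageModeling, Trainer,
                              TrainingArguments)

    base = "gpt2"  # small stand-in for a large foundation model
    tokenizer = AutoTokenizer.from_pretrained(base)
    tokenizer.pad_token = tokenizer.eos_token
    model = AutoModelForCausalLM.from_pretrained(base)

    # Hypothetical corpus of internal documents, one example per line.
    dataset = load_dataset("text", data_files={"train": "internal_docs.txt"})
    tokenized = dataset["train"].map(
        lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
        batched=True, remove_columns=["text"])

    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="chip-lm", num_train_epochs=1,
                               per_device_train_batch_size=4),
        train_dataset=tokenized,
        # The collator shifts inputs to build next-token-prediction labels.
        data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
    )
    trainer.train()

A production system of ChipNeMo’s kind would start from a far larger foundation model and typically layer retrieval and instruction tuning on top, but the domain-adaptation step above is the common core.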

The robot guide dog can respond to tugs on a leash.

Researchers have created a robot guide dog that can make life easier for the visually impaired by responding to tugs on a leash. The team of engineers at Binghamton University’s Computer Science Department in New York State has been developing the robotic seeing-eye dog to improve accessibility for those who are visually impaired. Last year, they performed a trick-or-treating exercise with their quadruped robotic dog.

Now, they have demonstrated a robot dog leading a person down a lab hallway, confidently and carefully reacting to directive instructions. Engineers were surprised that throughout the visually impaired…
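The Binghamton controller isn’t reproduced in this post, but the core idea, turning leash tension into a steering command for the quadruped, can be sketched in a few lines. Everything below (the sensor fields, thresholds, and the speed/turn command pair) is a hypothetical interface, not the researchers’ code.

    # Rough sketch of leash-tug steering for a quadruped guide robot.
    # Sensor fields, thresholds, and the command interface are hypothetical;
    # the actual Binghamton controller may differ substantially.
    from dataclasses import dataclass

    @dataclass
    class LeashReading:
        tension: float  # newtons measured at the leash anchor
        angle: float    # radians, leash direction relative to robot heading

    def steering_command(reading: LeashReading,
                         tension_threshold: float = 5.0,
                         max_turn_rate: float = 0.5):
        """Map a leash tug to a (forward_speed, turn_rate) pair."""
        if reading.tension < tension_threshold:
            return 0.3, 0.0  # no deliberate tug: keep walking straight
        # A firm tug toward one side turns the robot proportionally in that
        # direction, clamped so the gait controller never gets an extreme rate.
        turn = max(-max_turn_rate, min(max_turn_rate, reading.angle))
        return 0.2, turn  # slow down slightly while turning

    # Example: a firm tug of 8 N at 0.4 rad to the robot's left.
    print(steering_command(LeashReading(tension=8.0, angle=0.4)))

The appeal of this scheme is that the handler stays in control: the robot walks a default path and changes course only when the person deliberately tugs the leash.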



Human decision-making has been the focus of countless neuroscience studies, which try to identify the neural circuits and brain regions that support different types of decisions. Some of these research efforts focus on the choices humans make while gambling and taking risks, yet the neural underpinnings of these choices have not yet been fully elucidated.

Researchers at the University of Louisville carried out a study aimed at better understanding the patterns in neural network communication associated with ‘bad’ decisions made while gambling. Their paper, published in Frontiers in Neuroscience, shows that different types of ‘bad’ decisions made while gambling, namely avoidant and approach decisions, are associated with distinct neural communication patterns.

“Our recent work follows a line of research that examines how humans approach rewarding and punishing situations in the environment,” Brendan Depue and Siraj Lyons, the researchers who carried out the study, told Medical Xpress.
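The paper’s specific analyses aren’t detailed in this summary, but ‘neural communication patterns’ in work like this are commonly quantified as functional connectivity: correlations between activity time series recorded from different brain regions. A toy illustration with synthetic signals; nothing below comes from the study itself.

    # Toy illustration of functional connectivity: correlations between
    # regional activity time series. Signals are synthetic, not study data.
    import numpy as np

    rng = np.random.default_rng(1)
    n_regions, n_timepoints = 5, 200

    # Regions 0 and 1 share a common driver, mimicking stronger
    # "communication" between them than between the other regions.
    driver = rng.normal(size=n_timepoints)
    signals = rng.normal(size=(n_regions, n_timepoints))
    signals[0] += driver
    signals[1] += driver

    connectivity = np.corrcoef(signals)  # n_regions x n_regions matrix
    print(np.round(connectivity, 2))     # entry [0, 1] stands out

Comparing such matrices between conditions (for example, avoidant versus approach decisions) is one standard way distinct communication patterns are identified.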

They also recognized that AI itself may exhibit certain biases, and that the settings it is deployed with can dramatically change its output, in extreme cases rendering it unusable. In other words, setting the bots up correctly is a prerequisite for success. At least today.

So, for the time being, I think we’re going to see a rapid rise in human-AI cooperation rather than outright replacement.

However, it’s also difficult to escape the impression that through it we will be raising our successors and that, in the not-so-distant future, humans will be limited to setting goals for AI to accomplish, while mastering programming languages will be akin to learning Latin.

In a study of more than 2,000 chest X-rays published in Radiology, a journal of the Radiological Society of North America (RSNA), radiologists outperformed AI in accurately identifying the presence and absence of three common lung diseases.

“Chest radiography is a common diagnostic tool, but significant training and experience is required to interpret exams correctly,” said lead researcher Louis L. Plesner, M.D., resident radiologist and Ph.D. fellow in the Department of Radiology at Herlev and Gentofte Hospital in Copenhagen, Denmark.

While commercially available, FDA-approved AI tools exist to assist radiologists, Dr. Plesner said the clinical use of deep-learning-based AI tools for radiological diagnosis is still in its infancy.
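For readers unfamiliar with how such head-to-head comparisons are scored, studies like this typically report sensitivity (how often disease that is present gets flagged) and specificity (how often its absence is correctly called) for each reader and tool. A minimal sketch with fabricated labels; none of these numbers come from the Radiology study.

    # Sensitivity/specificity computation of the kind used to compare readers
    # and AI tools. All labels below are fabricated for illustration only.

    def sensitivity_specificity(truth, predicted):
        """truth/predicted: lists of 0 (disease absent) / 1 (disease present)."""
        tp = sum(t == 1 and p == 1 for t, p in zip(truth, predicted))
        tn = sum(t == 0 and p == 0 for t, p in zip(truth, predicted))
        fn = sum(t == 1 and p == 0 for t, p in zip(truth, predicted))
        fp = sum(t == 0 and p == 1 for t, p in zip(truth, predicted))
        return tp / (tp + fn), tn / (tn + fp)

    truth       = [1, 1, 1, 0, 0, 0, 0, 1]
    radiologist = [1, 1, 0, 0, 0, 0, 0, 1]  # one missed case, no false alarms
    ai_tool     = [1, 1, 1, 0, 1, 1, 0, 1]  # catches every case, adds false alarms

    for name, preds in [("radiologist", radiologist), ("AI tool", ai_tool)]:
        sens, spec = sensitivity_specificity(truth, preds)
        print(f"{name}: sensitivity={sens:.2f}, specificity={spec:.2f}")

Separating the two metrics matters because a tool can look strong on one while failing on the other, which is exactly the trade-off such comparisons are designed to expose.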

Machine learning is essential to designing the polymers, Murthy emphasizes, because they must be tailored to the specific gene therapy.

“There’s a tight interplay between the payload and in vivo mechanism of action, and the delivery vehicle needed to bring [the therapy] to that location,” he says. “You can’t have one without the other, so they have to be integrated at an early stage.”

The company hopes to use machine learning to explore the polymer design space, giving the team a starting point for designing a polymer. Subsequently, as the gene therapy moves from the preclinical to the clinical stage, they can use artificial intelligence to tweak the polymer to make the therapy work better.
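Neither the models nor the descriptors are specified in the piece, but ‘exploring the design space’ with machine learning usually means fitting a surrogate model on candidates that have already been measured and using it to rank untested ones. A minimal sketch with synthetic data; the three descriptors and the target property are invented placeholders, not the company’s pipeline.

    # Sketch of surrogate-model screening over a polymer design space.
    # Descriptors, the target property, and all values are synthetic
    # placeholders; this is not the company's actual pipeline.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(0)

    # Pretend each polymer is described by three numeric features, e.g.
    # monomer ratio, chain length, charge density (illustrative choices).
    X_measured = rng.uniform(0, 1, size=(40, 3))
    # Synthetic "delivery efficiency" plus noise, standing in for lab data.
    y_measured = X_measured @ np.array([0.5, 1.2, -0.8]) + rng.normal(0, 0.05, 40)

    surrogate = RandomForestRegressor(n_estimators=200, random_state=0)
    surrogate.fit(X_measured, y_measured)

    # Score a large pool of untested candidates and pick the most promising
    # few as starting points for synthesis.
    X_candidates = rng.uniform(0, 1, size=(10_000, 3))
    scores = surrogate.predict(X_candidates)
    top = np.argsort(scores)[-5:][::-1]
    print("Top candidate feature vectors:\n", X_candidates[top])

As the text notes, the same loop can continue into the clinic: new measurements feed back into the model, and the predicted optimum shifts with them.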

Progress update: Our latest AlphaFold model shows significantly improved accuracy and expands coverage beyond proteins to other biological molecules, including ligands.

Since its release in 2020, AlphaFold has revolutionized how proteins and their interactions are understood. Google DeepMind and Isomorphic Labs have been working together to build the foundations of a more powerful AI model that expands coverage beyond just proteins to the full range of biologically relevant molecules.

Today we’re sharing an update on progress towards the next generation of AlphaFold. Our latest model can now generate predictions for nearly all molecules in the Protein Data Bank (PDB), frequently reaching atomic accuracy.