
High-Risk, High-Payoff Bio-Research For National Security Challenges — Dr. David A. Markowitz, Ph.D., IARPA


Dr. David A. Markowitz (https://www.markowitz.bio/) is a Program Manager at the Intelligence Advanced Research Projects Activity (IARPA, https://www.iarpa.gov/), an organization that invests in high-risk, high-payoff research programs to tackle some of the most difficult challenges facing the agencies and disciplines of the U.S. Intelligence Community (IC).

IARPA’s mission is to push the boundaries of science to develop solutions that empower the U.S. IC to do its work better and more efficiently for national security. IARPA does not have an operational mission and does not deploy technologies directly to the field; instead, it facilitates the transition of research results to IC customers for operational application.

GitHub Copilot bills itself as an “AI pair programmer” for software developers, automatically suggesting code in real time. According to GitHub, Copilot is “powered by Codex, a generative pretrained AI model created by OpenAI” and has been trained on “natural language text and source code from publicly available sources, including code in public repositories on GitHub.”

However, a class-action lawsuit filed against GitHub Copilot, its parent company Microsoft, and OpenAI claims open-source software piracy and violations of open-source licenses. Specifically, the lawsuit states that code generated by Copilot does not include any attribution to the original author of the code, copyright notices, or a copy of the license, which most open-source licenses require.
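To make the attribution requirement concrete, here is a minimal, hypothetical sketch of what a permissive license such as MIT expects when a snippet is reused; the author name, project name, and function are invented for illustration:

```python
# Under the MIT license, reused code must keep its copyright notice and
# permission notice. A compliant reuse of a snippet from a hypothetical
# "tinyutils" repository would carry a header along these lines:
#
#   Copyright (c) 2019 Jane Example
#   Permission is hereby granted, free of charge, ... (full MIT text follows)
#
# The lawsuit's claim is that Copilot emits only the bare function body,
# with no author, notice, or license attached:

def clamp(value, lo, hi):
    """Clamp value to the inclusive range [lo, hi]."""
    return max(lo, min(hi, value))
```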

“The spirit of open source is not just a space where people want to keep it open,” says Sal Kimmich, an open-source developer advocate at Sonatype, machine-learning engineer, and open-source contributor and maintainer. “We have developed processes in order to keep open source secure, and that requires traceability, observability, and verification. Copilot is obscuring the original provenance of those [code] snippets.”

In May 2020, AI research laboratory OpenAI unveiled the largest neural network created to that point, GPT-3, in a paper titled ‘Language Models are Few-Shot Learners’. The researchers released a beta API for users to toy with the system, giving birth to a new wave of hype around generative AI.

People were soon generating eccentric results. The new language model could transform the description of a web page into the corresponding code. It could emulate human narrative, writing customised poetry or turning philosopher and holding forth on the true meaning of life. There seemed to be nothing the model couldn’t do. But there was also a lot it couldn’t undo.
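For readers who never saw it, below is a rough sketch of how the 2020-era beta API was typically called from Python; the prompt, placeholder key, and sampling parameters are illustrative, and the modern OpenAI SDK has since changed this interface:

```python
import openai  # the 2020-era "openai" Python package

openai.api_key = "sk-..."  # placeholder; supply your own beta API key

# Ask GPT-3 to turn a plain-English page description into HTML,
# the kind of demo that circulated widely after the beta launch.
response = openai.Completion.create(
    engine="davinci",   # the original GPT-3 base model
    prompt="Description: a landing page with a signup form.\nHTML:",
    max_tokens=150,
    temperature=0.7,
)
print(response["choices"][0]["text"])
```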

For some, GPT-3 isn’t that big of a deal, and the name remains a bit ambiguous: the model could turn out to be a fraction of the size of the futuristic, bigger models yet to come.

Research in the field continues to focus on seizure prevention, prediction and treatment. Dr. Van Gompel predicts that the use of artificial intelligence and machine learning will help neurologists and neurosurgeons continue to move toward better treatment options and outcomes.

“I think we will continue to move more and more toward removing less and less brain,” says Dr. Van Gompel. “And in fact, I do believe in decades, we’ll understand stimulation enough that maybe we’ll never cut out brain again. Maybe we’ll be able to treat that misbehaving brain with electricity or something else. Maybe sometimes it’s drug delivery, directly into the area, that will rehabilitate that area to make it functional cortex again. That’s at least our hope.”

On the Mayo Clinic Q&A podcast, Dr. Van Gompel discusses the latest treatment options for epilepsy and what’s on the horizon in research.

Continuous-time neural networks are a subset of machine learning systems capable of representation learning for spatiotemporal decision-making tasks. These models are frequently described by continuous differential equations (DEs). When run on computers, however, numerical DE solvers limit their expressive potential. This restriction has severely hampered the scaling and understanding of many natural physical processes, such as the dynamics of neural systems.

Inspired by the brains of microscopic creatures, MIT researchers have developed “liquid” neural networks, a fluid, robust ML model that can learn and adapt to changing situations. These methods can be used in safety-critical tasks such as driving and flying.

However, as the number of neurons and synapses in the model grows, the underlying mathematics becomes more difficult to solve, and the processing cost of the model rises.
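As a rough illustration of both the approach and the cost, here is a minimal sketch of a continuous-time recurrent cell whose hidden state follows an ODE integrated by a fixed-step Euler solver; the weights, sizes, and fixed time constant are arbitrary assumptions, not the MIT group’s exact “liquid” formulation:

```python
import numpy as np

# Continuous-time RNN cell: the hidden state h(t) follows
# dh/dt = -h/tau + tanh(W h + U x + b), integrated with Euler steps.

rng = np.random.default_rng(0)
n_hidden, n_in = 16, 4
W = rng.normal(scale=0.1, size=(n_hidden, n_hidden))
U = rng.normal(scale=0.1, size=(n_hidden, n_in))
b = np.zeros(n_hidden)
tau = 1.0  # fixed time constant; in liquid networks it varies with the input

def step(h, x, dt=0.05, n_steps=20):
    # Each cell output requires n_steps solver iterations, so compute cost
    # scales with both the state size and the solver resolution -- the
    # growth in cost the paragraph above describes.
    for _ in range(n_steps):
        dh = -h / tau + np.tanh(W @ h + U @ x + b)
        h = h + dt * dh
    return h

h = np.zeros(n_hidden)
for t in range(10):  # feed a short toy input sequence
    x = np.sin(0.3 * t) * np.ones(n_in)
    h = step(h, x)
print(h[:4])
```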

Chemists have created nanorobots propelled by magnets that remove pollutants from water. The invention could be scaled up to provide a sustainable and affordable way of cleaning up contaminated water in treatment plants.

Martin Pumera at the University of Chemistry and Technology, Prague, in the Czech Republic and his colleagues developed the nanorobots using a temperature-sensitive polymer material and iron oxide. The polymer acts like tiny hands that can pick up and dispose of pollutants in the water, while the iron oxide makes the nanorobots magnetic. The researchers also added oxygen and hydrogen atoms to the iron oxide, which can attach to target pollutants.

The robots are about 200 nanometres wide and are powered by magnetic fields, which allow the team to control their movements.
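As a purely illustrative toy model of the mechanism described above, the sketch below simulates magnetically steered particles that bind nearby pollutants only when the polymer “hands” are active; the transition temperature, the choice of binding below rather than above it, the length scales, and the drift strength are all invented assumptions, not values from the Prague group’s experiments:

```python
import numpy as np

# Toy 2-D model: magnetically steered nanorobots sweep through water and
# capture pollutants within a binding radius. All numbers are arbitrary.

rng = np.random.default_rng(1)
bots = rng.uniform(0, 100, size=(50, 2))         # robot positions (a.u.)
pollutants = rng.uniform(0, 100, size=(200, 2))  # pollutant positions (a.u.)
captured = np.zeros(len(pollutants), dtype=bool)

T_CRITICAL = 25.0                 # assumed polymer transition temperature
field_dir = np.array([1.0, 0.0])  # external magnetic field pulls robots +x

def tick(T, drift=0.5, capture_radius=2.0):
    global bots
    # Magnetic drift plus Brownian jitter.
    bots += drift * field_dir + rng.normal(scale=0.2, size=bots.shape)
    if T < T_CRITICAL:  # polymer "hands" active: bind nearby pollutants
        for i, p in enumerate(pollutants):
            if not captured[i] and np.min(np.linalg.norm(bots - p, axis=1)) < capture_radius:
                captured[i] = True

for _ in range(100):
    tick(T=20.0)
print(f"captured {captured.sum()} of {len(pollutants)} pollutants")
```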


A supercomputer, providing massive amounts of computing power to tackle complex challenges, is typically out of reach for the average enterprise data scientist. But what if you could use cloud resources instead? That’s the approach Microsoft Azure and Nvidia are taking with this week’s announcement, timed to coincide with the SC22 supercomputing conference.

Nvidia and Microsoft announced that they are building a “massive cloud AI computer.” The supercomputer in question, however, is not an individually named system like the Frontier system at Oak Ridge National Laboratory or the Perlmutter system, which is the world’s fastest artificial intelligence (AI) supercomputer. Rather, the new AI supercomputer is a set of capabilities and services within Azure, powered by Nvidia technologies, for high-performance computing (HPC) uses.