
Large language models (LLMs) have advanced significantly in recent years. Impressive LLMs have been revealed one after another, beginning with OpenAI’s GPT-3, which generates exceptionally fluent text, and continuing with its open-source counterpart BLOOM. Language-related problems that were previously unsolvable have become merely a challenge for these systems.

All of this progress is made possible by the vast amount of data available on the Internet and the accessibility of powerful GPUs. As appealing as they may sound, training an LLM is an incredibly expensive procedure in terms of both data and hardware requirements. We’re talking about AI systems with billions of parameters, so feeding these models enough data isn’t easy. Once you do, however, they deliver stunning performance.

Have you ever wondered where the development of “computing” gadgets began? Why did individuals devote so much time and energy to designing and constructing the first computers? We can presume it was not for the purpose of amusing people with video games or YouTube videos.

AI can also be of benefit in the diagnosis and treatment of patients. Tools have been created that help diagnose a patient as well as a human would.

AI isn’t a new technology—it’s been researched and developed since the 1950s and is currently present in many of our daily routines. Most of these applications are so common that we don’t even notice them.

Our lives often depend on the healthcare industry. So, having a technology that allows you to speed up patient registration processes and help diagnose more quickly and effectively is essential. Every health center should consider the use of AI for the benefit of its processes so it can adapt to the modern world and its accelerated pace.

While some might view the emergence of humanoids with apprehension, a future filled with robots is likely to be a positive development for most. But as with anything, policy and society must be ready if, and when, they arrive.


There is growing corporate interest in humanoid robots to replace human labor. Tesla’s recently unveiled Bumble C robot may mark a turning point in an industry that has thus far focused on specialized machines produced in limited quantities. Should Tesla succeed, what does a mass-produced humanoid robot mean for the future of humanity?

Revolutionary improvements to automation and production may one day create machines able to produce almost anything, quickly and cheaply, and far faster and with more variety than modern 3D printers. Such devices are sometimes known as Santa Claus Machines, Cornucopia Devices, or Clanking Self-Replicators, and today we will examine how likely such technology is, how far off in the future it might be, and what impact it would have on society.


High-Risk, High-Payoff Bio-Research For National Security Challenges — Dr. David A. Markowitz, Ph.D., IARPA


Dr. David A. Markowitz (https://www.markowitz.bio/) is a Program Manager at the Intelligence Advanced Research Projects Activity (IARPA — https://www.iarpa.gov/), an organization that invests in high-risk, high-payoff research programs to tackle some of the most difficult challenges facing the agencies and disciplines of the U.S. Intelligence Community (IC).

IARPA’s mission is to push the boundaries of science to develop solutions that empower the U.S. IC to do its work better and more efficiently for national security. IARPA does not have an operational mission and does not deploy technologies directly to the field; instead, it facilitates the transition of research results to IC customers for operational application.

GitHub Copilot bills itself as an “AI pair programmer” for software developers, automatically suggesting code in real time. According to GitHub, Copilot is “powered by Codex, a generative pretrained AI model created by OpenAI” and has been trained on “natural language text and source code from publicly available sources, including code in public repositories on GitHub.”

However, a class-action lawsuit filed against GitHub Copilot, its parent company Microsoft, and OpenAI claims open-source software piracy and violations of open-source licenses. Specifically, the lawsuit states that code generated by Copilot does not include any attribution to the original author of the code, copyright notices, or a copy of the license, which most open-source licenses require.
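To make the attribution complaint concrete, here is a minimal sketch of the kind of compliance check the lawsuit implies is missing. Permissive licenses such as MIT require that the copyright and permission notice accompany copies of the code; the function name and notice text below are hypothetical illustrations, not part of any real tool.

```python
# Hypothetical sketch: permissive licenses (e.g., MIT) require the copyright
# notice to travel with copies or substantial portions of the software.
REQUIRED_NOTICE = "Copyright (c)"

def retains_notice(source_text: str) -> bool:
    """Return True if the source text still carries a copyright notice."""
    return REQUIRED_NOTICE in source_text

snippet_with_notice = "# Copyright (c) 2020 Jane Doe\ndef add(a, b):\n    return a + b\n"
snippet_stripped = "def add(a, b):\n    return a + b\n"

print(retains_notice(snippet_with_notice))  # True
print(retains_notice(snippet_stripped))     # False
```

A generated snippet that reproduces licensed code without such a notice is exactly the scenario the plaintiffs describe.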

“The spirit of open source is not just a space where people want to keep it open,” says Sal Kimmich, an open-source developer advocate at Sonatype, machine-learning engineer, and open-source contributor and maintainer. “We have developed processes in order to keep open source secure, and that requires traceability, observability, and verification. Copilot is obscuring the original provenance of those [code] snippets.”

In May 2020, AI research laboratory OpenAI unveiled the largest neural network ever created at the time—GPT-3—in a paper titled ‘Language Models are Few-Shot Learners’. The researchers released a beta API for users to toy with the system, giving birth to the new hype around generative AI.

People generated eccentric results. The new language model could transform the description of a web page into the corresponding code. It emulated human narrative, writing customised poetry or turning philosopher and musing on the true meaning of life. There seemed to be nothing the model couldn’t do. But there’s also a lot it can’t undo.
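The “few-shot” idea behind these tricks is simple: instead of fine-tuning the model, a handful of worked examples are placed directly in the prompt, and the model completes the pattern. The sketch below illustrates how such a description-to-code prompt might be assembled; the function name and example pairs are invented for illustration, not taken from OpenAI’s API.

```python
# Hypothetical sketch of few-shot prompting: demonstration pairs are embedded
# in the prompt itself, and the model is asked to continue the pattern.
def build_few_shot_prompt(examples, query):
    lines = []
    for description, code in examples:
        lines.append(f"Description: {description}")
        lines.append(f"HTML: {code}")
    lines.append(f"Description: {query}")
    lines.append("HTML:")  # the model would complete this line
    return "\n".join(lines)

examples = [
    ("a red button labeled Stop", '<button style="color:red">Stop</button>'),
    ("a level-one heading saying Hello", "<h1>Hello</h1>"),
]
prompt = build_few_shot_prompt(examples, "an image of a cat")
print(prompt)
```

Sending a prompt like this to a completion endpoint is what turned plain-English page descriptions into working markup.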

For some, GPT-3 isn’t that big a deal, and the name itself remains a bit ambiguous. The model could be a mere fraction of the futuristic, far bigger models yet to come.

Research in the field continues to focus on seizure prevention, prediction and treatment. Dr. Van Gompel predicts that the use of artificial intelligence and machine learning will help neurologists and neurosurgeons continue to move toward better treatment options and outcomes.

“I think we will continue to move more and more toward removing less and less brain,” says Dr. Van Gompel. “And in fact, I do believe in decades, we’ll understand stimulation enough that maybe we’ll never cut out brain again. Maybe we’ll be able to treat that misbehaving brain with electricity or something else. Maybe sometimes it’s drug delivery, directly into the area, that will rehabilitate that area to make it functional cortex again. That’s at least our hope.”

On the Mayo Clinic Q&A podcast, Dr. Van Gompel discusses the latest treatment options for epilepsy and what’s on the horizon in research.

Continuous-time neural networks are a subset of machine learning systems capable of representation learning for spatiotemporal decision-making tasks. These models are frequently depicted by continuous differential equations (DEs). When run on computers, however, numerical DE solvers limit their expressive potential. This restriction has severely hampered the scaling and understanding of many natural physical processes, such as the dynamics of neural systems.
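To see what “depicted by differential equations” means in practice, here is a minimal sketch of a generic continuous-time recurrent network whose hidden state follows dh/dt = −h/τ + tanh(Wh + Ux), integrated with a fixed-step explicit Euler solver. This is a textbook CT-RNN illustration under assumed dynamics, not MIT’s liquid-network implementation; all names and parameter values are invented.

```python
import numpy as np

# Illustrative CT-RNN (not MIT's liquid network): the hidden state obeys
#   dh/dt = -h / tau + tanh(W @ h + U @ x)
# and is advanced with an explicit Euler step of size dt.
def ctrnn_step(h, x, W, U, tau, dt):
    dh = -h / tau + np.tanh(W @ h + U @ x)
    return h + dt * dh

rng = np.random.default_rng(0)
n_hidden, n_input = 4, 2
W = rng.normal(scale=0.5, size=(n_hidden, n_hidden))  # recurrent weights
U = rng.normal(scale=0.5, size=(n_hidden, n_input))   # input weights
h = np.zeros(n_hidden)
x = np.ones(n_input)

# Integrate for one time unit using 100 Euler steps.
for _ in range(100):
    h = ctrnn_step(h, x, W, U, tau=1.0, dt=0.01)
print(h.shape)  # (4,)
```

The fixed-step solver is exactly the bottleneck the passage describes: its accuracy and cost constrain how expressive the continuous dynamics can be in practice.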

Inspired by the brains of microscopic creatures, MIT researchers have developed “liquid” neural networks, a fluid, robust ML model that can learn and adapt to changing situations. These methods can be used in safety-critical tasks such as driving and flying.

However, as the number of neurons and synapses in the model grows, the underlying mathematics becomes more difficult to solve, and the processing cost of the model rises.