
Generative AI’s future in enterprise could be smaller, more focused language models

The amazing abilities of ChatGPT rest on some of the largest language models ever built.

But maybe the future of these models is more focused than the boil-the-ocean approach we’ve seen from OpenAI and others, who want to be able to answer every question under the sun.


The amazing abilities of OpenAI’s ChatGPT wouldn’t be possible without large language models. These models are trained on billions, sometimes trillions, of examples of text. The idea behind ChatGPT is to understand language so well that it can anticipate, in a split second, what word plausibly comes next. That takes a ton of training, compute resources and developer savvy to make happen.
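ChatGPT itself uses a vastly more sophisticated neural network, but the core idea of next-word prediction can be sketched with a toy bigram model: count which words follow which in the training text, then predict the most common continuation. The corpus and `predict_next` helper below are purely illustrative, not OpenAI’s method.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for the billions of examples a real model sees.
corpus = "the cat sat on the mat the cat ate the fish".split()

# For each word, count which words follow it (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    # Return the most frequently observed continuation.
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" most often in this corpus
```

Scale the corpus up by many orders of magnitude, replace the counting with a trained neural network, and you have the intuition behind the training costs described above.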

In the AI-driven future, each company’s own data could be its most valuable asset. An insurance company has a completely different lexicon than a hospital, an automotive company or a law firm, and when you combine that lexicon with your customer data and the full body of content across the organization, you have the makings of a language model. It may not be large in the truly large language model sense, but it would be exactly the model you need: a model created for one, not for the masses.

How AI Can Look Into Your Eyes And Diagnose A Devastating Brain Disease

“The eyes are the windows to the soul.” It’s an ancient saying, and it illustrates what we know intuitively to be true: you can understand so much about a person by looking deep into their eyes. But how? And can we use this fact to understand disease?

One company is making big strides in this direction. Israel’s NeuraLight, which just won the Health and Medtech Innovation award at SXSW, was founded to bring science and AI to understanding the brain through the eyes.

A focal disease for NeuraLight is ALS, which is currently diagnosed through a subjective survey of about a dozen questions, followed by tests such as an EEG and MRI.


NeuraLight’s test works differently: the patient’s eyes follow dots on a screen, and the AI system measures 106 parameters, such as dilation and blink rate, in less than 10 minutes. In other words, this will be an AI-enabled digital biomarker.
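To make the idea of an ocular parameter concrete, here is a hypothetical sketch of one of the simplest possible measurements: counting blinks from an eye-openness signal. The signal values, threshold, and `count_blinks` helper are all illustrative assumptions, not NeuraLight’s actual pipeline.

```python
def count_blinks(openness, threshold=0.2):
    """Count transitions from open (above threshold) to closed (below)."""
    blinks = 0
    was_open = True
    for value in openness:
        if was_open and value < threshold:
            blinks += 1       # eye just closed: one blink begins
            was_open = False
        elif value >= threshold:
            was_open = True   # eye reopened; ready for the next blink
    return blinks

signal = [1.0, 0.9, 0.1, 0.05, 0.8, 1.0, 0.1, 0.9]  # two dips = two blinks
print(count_blinks(signal))  # 2
```

A real system would extract such signals from video of the eye and combine dozens of parameters like this into a diagnostic profile.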

We Should Consider ChatGPT Signal For Manhattan Project 2.0

In 1942, the Manhattan Project was established by the United States as a top-secret research and development (R&D) program to produce the first nuclear weapons. It involved thousands of scientists, engineers, and other personnel working on everything from nuclear reactors and uranium enrichment to the design and construction of the bomb itself. The goal: to develop an atomic bomb before Germany did.

The Manhattan Project set a precedent for large-scale government-funded R&D programs. It also marked the beginning of the nuclear age and ushered in a new era of technological and military competition between the world’s superpowers.

Today we’re entering the age of Artificial Intelligence (AI)—an era arguably just as important, if not more important, than the age of nuclear war. While the last few months might have been the first you’ve heard about it, many in the field would argue we’ve been headed in this direction for at least the last decade, if not longer. For those new to the topic: welcome to the future, you’re late.

New AI tool can generate faster, accurate and sharper cosmic images

The team was able to produce blur-free, high-resolution images of the universe by incorporating an AI algorithm into their image-processing pipeline.

Before reaching ground-based telescopes, cosmic light interacts with the Earth’s atmosphere. That is why most advanced ground-based telescopes are located at high altitudes, where the atmosphere is thinner. Even so, the constantly changing atmosphere often obscures the view of the universe.

The atmosphere blocks certain wavelengths and distorts light arriving from great distances. This interference can hinder the accurate reconstruction of space images, which is critical for unraveling the mysteries of the universe, and the resulting blurry images may obscure the shapes of astronomical objects and cause measurement errors.
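Atmospheric blur is commonly modeled as the true image convolved with a “point spread function,” and deblurring algorithms like the one described here try to invert that operation. The minimal one-dimensional sketch below (not the researchers’ actual algorithm) shows how a single bright pixel, a stand-in for a star, gets smeared across its neighbors by a blur kernel.

```python
def convolve1d(signal, kernel):
    """Blur a 1-D 'image' slice with a normalized kernel (same-length output)."""
    half = len(kernel) // 2
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, k in enumerate(kernel):
            idx = i + j - half
            if 0 <= idx < len(signal):  # ignore pixels outside the frame
                acc += signal[idx] * k
        out.append(acc)
    return out

point_source = [0, 0, 0, 10, 0, 0, 0]  # a "star": one bright pixel
psf = [0.25, 0.5, 0.25]                # a simple atmospheric-seeing kernel
print(convolve1d(point_source, psf))   # the star's light spreads to neighbors
```

Recovering the sharp `point_source` from the blurred output is the deconvolution problem that AI-based deblurring tools attack, in two dimensions and with noise.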

MIT’s Codon compiler allows Python to ‘speak’ natively with computers

Researchers at MIT created Codon, a compiler that dramatically increases the speed of Python code by allowing users to run it as efficiently as C or C++.

Python is one of the most popular computer languages, but it has a severe Achilles’ heel: compared to lower-level languages like C or C++, it can be slow. To rectify this, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) developed Codon, a Python-based compiler that allows users to write Python code that runs as efficiently as a program in C or C++.



Why is dynamic typing so costly? Saman Amarasinghe, an MIT professor and lead investigator at CSAIL who is also a co-author of the Codon paper, explains that to identify a value’s type at runtime, “if you have a dynamic language [like Python], every time you have some data, you need to keep a lot of additional metadata around it.”
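The snippet below is the kind of numeric-heavy loop where that per-object metadata dominates under the standard interpreter, and where ahead-of-time compilation, as Codon performs, can treat each annotated integer as a bare machine word. It is ordinary Python that runs unchanged under the standard interpreter; the example itself is ours, not from the Codon paper.

```python
# A tight integer loop: under CPython, every a and b is a boxed object
# carrying the runtime metadata Amarasinghe describes. A compiler that
# knows n, a, and b are plain ints can keep them in machine registers.
def fib(n: int) -> int:
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

print(fib(10))  # 55
```

The type hints are optional for CPython, but they illustrate the static information a compiler can exploit to skip the metadata bookkeeping entirely.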

AI chip race: Google says its Tensor chips compute faster than Nvidia’s A100

It also says that it has a healthy pipeline for chips in the future.

Search engine giant Google has claimed that the supercomputers it uses to develop its artificial intelligence (AI) models are faster and more energy efficient than comparable systems built with Nvidia Corporation’s chips. While most companies delving into the AI space get their processing power from Nvidia’s chips, Google uses a custom chip called the Tensor Processing Unit (TPU).

Google announced its Tensor chips at the peak of the COVID-19 pandemic, when businesses from electronics to automotive felt the pinch of the chip shortage.


AI-designed chips to further AI development

Interesting Engineering reported in 2021 that Google used AI to design its TPUs. Google claimed that the design process was completed in just six hours using AI compared to the months humans spend designing chips.

As with most things associated with AI these days, product iterations occur rapidly, and the TPU is currently in its fourth generation. Just as Microsoft stitched together chips to power OpenAI’s research requirements, Google put together 4,000 TPUs to build its supercomputer.

Mind control: 3D-patterned sensors allow robots to be controlled by thought

This novel technology looks like a sci-fi device. But it’s real.

It seems like something from a science fiction movie: donning a specialized electronic headband and using your mind to control a robot.



A new study published in the journal ACS Applied Nano Materials took a step toward making this a reality. The team produced “dry” sensors that can record the brain’s electrical activity despite the hair and the bumps and curves of the head by constructing a specific, 3D-patterned structure that does not rely on sticky conductive gels.

New Stanford report highlights the potential, costs, and risks of AI

AI-related jobs are on the rise but funding has taken a dip.

The technology world goes through waves of terminologies. Last year was all about building the metaverse, until attention turned to artificial intelligence (AI), which has since occupied the top news spots almost everywhere. To know whether this wave will last or wither, one needs to look at trusted sources in the domain, such as the report released by Stanford University.

For years now, the Institute for Human-Centered Artificial Intelligence at Stanford has been releasing its AI Index on an annual basis.



With AI occupying center stage for the past few months, the AI Index is a valuable resource to see what the future holds.

Joe Biden warns of AI dangers, urges tech companies to make safe products

Social media has already illustrated the harm that powerful technologies can do without the right safeguards, said the President.

U.S. President Joe Biden warned of the dangers of using artificial intelligence (AI) while putting the onus of making safe products on technology companies.



Elon Musk was one of the co-founders of OpenAI. Recently, the Tesla CEO co-signed a letter asking for a six-month moratorium on AI research and calling on companies to stop releasing products more powerful than GPT-4.

Stephen Wolfram on AI’s rapid progress & the “Post-Knowledge Work Era” | E1711

(0:00) Nick kicks off the show
(1:24) Under the hood of ChatGPT
(7:53) What is a neural net?
(10:05) Cast.ai — Get a free cloud cost audit with a personal consultation at https://cast.ai/twist
(11:33) Determining values and weights in a neural net
(18:28) Vanta — Get $1000 off your SOC 2 at https://vanta.com/twist
(19:33) Emulating the human brain
(23:26) Defining computational irreducibility
(26:14) Emergent behavior and the rules of language
(31:49) Discovering logic + creating a computational language
(38:10) Clumio — Start a free backup, or sign up for a demo at https://clumio.com/twist
(39:38) Wolfram’s ChatGPT plugin
(43:46) The rapid pace of AI
(58:45) The “Post-Knowledge Work” era
(1:03:52) The unintended consequences of AI
(1:11:45) Rewarding innovation
(1:16:12) The possibility of AGI
(1:20:07) Creating a general-purpose robotic system

Check out Wolfram Research: https://www.wolfram.com/

FOLLOW Stephen: https://twitter.com/stephen_wolfram
FOLLOW Jason: https://linktr.ee/calacanis

Thanks to our partners:
(10:05) Cast.ai — Get a free cloud cost audit with a personal consultation at https://cast.ai/twist
(18:28) Vanta — Get $1000 off your SOC 2 at https://vanta.com/twist
(38:10) Clumio — Start a free backup, or sign up for a demo at https://clumio.com/twist

Listen here:
Apple: https://podcasts.apple.com/us/podcast/this-week-in-startups-audio/id315114957
Spotify: https://open.spotify.com/show/6ULQ0ewYf5zmsDgBchlkr9
Overcast: https://overcast.fm/itunes315114957/this-week-in-startups-audio