
Deep within our brain’s temporal lobes, two almond-shaped cell masses help keep us alive. This tiny region, called the amygdala, assists with a variety of brain activities. It helps us learn and remember. It triggers our fight-or-flight response. It even promotes the release of a feel-good chemical called dopamine.

Scientists have learned all this by studying the amygdala over hundreds of years. But we still haven’t reached a full understanding of how these processes work.

Now, Cold Spring Harbor Laboratory neuroscientist Bo Li has brought us several important steps closer. His lab recently made a series of discoveries showing how somatostatin-expressing (Sst+) neurons in the central amygdala (CeA) help us learn about threats and rewards. He also demonstrated how these neurons relate to dopamine. The discoveries could lead to future treatments for conditions such as anxiety.


But maybe the future of these models is more focused than the boil-the-ocean approach we’ve seen from OpenAI and others, who want to be able to answer every question under the sun.


The amazing abilities of OpenAI’s ChatGPT wouldn’t be possible without large language models. These models are trained on billions, sometimes trillions, of examples of text. The idea behind ChatGPT is to understand language so well that it can anticipate, in a split second, which word plausibly comes next. That takes a ton of training, compute resources, and developer savvy to make happen.
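To make the idea concrete, here is a toy sketch of next-word prediction, the task these models are trained on. It uses simple bigram counts rather than a neural network, and the tiny corpus is invented for illustration; real LLMs learn far richer statistics from billions of documents.

```python
from collections import Counter, defaultdict

def train_bigram(text):
    """Count, for each word, the words that follow it in the corpus."""
    counts = defaultdict(Counter)
    words = text.lower().split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, word):
    """Return the word most often seen after `word`, or None if unseen."""
    following = counts.get(word.lower())
    if not following:
        return None
    return following.most_common(1)[0][0]

corpus = "the model predicts the next word and the next word after that"
model = train_bigram(corpus)
print(predict_next(model, "the"))  # -> 'next' (follows 'the' most often)
```

A neural language model does conceptually the same thing, except the "counts" are replaced by learned parameters that generalize to word sequences never seen verbatim in training.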

In the AI-driven future, each company’s own data could be its most valuable asset. An insurance company has a completely different lexicon than a hospital, an automotive company, or a law firm. Combine that lexicon with your customer data and the full body of content across the organization, and you have a language model. It may not be large in the truly large-language-model sense, but it would be just the model you need: a model created for one and not for the masses.
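As a rough illustration of what building such a “model for one” might look like, the sketch below fine-tunes a small open-source causal language model on a hypothetical in-house corpus using the Hugging Face transformers library. The base model, file name, and hyperparameters are assumptions for the example, not anything described in the article.

```python
# Minimal sketch: fine-tune a small causal LM on a company's own text.
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)
from datasets import load_dataset

model_name = "distilgpt2"  # small base model; an assumption for the sketch
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(model_name)

# Hypothetical corpus: one document per line of in-house text.
corpus = load_dataset("text", data_files={"train": "company_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = corpus["train"].map(tokenize, batched=True,
                                remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="domain-lm", num_train_epochs=1,
                           per_device_train_batch_size=4),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```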

“The eyes are the windows to the soul.” It’s an ancient saying, and it captures what we know intuitively to be true: you can understand a great deal about a person by looking deep into their eyes. But how? And can we use this fact to understand disease?

One company is making big strides in this direction. Israel’s NeuraLight, which just won the Health and Medtech Innovation award at SXSW, was founded to bring science and AI to understanding the brain through the eyes.

A focal disease for NeuraLight is ALS, which is currently diagnosed through a subjective survey of about a dozen questions, followed by tests such as an EMG and MRI.


NeuraLight’s approach is different: the patient’s eyes follow dots on a screen while the AI system measures 106 parameters, such as dilation and blink rate, in less than 10 minutes. In other words, this will be an AI-enabled digital biomarker.
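NeuraLight has not published its pipeline, but the flavor of such oculometric measurements is easy to sketch. The snippet below derives two hypothetical parameters, blink rate and mean pupil dilation, from a synthetic eye-tracking time series; the sampling rate, field names, and synthetic data are assumptions for illustration.

```python
import numpy as np

SAMPLE_HZ = 60  # assumed camera sampling rate

def blink_rate(pupil_detected):
    """Blinks per minute: count falling edges where the pupil disappears."""
    falling_edges = np.diff(pupil_detected.astype(int)) == -1
    minutes = len(pupil_detected) / SAMPLE_HZ / 60
    return falling_edges.sum() / minutes

def mean_dilation(pupil_diameter_mm):
    """Average pupil diameter over valid (non-blink) samples."""
    valid = pupil_diameter_mm[~np.isnan(pupil_diameter_mm)]
    return float(valid.mean())

# Two minutes of synthetic data: pupil visible except during two brief blinks.
n_samples = 2 * 60 * SAMPLE_HZ
detected = np.ones(n_samples, dtype=bool)
detected[1000:1010] = detected[4000:4012] = False  # two blinks
print(blink_rate(detected))  # -> 1.0 blink per minute
```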

Google also says that it has a healthy pipeline of chips for the future.

Search engine giant Google has claimed that the supercomputers it uses to develop its artificial intelligence (AI) models are faster and more energy efficient than Nvidia Corporation’s. While processing power for most companies delving into the AI space comes from Nvidia’s chips, Google uses a custom chip called Tensor Processing Unit (TPU).

Google announced its Tensor chips during the peak of the COVID-19 pandemic, when businesses from electronics to automotive faced the pinch of a global chip shortage.


AI-designed chips to further AI development

Interesting Engineering reported in 2021 that Google used AI to design its TPUs. Google claimed that AI completed the design process in just six hours, compared with the months humans typically spend designing chips.

As with most things in AI these days, product iterations occur rapidly, and the TPU is now in its fourth generation. Just as Microsoft stitched together chips to power OpenAI’s research requirements, Google connected 4,000 TPUs to build its supercomputer.

Patreon: https://www.patreon.com/daveshap
GitHub: https://github.com/daveshap
Cognitive AI Lab Discord: https://discord.gg/yqaBG5rh4j

Artificial Sentience Reddit: https://www.reddit.com/r/ArtificialSentience/
Heuristic Imperatives Reddit: https://www.reddit.com/r/HeuristicImperatives/

DISCLAIMER: This video is not medical, financial, or legal advice. This is just my personal story and research findings. Always consult a licensed professional.

I work to better myself and the rest of humanity.

A team of biomedical researchers has developed a non-invasive, more accurate, and inexpensive “aging clock” for tracking, and potentially helping to slow, human aging. The new system, called “eyeAge,” applies trained deep-learning models to retinal images of the eye’s fundus (the deepest area of the eye).

The researchers are affiliated with the Buck Institute for Research on Aging, Google Research, Google Health, Zuckerberg San Francisco General Hospital, the Postgraduate Institute of Medical Education and Research (India), and the University of California, San Francisco.

Tracking eye changes that accompany aging and age-related diseases: the eyeAge system.
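
The study’s exact architecture aside, the core recipe of a deep-learning “aging clock” can be sketched: fine-tune a pretrained image model to regress chronological age from fundus photographs. Everything in the sketch below (the ResNet backbone, the loss, the hyperparameters) is an assumption for illustration, not the authors’ published pipeline.

```python
import torch
import torch.nn as nn
from torchvision import models

# Replace the classifier head of an ImageNet-pretrained CNN with a single
# regression output: predicted age in years.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 1)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.L1Loss()  # mean absolute error, in years

def train_step(images, ages):
    """One gradient step on a batch of fundus images with known ages."""
    optimizer.zero_grad()
    predicted = model(images).squeeze(1)
    loss = loss_fn(predicted, ages)
    loss.backward()
    optimizer.step()
    return loss.item()

# Smoke test: random tensors standing in for preprocessed fundus images.
batch = torch.randn(4, 3, 224, 224)
ages = torch.tensor([34.0, 52.0, 61.0, 45.0])
print(train_step(batch, ages))
```

Once trained, the gap between the model’s predicted “eye age” and a person’s actual age can serve as the biomarker of interest.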

Scientists have found an antibiotic-free way of treating ‘golden staph’ skin infections, which are the scourge of some cancer patients and a threat to hospital-goers everywhere.

In the lab study, researchers at the University of Copenhagen used an artificial version of an enzyme naturally produced by bacteriophages (viruses that infect bacteria) to eradicate Staphylococcus aureus, or golden staph, in biopsy samples from people with skin lymphoma.

“To people who are severely ill with skin lymphoma, staphylococci can be a huge, sometimes insoluble problem, as many are infected with a type of Staphylococcus aureus that is resistant to antibiotics,” explains immunologist Niels Ødum of the University of Copenhagen.