Blog

Page 883

Feb 6, 2024

The Universe Can Bend Physics Laws on Its Own, According to Researchers

Posted in category: space

Universe breaks physics laws all on its own.

Feb 6, 2024

AI Can Design Totally New Proteins From Scratch—It’s Time to Talk Biosecurity

Posted in categories: biotech/medical, robotics/AI

Two decades ago, engineering designer proteins was a dream.

Now, thanks to AI, custom proteins are a dime a dozen. Made-to-order proteins often have specific shapes or components that give them abilities new to nature. From longer-lasting drugs and protein-based vaccines, to greener biofuels and plastic-eating proteins, the field is rapidly becoming a transformative technology.

Custom protein design depends on deep learning techniques. With large language models—the AI behind OpenAI’s blockbuster ChatGPT—dreaming up millions of structures beyond human imagination, the library of bioactive designer proteins is set to rapidly expand.

Feb 6, 2024

An AI Just Learned Language Through the Eyes and Ears of a Toddler

Posted in categories: habitats, robotics/AI

For a year and a half, a head-mounted camera worn by a toddler named Sam captured snippets of his life. He crawled around the family’s pets, watched his parents cook, and cried on the front porch with grandma. All the while, the camera recorded everything he heard.

What sounds like a cute toddler home video is actually a daring concept: Can AI learn language like a child? The results could also reveal how children rapidly acquire language and concepts at an early age.

A new study in Science describes how researchers used Sam’s recordings to train an AI to understand language. With just a tiny portion of one child’s life experience over a year, the AI was able to grasp basic concepts—for example, a ball, a butterfly, or a bucket.
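The study’s approach is contrastive: the model learns to associate camera frames with the words heard at the same moment. Below is a minimal CLIP-style sketch of that pairing idea in PyTorch; the encoders, dimensions, and random stand-in data are illustrative placeholders, not the authors’ actual architecture or training set.

```python
# Minimal sketch of contrastive frame-word pairing: embed camera frames and the
# words heard at the same moment, pull matched pairs together, push mismatches apart.
# Encoders and data here are untrained placeholders, not the study's real model.
import torch
import torch.nn.functional as F

class TinyEncoder(torch.nn.Module):
    """Placeholder encoder mapping an input vector into a shared embedding space."""
    def __init__(self, in_dim, embed_dim=64):
        super().__init__()
        self.proj = torch.nn.Linear(in_dim, embed_dim)

    def forward(self, x):
        return F.normalize(self.proj(x), dim=-1)  # unit-length embeddings

image_encoder = TinyEncoder(in_dim=512)  # stands in for a vision backbone
text_encoder = TinyEncoder(in_dim=300)   # stands in for a word/utterance embedder

# A batch of co-occurring (frame, utterance) pairs, as random stand-in features.
frames = torch.randn(8, 512)
utterances = torch.randn(8, 300)

img_emb = image_encoder(frames)
txt_emb = text_encoder(utterances)

# Similarity matrix: entry (i, j) compares frame i with utterance j.
logits = img_emb @ txt_emb.t() / 0.07        # temperature-scaled cosine similarity
targets = torch.arange(len(frames))          # matched pairs lie on the diagonal
loss = (F.cross_entropy(logits, targets) +       # frames -> utterances
        F.cross_entropy(logits.t(), targets)) / 2  # utterances -> frames
print(loss.item())
```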

Feb 6, 2024

Google’s Gemini AI Hints at the Next Great Leap for the Technology

Posted in categories: information science, media & arts, robotics/AI

Google has launched Gemini, a new artificial intelligence system that can seemingly understand and speak intelligently about almost any kind of prompt—pictures, text, speech, music, computer code, and much more.

This type of AI system is known as a multimodal model. It’s a step beyond just being able to handle text or images like previous algorithms. And it provides a strong hint of where AI may be going next: being able to analyze and respond to real-time information from the outside world.

Although Gemini’s capabilities might not be quite as advanced as they seemed in a viral video, which was edited from carefully curated text and still-image prompts, it is clear that AI systems are rapidly advancing. They are heading towards the ability to handle more and more complex inputs and outputs.
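A multimodal prompt simply mixes input types in one request. As a rough illustration, here is a sketch using Google’s generativeai Python SDK roughly as it was documented around Gemini’s launch; the model name, file name, and exact call signatures are assumptions and may have changed in later releases.

```python
# Rough sketch of a multimodal prompt: one request mixing an image with text.
# Assumes the google-generativeai Python SDK as documented around launch;
# model names and signatures may differ in current releases.
import google.generativeai as genai
from PIL import Image

genai.configure(api_key="YOUR_API_KEY")  # placeholder key

model = genai.GenerativeModel("gemini-pro-vision")  # vision-capable model name at launch
photo = Image.open("photo.jpg")                     # placeholder local image

# The prompt is a list mixing modalities: text alongside an image.
response = model.generate_content(
    ["Describe what is in this picture and suggest a title for it.", photo]
)
print(response.text)
```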

Feb 6, 2024

Paper page — DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models

Posted in category: mathematics

DeepSeekMath: pushing the limits of mathematical reasoning in open language models.



Feb 6, 2024

What Physicists Have Been Missing

Posted in category: quantum physics

An exciting new theory reconciles gravity and quantum physics. I think it’s wrong. But I may be too.

Feb 6, 2024

L5440a (1).pdf

Posted in category: quantum physics

The new quantum logic.



Feb 6, 2024

Perception, explained in 3 minutes

Posted in category: futurism

Does perception exist outside of our own nervous system? Philosopher Alva Noë thinks so. We can visualize the back of a tomato, even if our eyes cannot see it. We aren’t offended by profane statements written in a language we aren’t fluent in. This is because our perception is based on more than our five senses; it relies on experience and context as well.

Alva Noë unpacks this puzzle with a few examples, from being able to visualize things we are not looking at, to a phenomenon called “change blindness.”

Ultimately, these examples challenge our usual understanding of perception and support the idea that the way one person perceives an object may not precisely match the way another does.

Feb 6, 2024

Mathematical model connects innovation and obsolescence to unify insights across diverse fields

Posted in categories: innovation, mathematics

In Lewis Carroll’s Through the Looking-Glass, the Red Queen tells Alice, “It takes all the running you can do, to keep in the same place.” The race between innovation and obsolescence is like this.

Recent evidence of slowing technological and scientific progress, set against accelerating epidemiological risks in a globalized world, underscores the importance of the relative rates of innovation and obsolescence.

When does innovation outpace, or fail to outpace, obsolescence? Our understanding of this dynamic is nascent, and the way innovation is discussed is largely fragmented across fields. Despite some qualitative efforts to bridge this gap, insights are rarely transferred between them.

Feb 6, 2024

A Meme’s Glimpse into the Pinnacle of Artificial Intelligence (AI) Progress in a Mamba Series: LLM Enlightenment

Posted in category: robotics/AI

In the dynamic field of Artificial Intelligence (AI), the trajectory from one foundational model to another has represented an amazing paradigm shift. The escalating series of models, including Mamba, Mamba MOE, MambaByte, and the latest approaches like Cascade, Layer-Selective Rank Reduction (LASER), and Additive Quantization for Language Models (AQLM), has revealed new levels of cognitive power. The famous ‘Big Brain’ meme has succinctly captured this progression, humorously illustrating the rise from ordinary competence to extraordinary brilliance as one delves into the intricacies of each language model.

Mamba

Mamba is a linear-time sequence model that stands out for its rapid inference capabilities. Foundation models are predominantly built on the Transformer architecture because of its effective attention mechanism, but Transformers run into efficiency problems on long sequences. In contrast to conventional attention-based Transformer architectures, Mamba introduces structured State Space Models (SSMs) to address those processing inefficiencies on extended sequences.
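For intuition, the state space models Mamba builds on can be unrolled as a simple linear recurrence, h_t = A·h_{t-1} + B·x_t with readout y_t = C·h_t, computed in one pass whose cost grows linearly with sequence length. The toy NumPy sketch below shows only that recurrence; it leaves out Mamba’s input-dependent (selective) parameters and its hardware-aware scan.

```python
# Toy linear state space model (SSM): h_t = A @ h_{t-1} + B @ x_t, y_t = C @ h_t.
# A single sweep over the sequence gives cost linear in its length, unlike full
# attention's quadratic pairwise comparisons. This omits Mamba's selective,
# input-dependent parameters and hardware-aware scan.
import numpy as np

def ssm_scan(x, A, B, C):
    """Run a discrete SSM over a sequence x of shape (seq_len, input_dim)."""
    h = np.zeros(A.shape[0])
    outputs = []
    for x_t in x:                # one linear-time sweep over the sequence
        h = A @ h + B @ x_t      # update the hidden state
        outputs.append(C @ h)    # read an output out of the state
    return np.stack(outputs)

rng = np.random.default_rng(0)
seq_len, input_dim, state_dim, output_dim = 1000, 4, 16, 4
A = 0.9 * np.eye(state_dim)                        # stable toy dynamics
B = 0.1 * rng.normal(size=(state_dim, input_dim))
C = 0.1 * rng.normal(size=(output_dim, state_dim))
x = rng.normal(size=(seq_len, input_dim))

y = ssm_scan(x, A, B, C)
print(y.shape)  # (1000, 4): one output per timestep, computed in O(seq_len)
```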
