
Researchers have not only built an AI "child" but are now also training AI on headcam footage from a real human baby.

In a press release, New York University announced that its data science researchers had strapped a camera to the head of a real, live, human toddler for 18 months to see how much an AI model could learn from it.

Most large language models (LLMs), like OpenAI’s GPT-4 and its competitors, are trained on “astronomical amounts of language input” that are many times larger than what infants receive when learning to speak a language during the first years of their lives.

Two decades ago, engineering designer proteins was a dream.

Now, thanks to AI, custom proteins are a dime a dozen. Made-to-order proteins often have specific shapes or components that give them abilities new to nature. From longer-lasting drugs and protein-based vaccines, to greener biofuels and plastic-eating proteins, the field is rapidly becoming a transformative technology.

Custom protein design depends on deep learning techniques. With large language models—the AI behind OpenAI’s blockbuster ChatGPT—dreaming up millions of structures beyond human imagination, the library of bioactive designer proteins is set to rapidly expand.

For the next year and a half, the camera captured snippets of the toddler's life. Sam crawled around the family's pets, watched his parents cook, and cried on the front porch with his grandma. All the while, the camera recorded everything he heard.

What sounds like a cute toddler home video is actually a daring concept: Can AI learn language like a child? The results could also reveal how children rapidly acquire language and concepts at an early age.

A new study in Science describes how researchers used Sam’s recordings to train an AI to understand language. With just a tiny portion of one child’s life experience over a year, the AI was able to grasp basic concepts—for example, a ball, a butterfly, or a bucket.
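The study paired what the child saw with what he heard using a contrastive objective: embeddings of video frames and of the co-occurring transcribed utterances are pulled together when they belong to the same moment and pushed apart otherwise. The toy NumPy sketch below illustrates that kind of InfoNCE-style loss on random stand-in embeddings; it is an illustration of the general technique, not the authors' implementation, and the batch size, embedding dimension, and temperature are arbitrary choices.

```python
import numpy as np

def contrastive_loss(frame_emb, utt_emb, temperature=0.07):
    """InfoNCE-style loss over a batch of (frame, utterance) pairs.

    Matching pairs sit on the diagonal of the similarity matrix and are
    pulled together; every off-diagonal combination is pushed apart.
    """
    # L2-normalize so the dot product is cosine similarity
    f = frame_emb / np.linalg.norm(frame_emb, axis=1, keepdims=True)
    u = utt_emb / np.linalg.norm(utt_emb, axis=1, keepdims=True)
    logits = f @ u.T / temperature          # (batch, batch) similarities
    # log-softmax over each frame's candidate utterances
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    idx = np.arange(len(f))                 # pair i matches pair i
    return -log_probs[idx, idx].mean()

rng = np.random.default_rng(0)
frames = rng.normal(size=(16, 64))   # stand-in frame embeddings
utts = rng.normal(size=(16, 64))     # stand-in utterance embeddings
loss = contrastive_loss(frames, utts)
```

With random, unrelated embeddings the loss hovers near log(batch size); training an encoder to lower it is what lets the model link words like "ball" to the right visual moments.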

Google has launched Gemini, a new artificial intelligence system that can seemingly understand and speak intelligently about almost any kind of prompt—pictures, text, speech, music, computer code, and much more.

This type of AI system is known as a multimodal model. It’s a step beyond just being able to handle text or images like previous algorithms. And it provides a strong hint of where AI may be going next: being able to analyze and respond to real-time information from the outside world.

Although Gemini’s capabilities might not be quite as advanced as they seemed in a viral video, which was edited from carefully curated text and still-image prompts, it is clear that AI systems are rapidly advancing. They are heading towards the ability to handle more and more complex inputs and outputs.

In the dynamic field of artificial intelligence (AI), the trajectory from one foundational model to the next has marked a notable shift. An escalating series of models, including Mamba, Mamba MoE, and MambaByte, along with newer approaches such as Cascade, Layer-Selective Rank Reduction (LASER), and Additive Quantization for Language Models (AQLM), has pushed language models to new levels of capability. The famous "Big Brain" meme captures this progression succinctly, humorously illustrating the climb from ordinary competence to extraordinary brilliance as one delves into the intricacies of each model.

Mamba

Mamba is a linear-time sequence model that stands out for its rapid inference. Foundation models are predominantly built on the Transformer architecture because of its effective attention mechanism, but Transformers run into efficiency problems on long sequences. In contrast to conventional attention-based Transformer architectures, Mamba introduces structured state space models (SSMs) to address those processing inefficiencies on extended sequences.
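The core idea behind an SSM's linear-time behavior is a simple recurrence: a hidden state is updated once per token, so cost grows linearly with sequence length instead of quadratically as in full self-attention. The sketch below shows that recurrence with fixed toy matrices; note that Mamba's actual selective SSM makes the dynamics input-dependent and uses a hardware-aware parallel scan, neither of which is attempted here, and all dimensions and parameter values are arbitrary.

```python
import numpy as np

def ssm_scan(A, B, C, x):
    """Linear-time recurrence of a discretized state space model:
    h_t = A @ h_{t-1} + B @ x_t,  y_t = C @ h_t.
    One state update per token: O(L) in sequence length L.
    """
    h = np.zeros(A.shape[0])
    ys = []
    for x_t in x:              # sequential scan over the sequence
        h = A @ h + B @ x_t    # fold the new input into the state
        ys.append(C @ h)       # read out an output from the state
    return np.stack(ys)

rng = np.random.default_rng(0)
d_state, d_in, length = 4, 2, 8
A = 0.9 * np.eye(d_state)            # toy fixed (non-selective) dynamics
B = rng.normal(size=(d_state, d_in))
C = rng.normal(size=(1, d_state))
x = rng.normal(size=(length, d_in))
y = ssm_scan(A, B, C, x)
print(y.shape)  # (8, 1)
```

Because the entire history is compressed into the fixed-size state `h`, inference needs only constant memory per step, which is the property that makes these models attractive for very long sequences.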

The creation of an artificial intelligence (AI) system that can analyze retinal fundus images to detect chronic kidney disease (CKD) and type 2 diabetes mellitus (T2DM) represents a groundbreaking advancement in medical technology. This AI model, developed using a substantial dataset of retinal images and advanced convolutional neural networks, has demonstrated exceptional accuracy in identifying these conditions. Its capability extends beyond mere detection, as it also shows promise in predicting the progression of these diseases based on retinal imaging and clinical metadata.

A notable innovation of this AI system is its ability to analyze smartphone images. This feature significantly enhances the accessibility of sophisticated diagnostic tools, especially in regions with limited healthcare resources. By leveraging ubiquitous smartphone cameras for medical imaging, the AI model paves the way for more widespread and convenient health screenings. This development is particularly impactful for healthcare delivery and access, as it puts critical diagnostic capabilities into the hands of more people, even in remote or underserved areas.

The AI’s proficiency in predicting the future development of CKD and T2DM is another aspect of its novelty. This predictive ability is crucial for timely intervention, potentially altering the trajectory of these chronic illnesses. Early detection and management are vital in battling CKD and T2DM, and this AI model’s predictive power could significantly improve patient outcomes.

https://youtu.be/tLtbWNi-Cgc?si=3i8BqTCAodSKnpkc

Micro-electro-mechanical systems (MEMS) are miniature devices that integrate mechanical elements, sensors, actuators, and electronics on a single silicon chip. These systems serve diverse applications, such as accelerometers in smartphones, gyroscopes in navigation systems, and pressure sensors in medical devices. MEMS devices can detect and respond to environmental changes, enabling the creation of smart, responsive technologies. Their small size, low power consumption, and versatility make MEMS crucial in fields like telecommunications, healthcare, automotive, and consumer electronics. Learn more about these tiny machines in the video above!

#science #technology #microscopic #nanotechnology #robotics #engineering