AI tool uses face photos to estimate biological age and predict cancer outcomes

Eyes may be the window to the soul, but a person’s biological age could be reflected in their facial characteristics. Investigators from Mass General Brigham developed a deep learning algorithm called “FaceAge” that uses a photo of a person’s face to predict biological age and survival outcomes for patients with cancer.

They found that patients with cancer, on average, had a higher FaceAge than those without and appeared about five years older than their chronological age.

Older FaceAge predictions were associated with worse overall survival across multiple cancer types. The researchers also found that FaceAge outperformed clinicians in predicting the short-term life expectancies of patients receiving palliative radiotherapy.
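To make the pipeline concrete, here is a minimal sketch of how a FaceAge-style estimator could be wired up: a convolutional network regresses an age from a cropped face photo, and the gap between that prediction and the patient’s chronological age becomes the prognostic signal. The backbone, weights, preprocessing, and file name below are illustrative stand-ins, not the published model.

import torch
import torch.nn as nn
from PIL import Image
from torchvision import models, transforms

class FaceAgeRegressor(nn.Module):
    """Illustrative age regressor: an ImageNet-pretrained CNN with a one-unit head."""
    def __init__(self):
        super().__init__()
        self.backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
        self.backbone.fc = nn.Linear(self.backbone.fc.in_features, 1)  # single age output

    def forward(self, x):
        return self.backbone(x).squeeze(-1)  # predicted biological age in years

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

model = FaceAgeRegressor().eval()
face = preprocess(Image.open("face_photo.jpg").convert("RGB")).unsqueeze(0)
with torch.no_grad():
    predicted_age = model(face).item()
# The difference between predicted_age and chronological age is the signal the
# study links to survival outcomes.
print(f"Estimated biological age: {predicted_age:.1f} years")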

BabyBot: Soft robotic infant mimics feeding behaviors from birth to 6 months old

A combined team of roboticists from EPFL’s CREATE Lab and Nestlé Research Lausanne, both in Switzerland, has developed a soft robot designed to mimic human infant motor development and the way infants feed.

In their paper published in the journal npj Robotics, the group describes how they used a variety of techniques to give their robot the ability to simulate the way human infants feed, from birth until approximately six months old.

Prior research has shown that it is difficult to develop invasive medical procedures for infants and babies because of the lack of usable test subjects. Methods currently in use, such as simulations, observational instruments and imaging, tend to fall short because they differ in important ways from real human infants. To overcome these problems, the team in Switzerland designed, built, and tested a soft robotic infant that can be used for such purposes.

The Rise of Self-Improving AI Agents: Will It Surpass OpenAI?

What happens when AI starts improving itself without human input? Self-improving AI agents are evolving faster than anyone predicted—rewriting their own code, learning from mistakes, and inching closer to surpassing giants like OpenAI. This isn’t science fiction; it’s the AI singularity’s opening act, and the stakes couldn’t be higher.

How do self-improving agents work? Unlike static models such as GPT-4, these systems use recursive self-improvement—analyzing their flaws, generating smarter algorithms, and iterating endlessly. Projects like AutoGPT and BabyAGI already demonstrate eerie autonomy, from debugging code to launching micro-businesses. We’ll dissect their architecture and compare them to OpenAI’s human-dependent models. Spoiler: The gap is narrowing fast.
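To illustrate the loop behind that “recursive self-improvement” claim, here is a minimal, hypothetical sketch: the agent drafts a solution, critiques its own output, and revises until it is satisfied. The call_llm parameter stands in for whatever chat-completion API an agent wraps; this is not the actual AutoGPT or BabyAGI code.

from typing import Callable

def self_improve(task: str, call_llm: Callable[[str], str], max_iterations: int = 5) -> str:
    """Recursive self-improvement loop: draft, self-critique, revise, repeat."""
    solution = call_llm(f"Solve this task:\n{task}")
    for _ in range(max_iterations):
        critique = call_llm(
            f"Task:\n{task}\n\nCurrent solution:\n{solution}\n\n"
            "List concrete flaws in this solution, or reply NO FLAWS."
        )
        if "no flaws" in critique.lower():
            break  # the agent judges its own work good enough
        solution = call_llm(
            f"Task:\n{task}\n\nFlawed solution:\n{solution}\n\n"
            f"Critique:\n{critique}\n\nRewrite the solution to fix every flaw."
        )
    return solution

# Example run with a canned stand-in "model" so the sketch executes end to end.
if __name__ == "__main__":
    canned = iter(["draft v1", "It is too vague.", "draft v2", "NO FLAWS"])
    print(self_improve("Write a haiku about entropy.", lambda prompt: next(canned)))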

Why is OpenAI sweating? While OpenAI focuses on safety and scalability, self-improving agents prioritize raw, exponential growth. Imagine an AI that optimizes itself 24/7, mastering quantum computing over a weekend or cracking protein folding in hours. But there’s a dark side: no “off switch,” biased self-modifications, and the risk of uncontrolled superintelligence.

Who will dominate the AI race? We’ll explore leaked research, ethical debates, and the critical question: Can OpenAI’s cautious approach outpace agents that learn to outthink their creators? Like, subscribe, and hit the bell—the future of AI is rewriting itself.

Can self-improving AI surpass OpenAI? What are autonomous AI agents? How dangerous is recursive AI? Will AI become uncontrollable? Can we stop self-improving AI? This video exposes the truth. Watch now—before the machines outpace us.

New study finds ‘simple selfie’ can help predict patients’ cancer survival

A selfie can be used as a tool to help doctors determine a patient’s “biological age” and judge how well they may respond to cancer treatment, a new study suggests.

Because humans age at “different rates”, their physical appearance may help give insights into their so-called “biological age” – how old a person is physiologically, academics said.

The new FaceAge AI tool can estimate a person’s biological age, as opposed to their actual age, by scanning an image of their face, a new study found.

Shape-shifting joints could transform wearable devices and robotic movement

It’s easy to take joint mobility for granted. Without thinking, it’s simple enough to turn the pages of a book or bend to stretch out a sore muscle. Designers don’t have the same luxury. When building a joint, be it for a robot or a wrist brace, designers seek customizability across all degrees of freedom but are often limited in how well a single design can adapt to different use contexts.

Researchers at Carnegie Mellon University’s College of Engineering have developed an algorithm to design metastructures that are reconfigurable across six degrees of freedom and allow for stiffness tunability. The algorithm can interpret the kinematic motions that are needed for multiple configurations of a device and assist designers in creating such reconfigurability. This advancement gives designers more control over the functionality of joints for various applications.

The team demonstrated the structure’s versatile capabilities via multiple wearable devices tailored for unique movement functions, body areas, and uses.
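As a toy illustration of the design target (not the CMU algorithm itself), a reconfigurable joint can be described as a stiffness profile over its six degrees of freedom, with each configuration of a wearable device swapping in a different profile for the metastructure to realize. The names and units below are assumptions made for the sketch.

from dataclasses import dataclass

# Three translational and three rotational degrees of freedom.
DOF_NAMES = ["tx", "ty", "tz", "rx", "ry", "rz"]

@dataclass
class JointConfiguration:
    name: str
    stiffness: dict  # illustrative units: N/m for translations, N*m/rad for rotations

# Hypothetical wrist-brace configurations: free, locked, and flexion-only.
wrist_brace = [
    JointConfiguration("free_motion", {d: 0.1 for d in DOF_NAMES}),
    JointConfiguration("immobilized", {d: 500.0 for d in DOF_NAMES}),
    JointConfiguration("flexion_only", {**{d: 500.0 for d in DOF_NAMES}, "rx": 0.1}),
]

for config in wrist_brace:
    compliant = [d for d, k in config.stiffness.items() if k < 1.0]
    print(f"{config.name}: compliant along {compliant or 'none'}")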

AI model translates text commands into motion for diverse robots and avatars

Brown University researchers have developed an artificial intelligence model that can generate movement in robots and animated figures in much the same way that AI models like ChatGPT generate text.

A paper describing this work is published on the arXiv preprint server.

The model, called MotionGlot, enables users to simply type an action, such as “walk forward a few steps and take a right,” and the model will generate accurate representations of that motion to command a robot or animated avatar.
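The “generates motion the way ChatGPT generates text” idea can be sketched as autoregressive prediction over a discrete vocabulary of motion tokens, conditioned on the text prompt. The tiny model below is a hypothetical stand-in rather than the MotionGlot architecture; a real system would also decode the predicted tokens back into joint poses.

import torch
import torch.nn as nn

TEXT_VOCAB, MOTION_VOCAB, D_MODEL = 8000, 512, 256

class TextToMotion(nn.Module):
    """Toy decoder that predicts the next motion token given text and prior motion."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(TEXT_VOCAB + MOTION_VOCAB, D_MODEL)
        layer = nn.TransformerEncoderLayer(D_MODEL, nhead=4, batch_first=True)
        self.decoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(D_MODEL, MOTION_VOCAB)  # logits over motion tokens

    def forward(self, tokens):
        h = self.decoder(self.embed(tokens))
        return self.head(h[:, -1])  # prediction for the next motion token

@torch.no_grad()
def generate_motion(model, text_tokens, steps=30):
    seq = torch.tensor([text_tokens])
    motion = []
    for _ in range(steps):
        next_tok = model(seq).argmax(-1)  # greedy choice of the next motion token
        motion.append(next_tok.item())
        seq = torch.cat([seq, (next_tok + TEXT_VOCAB).unsqueeze(0)], dim=1)
    return motion  # downstream, these indices would be decoded into joint poses

model = TextToMotion().eval()
print(generate_motion(model, text_tokens=[5, 17, 42]))  # e.g. "walk forward, turn right"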

Like human brains, large language models reason about diverse data in a general way

While early language models could only process text, contemporary large language models now perform highly diverse tasks on different types of data. For instance, LLMs can understand many languages, generate computer code, solve math problems, or answer questions about images and audio.

MIT researchers probed the inner workings of LLMs to better understand how they process such assorted data, and found evidence that they share some similarities with the human brain.

Neuroscientists believe the human brain has a “semantic hub” in the anterior temporal lobe that integrates semantic information from various modalities, like visual data and tactile inputs. This semantic hub is connected to modality-specific “spokes” that route information to the hub.

The MIT researchers found that LLMs use a similar mechanism by abstractly processing data from diverse modalities in a central, generalized way. For instance, a model that has English as its dominant language would rely on English as a central medium to process inputs in Japanese or reason about arithmetic, computer code, etc. Furthermore, the researchers demonstrate that they can intervene in a model’s semantic hub by using text in the model’s dominant language to change its outputs, even when the model is processing data in other languages.
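One way to probe the semantic-hub idea, in the spirit of the MIT experiments though not their exact method, is to check whether a model’s hidden states for a translation pair are closer to each other than to those of an unrelated sentence. The model choice, layer index, and pooling below are assumptions made for the sketch.

import torch
from transformers import AutoModel, AutoTokenizer

name = "gpt2"  # small stand-in; the study examined larger LLMs
tok = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name, output_hidden_states=True).eval()

def hidden_state(text, layer=6):
    ids = tok(text, return_tensors="pt")
    with torch.no_grad():
        out = model(**ids)
    return out.hidden_states[layer].mean(dim=1).squeeze(0)  # mean-pool over tokens

english = hidden_state("The cat is sleeping on the sofa.")
japanese = hidden_state("猫がソファで寝ています。")  # same meaning in Japanese
unrelated = hidden_state("Stock prices fell sharply today.")

cos = torch.nn.functional.cosine_similarity
print("EN vs JA (same meaning):", cos(english, japanese, dim=0).item())
print("EN vs unrelated:        ", cos(english, unrelated, dim=0).item())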