
AI is all that matters now, and reaching AGI before 2030 is all that matters for this decade.


A substantial percentage of human clinical trials, including those evaluating investigational anti-aging drugs, fail in Phase II, the phase in which a drug's efficacy is tested. This poor success rate is due in part to inadequate target choice and the inability to identify the group of patients most likely to respond to a specific agent. The challenge is further complicated by differences in patients' biological age, as the importance of therapeutic targets varies between age groups. Unfortunately, most targets are discovered without considering patients' age and are tested in a relatively young population (the average age in Phase I is 24). Hence, identifying potential targets that are implicated in multiple age-associated diseases, and that also play a role in the basic biology of aging, may have substantial benefits.

Identifying dual-purpose targets that are implicated in both aging and disease would extend healthspan and delay age-related health issues: even if the target is not the most important one in a specific patient, the drug would still benefit that patient.

“When it comes to target identification in chronic diseases, it is important to prioritize the targets that are implicated in age-associated diseases, implicated in more than one hallmark of aging, and safe,” said Zhavoronkov. “So that in addition to treating a disease, the drug would also treat aging – it is an off-target bonus.”
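As a loose illustration of that prioritization heuristic, the hypothetical Python sketch below ranks candidate targets by how many age-associated diseases and hallmarks of aging they are implicated in, and drops targets with known safety liabilities. All names, fields, and numbers are invented for the example and are not from the study.

from dataclasses import dataclass

@dataclass
class Target:
    name: str
    age_associated_diseases: int  # number of age-related diseases the target is implicated in
    hallmarks_of_aging: int       # number of hallmarks of aging the target touches
    known_safety_liability: bool  # True if serious safety concerns are already known

def dual_purpose_score(t: Target) -> float:
    """Rank targets hitting several age-related diseases and more than one
    hallmark of aging; exclude targets with known safety liabilities."""
    if t.known_safety_liability:
        return float("-inf")  # unsafe targets are never prioritized
    return t.age_associated_diseases + 2.0 * min(t.hallmarks_of_aging, 3)

candidates = [
    Target("TARGET_A", age_associated_diseases=4, hallmarks_of_aging=2, known_safety_liability=False),
    Target("TARGET_B", age_associated_diseases=6, hallmarks_of_aging=1, known_safety_liability=True),
    Target("TARGET_C", age_associated_diseases=3, hallmarks_of_aging=3, known_safety_liability=False),
]

for t in sorted(candidates, key=dual_purpose_score, reverse=True):
    print(t.name, dual_purpose_score(t))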

Now, an intelligent robotic fruit packing machine is able to automate the most labor-intensive job in the packhouse. For post-harvest operators, this robot could prove to be a game-changer in the industry.

The Aporo II robotic produce packaging machine by Global Pac Technologies builds on the proven technology of the original Aporo I Produce Packer, first developed in 2018. The latest model accommodates twice the throughput, packing 240 fruit per minute and saving between two and four labor units per double packing belt.

According to Cameron McInness, Director of Jenkins Group, the New Zealand-based company that co-founded Global Pac Technologies with US-based Van Doren Sales Inc., Aporo II can be retrofitted across two packing belts instead of one, effectively doubling the throughput and the labor savings that Aporo I could deliver.

Artificial Intelligence has been improving rapidly these past few years, and according to top AI scientists such as Yann LeCun, it is now becoming obvious that AI in 2030 will be almost unrecognizable compared to today's systems. AI will be part of everyday life in the form of digital assistants and more. In this future predictions video, I'll show you what other awesome abilities the best AI of the future will have.

TIMESTAMPS:
00:00 The first Human-Level AI
01:34 What are SSL World Models?
02:26 What is Self Supervised Learning?
05:24 Learning like Humans
09:14 The Implications of Human AI
10:57 Last Words

#ai #agi #technology

Mayo Clinic researchers have proposed a new model for mapping the symptoms of Alzheimer’s disease to brain anatomy. This model was developed by applying machine learning to patient brain imaging data. It uses the entire function of the brain rather than specific brain regions or networks to explain the relationship between brain anatomy and mental processing. The findings are reported in Nature Communications.
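The paper's method is only summarized here, so the following Python sketch is merely an assumed, simplified illustration of the general idea: compressing whole-brain imaging data into a few global patterns and relating those patterns to cognition, instead of testing one brain region at a time. The data are synthetic and the code is not the authors' pipeline.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Synthetic stand-in for imaging data: 200 patients x 5,000 voxel-level features.
brain_maps = rng.normal(size=(200, 5000))
cognitive_score = rng.normal(size=200)

# Region-by-region approach: relate one region's average signal to cognition.
single_region = brain_maps[:, :50].mean(axis=1, keepdims=True)
region_model = LinearRegression().fit(single_region, cognitive_score)

# Whole-brain approach: compress the full pattern of brain function into a few
# global components, then relate those components to cognition.
components = PCA(n_components=10).fit_transform(brain_maps)
whole_brain_model = LinearRegression().fit(components, cognitive_score)

# With synthetic data the fits are meaningless; this only shows the workflow.
print("region-only R^2:", region_model.score(single_region, cognitive_score))
print("whole-brain R^2:", whole_brain_model.score(components, cognitive_score))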

“This new model can advance our understanding of how the brain works and breaks down during aging and Alzheimer’s disease, providing new ways to monitor, prevent and treat disorders of the mind,” says David T. Jones, M.D., a Mayo Clinic neurologist and lead author of the study.

Alzheimer’s disease typically has been described as a protein-processing problem. The toxic proteins amyloid and tau deposit in areas of the brain, causing neuron failure that results in clinical symptoms such as memory loss, difficulty communicating and confusion.

Nvidia’s AI model is pretty impressive: a tool that quickly turns a collection of 2D snapshots into a 3D-rendered scene. The tool is called Instant NeRF, referring to “neural radiance fields”.

Known as inverse rendering, the process uses AI to approximate how light behaves in the real world, enabling researchers to reconstruct a 3D scene from a handful of 2D images taken at different angles. The Nvidia research team has developed an approach that accomplishes this task almost instantly, making it one of the first models of its kind to combine ultra-fast neural network training and rapid rendering.
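The article does not detail how Instant NeRF achieves its speed, but the core step of NeRF-style inverse rendering is well established: sample points along each camera ray, query a model for density and color at those points, and alpha-composite the samples into a pixel that can be compared against the real photo. A minimal Python sketch of that compositing step, with made-up sample values:

import numpy as np

def composite_ray(densities, colors, deltas):
    """Alpha-composite samples along one ray (the NeRF volume-rendering rule).

    densities: (N,) non-negative volume densities at the sample points
    colors:    (N, 3) RGB values predicted at the sample points
    deltas:    (N,) distances between consecutive samples
    """
    alpha = 1.0 - np.exp(-densities * deltas)                        # opacity of each segment
    transmittance = np.cumprod(np.concatenate(([1.0], 1.0 - alpha))[:-1])  # light surviving to each sample
    weights = transmittance * alpha                                  # contribution of each sample
    return (weights[:, None] * colors).sum(axis=0)                   # final pixel color

# Made-up samples along a single ray, just to show the call.
densities = np.array([0.1, 0.8, 2.0, 0.3])
colors = np.array([[0.9, 0.2, 0.2], [0.8, 0.3, 0.3], [0.2, 0.2, 0.9], [0.1, 0.1, 0.1]])
deltas = np.full(4, 0.25)
print(composite_ray(densities, colors, deltas))

Training then amounts to rendering many pixels this way and minimizing the difference between the rendered colors and the corresponding pixels in the input photographs.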

NeRFs use neural networks to represent and render realistic 3D scenes based on an input collection of 2D images.
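As a rough illustration of what such a network can look like (the generic NeRF formulation, not Nvidia's specific architecture), the model maps a positionally encoded 3D point and viewing direction to a volume density and an RGB color; the layer sizes and frequency counts below are arbitrary choices for the sketch.

import torch
import torch.nn as nn

def positional_encoding(x, num_freqs=6):
    """Map coordinates to sin/cos features so the MLP can represent fine detail."""
    feats = [x]
    for i in range(num_freqs):
        feats += [torch.sin((2.0 ** i) * x), torch.cos((2.0 ** i) * x)]
    return torch.cat(feats, dim=-1)

class TinyNeRF(nn.Module):
    """Minimal NeRF-style field: (3D position, view direction) -> (density, RGB)."""
    def __init__(self, num_freqs=6, hidden=128):
        super().__init__()
        in_dim = 2 * (3 + 2 * 3 * num_freqs)  # encoded position + encoded direction
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),  # density + RGB
        )

    def forward(self, xyz, view_dir):
        h = torch.cat([positional_encoding(xyz), positional_encoding(view_dir)], dim=-1)
        out = self.mlp(h)
        density = torch.relu(out[..., :1])    # non-negative volume density
        rgb = torch.sigmoid(out[..., 1:])     # colors in [0, 1]
        return density, rgb

model = TinyNeRF()
density, rgb = model(torch.rand(1024, 3), torch.rand(1024, 3))
print(density.shape, rgb.shape)  # torch.Size([1024, 1]) torch.Size([1024, 3])

Nvidia's speedup comes from engineering built on top of this basic recipe, which the article does not describe.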

Quantum computing startups are all the rage, but it’s unclear if they’ll be able to produce anything of use in the near future.


As a buzzword, quantum computing probably ranks only below AI in terms of hype. Large tech companies such as Alphabet, Amazon, and Microsoft now have substantial research and development efforts in quantum computing. A host of startups have sprung up as well, some boasting staggering valuations. IonQ, for example, was valued at $2 billion when it went public in October through a special-purpose acquisition company. Much of this commercial activity has happened with baffling speed over the past three years.

I am as pro-quantum-computing as one can be: I’ve published more than 100 technical papers on the subject, and many of my PhD students and postdoctoral fellows are now well-known quantum computing practitioners all over the world. But I’m disturbed by some of the quantum computing hype I see these days, particularly when it comes to claims about how it will be commercialized.