
Siemens Partners with Roboze to Automate 3D Printing

Siemens and Roboze have announced a collaboration to develop workflows dedicated to the industrialization of 3D printing, with an emphasis on expanding the technology’s use in energy, mobility, and aerospace. Though the exact terms of the agreement have not been disclosed, it marks a significant shift for both firms.

Siemens is the largest industrial manufacturer in Europe, with a storied history spanning nearly two centuries and annual revenues totaling €62.3 billion, as of 2021. In contrast, Roboze is a comparatively new firm, established in Italy in 2013. The company has since built itself up into a leader in industrial-grade material extrusion 3D printers, earning such customers as Ducati, GE, and the U.S. Army.

The partners have not spelled out their plans beyond saying that they will work together to “increase the productivity, competitiveness and efficiency of manufacturers that have embarked on the path to the future of industry.” They do mention a focus on “digitalization and automation projects.”

Rewards in Reinforcement Learning Make Machines Behave Like Humans

Reward maximisation is one strategy by which reinforcement learning could achieve artificial general intelligence. However, deep reinforcement learning algorithms should not depend on reward maximisation alone.
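As a concrete illustration of reward maximisation, here is a minimal tabular Q-learning sketch. The toy corridor environment and all parameter choices are my own for illustration, not from the article: the agent learns purely by bootstrapping action values toward observed rewards.

```python
import random

# Toy 1-D corridor: states 0..4, reward only at the right end (illustrative).
N_STATES, ACTIONS = 5, [-1, +1]  # move left / move right
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    nxt = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward

for episode in range(200):
    s = 0
    while s != N_STATES - 1:
        # Epsilon-greedy: mostly exploit the current value estimates.
        if random.random() < EPS:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2, r = step(s, a)
        # Reward-maximising update: move Q(s, a) toward r + gamma * max_a' Q(s', a').
        Q[(s, a)] += ALPHA * (r + GAMMA * max(Q[(s2, a2)] for a2 in ACTIONS) - Q[(s, a)])
        s = s2

# After training, the greedy policy moves right in every non-terminal state.
policy = {s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)}
print(policy)
```

The point of the sketch is that nothing in the update rule mentions the task itself; behavior emerges from chasing reward alone, which is exactly the strategy the article says should not be relied on in isolation.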



Insilico identifies therapeutic targets implicated in aging using AI and hallmarks of aging framework

AI is all that matters now, and reaching AGI before 2030 is all that matters for this decade.


A substantial percentage of human clinical trials, including those evaluating investigational anti-aging drugs, fail in Phase II, the phase in which a drug’s efficacy is tested. This poor success rate is due in part to inadequate target choice and to the inability to identify the group of patients most likely to respond to specific agents. The challenge is further complicated by differences in the biological age of patients, as the importance of therapeutic targets varies between age groups. Unfortunately, most targets are discovered without considering patients’ age and are tested in a relatively young population (the average age in Phase I is 24). Hence, identifying potential targets that are implicated in multiple age-associated diseases, and that also play a role in the basic biology of aging, may have substantial benefits.

Identifying dual-purpose targets that are implicated in aging and disease at the same time will extend healthspan and delay age-related health issues – even if the target is not the most important in a specific patient, the drug would still benefit that patient.

“When it comes to target identification in chronic diseases, it is important to prioritize the targets that are implicated in age-associated diseases, implicated in more than one hallmark of aging, and safe,” said Zhavoronkov. “So that in addition to treating a disease, the drug would also treat aging – it is an off-target bonus.”
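The prioritization criteria Zhavoronkov describes – implication in multiple age-associated diseases, coverage of multiple hallmarks of aging, and safety – could be sketched as a simple ranking. The target records, field names, and weighting below are entirely illustrative assumptions, not Insilico’s actual data or method:

```python
# Illustrative target records (all values made up for the sketch):
# how many age-associated diseases a target is implicated in, how many
# hallmarks of aging it touches, and a 0-1 safety estimate.
targets = [
    {"name": "TARGET_A", "diseases": 4, "hallmarks": 3, "safety": 0.9},
    {"name": "TARGET_B", "diseases": 6, "hallmarks": 1, "safety": 0.6},
    {"name": "TARGET_C", "diseases": 2, "hallmarks": 5, "safety": 0.8},
]

def dual_purpose_score(t):
    # Weight disease breadth and hallmark coverage equally, gated by safety.
    return (t["diseases"] + t["hallmarks"]) * t["safety"]

ranked = sorted(targets, key=dual_purpose_score, reverse=True)
print([t["name"] for t in ranked])
```

Under this toy scoring, a moderately broad but safe target outranks a broader, less safe one – which mirrors the quote’s emphasis on safety as a gating criterion rather than an afterthought.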

Fruit packing robot dramatically cuts packhouse labor needs

Now, an intelligent robotic fruit packing machine is able to automate the most labor-intensive job in the packhouse. For post-harvest operators, this robot could prove to be a game-changer in the industry.

The Aporo II robotic produce packaging machine by Global Pac Technologies builds on the proven technology of the original Aporo I Produce Packer, first developed in 2018. The latest model can accommodate twice the throughput, packing 240 fruit per minute and saving between two and four labor units per double packing belt.

According to Cameron McInness, Director of Jenkins Group, a New Zealand-based company which co-founded Global Pac Technologies with US-based Van Doren Sales Inc, Aporo II can be retrofitted across two packing belts instead of one, effectively doubling the throughput and the labor-saving that Aporo I could deliver.
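Taking the stated figures at face value – 240 fruit per minute across two belts, double the original single-belt throughput – the arithmetic works out as follows (a back-of-envelope sketch, not the manufacturer’s spec sheet):

```python
# Back-of-envelope check of the stated throughput figures (illustrative only).
aporo_ii_rate = 240                    # fruit per minute, across two packing belts
belts = 2
per_belt = aporo_ii_rate // belts      # implied per-belt rate, matching Aporo I
hourly = aporo_ii_rate * 60            # fruit packed per hour at full rate
print(per_belt, hourly)
```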

Will there be Human-Level Artificial Intelligence by 2030?

Artificial Intelligence has been improving rapidly these past few years, and according to top AI scientists such as Yann LeCun, it’s now becoming obvious that AI in 2030 will be almost unrecognizable compared to today’s systems. AI will be part of everyday life in the form of digital assistants and more. I’ll show you in this future predictions video what other awesome abilities the best AI of the future will have.

TIMESTAMPS:
00:00 The first Human-Level AI
01:34 What are SSL World Models?
02:26 What is Self Supervised Learning?
05:24 Learning like Humans.
09:14 The Implications of Human AI
10:57 Last Words.

#ai #agi #technology

New computational model proposed for Alzheimer’s disease

Mayo Clinic researchers have proposed a new model for mapping the symptoms of Alzheimer’s disease to brain anatomy. This model was developed by applying machine learning to patient brain imaging data. It uses the entire function of the brain rather than specific brain regions or networks to explain the relationship between brain anatomy and mental processing. The findings are reported in Nature Communications.
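The article does not specify the method, but mapping whole-brain imaging patterns (rather than single regions) to mental processing is commonly sketched as a latent-pattern decomposition followed by regression. Everything below – the synthetic data, dimensions, and two-step pipeline – is an assumed illustration of that general idea, not the Mayo Clinic model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for patient imaging data (illustrative, not Mayo's dataset):
# 100 patients x 500 voxel-level features, plus one cognitive score per patient.
n_patients, n_voxels = 100, 500
latent = rng.normal(size=(n_patients, 3))          # hidden whole-brain patterns
loadings = rng.normal(size=(3, n_voxels))
imaging = latent @ loadings + 0.1 * rng.normal(size=(n_patients, n_voxels))
cognition = latent @ np.array([1.5, -0.7, 0.3]) + 0.1 * rng.normal(size=n_patients)

# Step 1: decompose whole-brain data into a few global patterns (PCA via SVD),
# instead of picking out individual regions or networks.
X = imaging - imaging.mean(axis=0)
U, S, Vt = np.linalg.svd(X, full_matrices=False)
scores = U[:, :3] * S[:3]          # each patient's expression of the 3 patterns

# Step 2: regress cognition on pattern expression via least squares.
design = np.column_stack([scores, np.ones(n_patients)])
coef, *_ = np.linalg.lstsq(design, cognition, rcond=None)
pred = design @ coef
r2 = 1 - np.sum((cognition - pred) ** 2) / np.sum((cognition - cognition.mean()) ** 2)
print(f"variance in cognition explained by whole-brain patterns: {r2:.2f}")
```

In this toy setup the global patterns explain nearly all the variance in the cognitive score, which is the spirit of the claim that whole-brain function, not isolated regions, links anatomy to mental processing.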

“This new model can advance our understanding of how the brain works and breaks down during aging and Alzheimer’s disease, providing new ways to monitor, prevent and treat disorders of the mind,” says David T. Jones, M.D., a Mayo Clinic neurologist and lead author of the study.

Alzheimer’s disease typically has been described as a protein-processing problem. The toxic proteins amyloid and tau deposit in areas of the brain, causing neuron failure that results in clinical symptoms such as difficulty communicating and confusion.

Nvidia Instant NeRF: A Tool that Turns 2D Snapshots into a 3D-Rendered Scene

Nvidia’s AI model is pretty impressive: a tool that quickly turns a collection of 2D snapshots into a 3D-rendered scene. The tool is called Instant NeRF, referring to “neural radiance fields”.

Known as inverse rendering, the process uses AI to approximate how light behaves in the real world, enabling researchers to reconstruct a 3D scene from a handful of 2D images taken at different angles. The Nvidia research team has developed an approach that accomplishes this task almost instantly, making it one of the first models of its kind to combine ultra-fast neural network training and rapid rendering.

NeRFs use neural networks to represent and render realistic 3D scenes based on an input collection of 2D images.
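The core idea – a neural network that maps a 3D position and viewing direction to a color and a density, then composites samples along each camera ray into a pixel – can be sketched as a tiny MLP. This is a toy illustration of the general NeRF formulation with untrained random weights, not Nvidia’s Instant NeRF implementation (which adds multiresolution hash encodings and heavy optimization):

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy NeRF-style field: (x, y, z position, 3-D view direction) -> (RGB, density).
# Weights are random here; a real NeRF trains them from posed 2D photos.
W1 = rng.normal(scale=0.5, size=(6, 64))
b1 = np.zeros(64)
W2 = rng.normal(scale=0.5, size=(64, 4))   # outputs: 3 color channels + 1 density
b2 = np.zeros(4)

def radiance_field(position, direction):
    x = np.concatenate([position, direction])
    h = np.maximum(x @ W1 + b1, 0.0)          # ReLU hidden layer
    out = h @ W2 + b2
    color = 1.0 / (1.0 + np.exp(-out[:3]))    # sigmoid -> RGB in [0, 1]
    density = np.log1p(np.exp(out[3]))        # softplus -> non-negative density
    return color, density

def render_ray(origin, direction, n_samples=32, near=0.0, far=4.0):
    """Simplified volume rendering: composite samples along one camera ray."""
    ts = np.linspace(near, far, n_samples)
    dt = ts[1] - ts[0]
    color_acc, transmittance = np.zeros(3), 1.0
    for t in ts:
        c, sigma = radiance_field(origin + t * direction, direction)
        alpha = 1.0 - np.exp(-sigma * dt)     # opacity of this ray segment
        color_acc += transmittance * alpha * c
        transmittance *= 1.0 - alpha
    return color_acc

pixel = render_ray(np.zeros(3), np.array([0.0, 0.0, 1.0]))
print(pixel)
```

Training would compare pixels rendered this way against the input photographs and backpropagate through the network; the “instant” part of Instant NeRF comes from making both that training loop and this rendering step extremely fast.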
