
How AI Is Transforming The Pharmaceutical Industry

AI-powered precision in medicine is helping to enhance the accuracy, efficiency, and personalization of medical treatments and healthcare interventions. Machine learning models analyze vast datasets, including genetic information, disease pathways, and past clinical outcomes, to predict how drugs will interact with biological targets. This not only speeds up the identification of promising compounds but also helps eliminate ineffective or potentially harmful options early in the research process.
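
To make the idea concrete, here is a minimal sketch in Python (using scikit-learn) of the kind of early screening model described above: a classifier trained on featurized compound-target pairs that flags likely binders so unpromising candidates can be set aside before expensive lab work. The features, data, and threshold below are synthetic placeholders, not any particular company's pipeline.

```python
# Minimal sketch of an early-stage screening model.
# The feature set and data are entirely synthetic placeholders; real pipelines
# would use curated assay results and learned molecular/protein representations.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Each row stands in for one compound-target pair: e.g. fingerprint bits,
# physicochemical descriptors, and target pathway features concatenated.
X = rng.normal(size=(5000, 64))
y = (X[:, :8].sum(axis=1) + rng.normal(scale=0.5, size=5000) > 0).astype(int)  # 1 = binds

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = GradientBoostingClassifier().fit(X_train, y_train)
scores = model.predict_proba(X_test)[:, 1]
print("AUC:", round(roc_auc_score(y_test, scores), 3))

# Compounds with very low predicted binding probability can be deprioritized,
# which is the "eliminate ineffective options early" step described in the text.
threshold = 0.2
kept = (scores >= threshold).sum()
print(f"{kept}/{len(scores)} candidates kept for follow-up at threshold {threshold}")
```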

Researchers are also turning to AI to improve how they evaluate a drug’s effectiveness across diverse patient populations. By analyzing real-world data, including electronic health records and biomarker responses, AI can help researchers identify patterns that predict how different groups may respond to a treatment. This level of precision helps refine dosing strategies, minimize side effects, and support the development of personalized medicine where treatments are tailored to an individual’s genetic and biological profile.
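
As a hedged illustration of the subgroup analysis described here, the biomarker, doses, and responses below are simulated, but the grouping step shows the kind of pattern (response rate by biomarker status and dose) that such models are meant to surface when refining dosing strategies.

```python
# Toy illustration of subgroup analysis on outcomes data.
# The biomarker, dose levels, and response column are hypothetical; real analyses
# would draw on de-identified EHR extracts and appropriate statistical methods.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 2000
df = pd.DataFrame({
    "biomarker_positive": rng.integers(0, 2, n).astype(bool),
    "dose_mg": rng.choice([25, 50, 100], n),
})
# Simulate a stronger response in biomarker-positive patients at higher doses.
p = 0.2 + 0.3 * df["biomarker_positive"] * (df["dose_mg"] >= 50)
df["responded"] = rng.random(n) < p

# Response rate by subgroup and dose: the kind of pattern an AI or statistical
# model would surface to guide dose selection for each patient group.
summary = df.groupby(["biomarker_positive", "dose_mg"])["responded"].mean().round(2)
print(summary)
```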

AI is having a positive impact on the pharmaceutical industry, helping to reshape how drugs are discovered, tested, and brought to market. From accelerating drug development and optimizing research to enhancing clinical trials and manufacturing, AI is reducing costs, improving efficiency, and ultimately delivering better treatments to patients.

UBTech breakthrough sees humanoid robots work as a team in car factory

A Shenzhen-based humanoid robot maker said it has deployed “dozens of robots” in an electric vehicle (EV) factory where they work together on complicated tasks, offering a peek into the future of Made-in-China tech as artificial intelligence (AI) and robotics technologies are applied to empower manufacturing.

Hong Kong-listed UBTech Robotics said on Monday that it has completed a test to deploy dozens of its Walker S1 robots in the Zeekr EV factory in the Chinese port city of Ningbo for “multitask” and “multi-site” operations.

According to photos and videos provided by UBTech, the human-shaped robots work as a team to complete tasks such as lifting heavy boxes and handling soft materials.

A Self-Balancing Exoskeleton Strides Toward Market

Many people who have spinal cord injuries also have dramatic tales of disaster: a diving accident, a car crash, a construction site catastrophe. But Chloë Angus has quite a different story. She was home one evening in 2015 when her right foot started tingling and gradually lost sensation. She managed to drive herself to the hospital, but over the course of the next few days she lost all sensation and control of both legs. The doctors found a benign tumor inside her spinal cord that couldn’t be removed, and told her she’d never walk again. But Angus, a jet-setting fashion designer, isn’t the type to take such news lying—or sitting—down.

Ten years later, at the CES tech trade show in January, Angus was showing off her dancing moves in a powered exoskeleton from the Canadian company Human in Motion Robotics. “Getting back to walking is pretty cool after spinal cord injury, but getting back to dancing is a game changer,” she told a crowd on the expo floor.

Meta Unveils Aria Gen 2: Discover the Future with Next-Gen AR and AI Glasses!

Meta has unveiled the next iteration of its sensor-packed research eyewear, the Aria Gen 2. This latest model follows the initial version introduced in 2020. The original glasses came equipped with a variety of sensors but lacked a display, and were not designed as either a prototype or a consumer product. Instead, they were exclusively meant for research to explore the types of data that future augmented reality (AR) glasses would need to gather from their surroundings to provide valuable functionality.

In their Project Aria initiative, Meta explored collecting egocentric data—information from the viewpoint of the user—to help train artificial intelligence systems. These systems could eventually comprehend the user’s environment and offer contextually appropriate support in daily activities. Notably, like its predecessor, the newly announced Aria Gen 2 does not feature a display.

Meta has highlighted several advancements in Aria Gen 2 compared to the first generation:

SoftBank Group In Talks To Borrow $16 Billion To Fund Artificial Intelligence

SoftBank Group chief executive officer Masayoshi Son plans to borrow $16 billion to invest in artificial intelligence (AI), the company’s executives told banks last week, the tech news website The Information reported on Saturday, citing people familiar with the matter.

The Japanese technology investor might borrow another $8 billion early next year, the report added. It was reported in January that SoftBank is in talks to invest up to $25 billion in ChatGPT owner OpenAI, as the Japanese conglomerate continues to expand into the sector.

SoftBank’s investment would be on top of the $15 billion it has already committed to Stargate, a private-sector investment of up to $500 billion in AI infrastructure funded by SoftBank, OpenAI and Oracle Corp that is intended to help the US stay ahead of China and other rivals in the global AI race.

The Information, a tech industry-focused publication headquartered in San Francisco, previously reported that SoftBank was planning to invest a total of $40 billion in Stargate and OpenAI, and had begun talks to borrow up to $18.5 billion in financing backed by its publicly listed assets.

Separately, Arm Holdings PLC is set to sign a pact next week to establish a base in Malaysia, the Malaysian news agency Bernama reported on Friday, citing Malaysian Prime Minister Anwar Ibrahim. Anwar had a discussion with Arm chief executive officer Rene Haas on Friday, he told reporters in Putrajaya, Malaysia. Son also took part in the meeting, he said.


Brain creates ‘summaries’ while reading, unlike AI models that process full texts

Unlike artificial language models, which process long texts as a whole, the human brain creates a “summary” while reading, helping it understand what comes next.

In recent years, large language models (LLMs) like ChatGPT and Bard have revolutionized AI-driven text processing, enabling machines to generate text, translate languages, and analyze sentiment. These models are inspired by the human brain, but key differences remain.

A new Technion-Israel Institute of Technology study, published in Nature Communications, explores these differences by examining how the brain processes long spoken texts. The research was led by Prof. Roi Reichart and Dr. Refael Tikochinski from the Faculty of Data and Decision Sciences, and was conducted as part of Dr. Tikochinski’s Ph.D., co-supervised by Prof. Reichart at the Technion and Prof. Uri Hasson at Princeton University.
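
The contrast can be sketched in a few lines of Python. The `summarize` function below is a hypothetical stand-in for whatever compression the brain performs; the point is only that one approach accumulates the full text while the other carries a small, continuously updated summary forward.

```python
# Toy contrast between the two processing styles described above.
# `summarize` is an illustrative assumption: it simply keeps the most recent
# sentences, standing in for whatever compression the brain might perform.
from collections import deque

def summarize(sentences, max_items=3):
    """Hypothetical compression step: retain only a short running summary."""
    return list(deque(sentences, maxlen=max_items))

text = [f"Sentence {i} of a long spoken story." for i in range(1, 11)]

# LLM-style processing: every new sentence is interpreted against the full text.
full_context = []
for sentence in text:
    full_context.append(sentence)
    context_for_prediction = full_context          # grows without bound

# Brain-style processing (per the study's framing): each new sentence is
# interpreted against a compact, continuously updated summary.
summary = []
for sentence in text:
    summary = summarize(summary + [sentence])      # stays small
    context_for_prediction = summary

print("full-context size:", len(full_context))     # 10
print("rolling-summary size:", len(summary))       # 3
```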

Researchers explore how to build shapeshifting, T-1000-style robots

Researchers have developed small robots that can work together as a collective that changes shape and even shifts between solid and “fluid-like” states — a concept that should be familiar to anyone still haunted by nightmares of the T-1000 robotic assassin from “Terminator 2.”

A team led by Matthew Devlin of UC Santa Barbara described this work in a paper recently published in Science, writing that the vision of “cohesive collectives of robotic units that can arrange into virtually any form with any physical properties … has long intrigued both science and fiction.”

Otger Campàs, a professor at the Max Planck Institute of Molecular Cell Biology and Genetics, told Ars Technica that the team was inspired by tissues in embryos to try to design robots with similar capabilities. These robots have motorized gears that allow them to move around within the collective, magnets so they can stay attached, and photodetectors that allow them to receive instructions from a flashlight with a polarization filter.
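
As a rough illustration, and not the authors' actual control scheme, the toy simulation below shows how a global light command could toggle a collective of units between holding a rigid configuration and actively rearranging in a fluid-like way. The class and function names are invented for this sketch.

```python
# Toy sketch of the solid vs "fluid-like" collective behaviour described above.
# This is not the published control scheme; it only illustrates the idea that a
# global light signal could switch units between holding position and rearranging.
import random

class Unit:
    def __init__(self, x, y):
        self.x, self.y = x, y

    def step(self, fluid_mode: bool):
        # In "solid" mode units hold position; in "fluid" mode each unit shifts
        # to a neighbouring spot, letting the collective flow and reshape.
        if fluid_mode:
            self.x += random.choice([-1, 0, 1])
            self.y += random.choice([-1, 0, 1])

def polarized_light_command(t: int) -> bool:
    """Stand-in for the photodetector reading: True means 'flow', False means 'hold'."""
    return t >= 5  # switch the collective to fluid mode halfway through

collective = [Unit(x, 0) for x in range(10)]
for t in range(10):
    fluid = polarized_light_command(t)
    for unit in collective:
        unit.step(fluid)

print([(u.x, u.y) for u in collective])
```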

The 1,527 Horsepower Xiaomi SU7 Might Be The Real Deal On Track

Well, now the Ultra has officially been released. A handful of Chinese media drivers have finally gotten behind the wheel for a review, both in everyday on-road driving and when hammering it in more aggressive circumstances. Haoran Zhou, a former car PR person and F1 reporter, did a lead-follow of the SU7 Ultra on track.

I have to note that this is technically a step down from the full-race-ready track-prepped version that Xiaomi sent around the Nürburgring. The two cars still have the same 1,526 horsepower, but the lap-setting version has essentially a full carbon-fiber body, complete with huge brake ducts right into the side of the car. This version uses mostly the body of the standard SU7, although it does have a new aluminum hood.

Because of this, the SU7 Ultra is still as fully featured as the standard car. Zhou spent half of the video using Xiaomi’s driver assistance features. They appear to work as well as in the standard SU7, but Zhou did remark that it was a little surreal to have a 1,500-horsepower car do some sort of autonomous driving. “I’m trying my best to find a positive use case for it,” he said, theorizing that these features would save wear and tear on the vehicle itself between track-day sessions. “No normal human being would be driving like this in an SU7 Ultra,” he said.
