
AI Could Predict Breast Cancer Risk via ‘Zombie Cells’

New research from the University of Copenhagen shows that AI technology can detect damaged cells more reliably and predict breast cancer risk more precisely, which could mean better treatment for women worldwide.

Breast cancer is one of the most common types of cancer. In 2022, the disease caused 670,000 deaths worldwide. Now, a new study from the University of Copenhagen shows that AI can improve treatment for women by scanning for irregular-looking cells to provide better risk assessment.

The study, published in The Lancet Digital Health, found that the AI technology was far better at predicting the risk of cancer than current clinical benchmarks for breast cancer risk assessment.

DeepSeek AI Releases Janus: A 1.3B Multimodal Model with Image Generation Capabilities

Multimodal AI models are powerful tools capable of both understanding and generating visual content. However, existing approaches often use a single visual encoder for both tasks, which leads to suboptimal performance due to the fundamentally different requirements of understanding and generation. Understanding requires high-level semantic abstraction, while generation focuses on local details and global consistency. This mismatch results in conflicts that limit the overall efficiency and accuracy of the model.

Researchers from DeepSeek-AI, the University of Hong Kong, and Peking University propose Janus, a novel autoregressive framework that unifies multimodal understanding and generation by employing two distinct visual encoding pathways. Unlike prior models that use a single encoder, Janus introduces a specialized pathway for each task, both of which are processed through a unified transformer. This unique design alleviates conflicts inherent in prior models and provides enhanced flexibility, enabling different encoding methods that best suit each modality. The name “Janus” aptly represents this duality, much like the Roman god, with two faces representing transitions and coexistence.

The architecture of Janus consists of two main components: an Understanding Encoder and a Generation Encoder, each tasked with handling multimodal inputs differently. For multimodal understanding, Janus uses a high-dimensional semantic feature extraction approach through SigLIP, transforming the features into a sequence compatible with the language model. For visual generation, Janus utilizes a VQ tokenizer that converts visual data into discrete representations, enabling detailed image synthesis. Both tasks are processed by a shared transformer, enabling the model to operate in an autoregressive fashion. This approach allows the model to decouple the requirements of each visual task, simplifying implementation and improving scalability.
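The dual-pathway design described above can be sketched in a few lines of Python. Everything here is illustrative, not DeepSeek's actual code: the class names, toy dimensions, and placeholder math are assumptions standing in for SigLIP, the VQ tokenizer, and the shared autoregressive transformer.

```python
import numpy as np

D_MODEL = 8  # toy hidden size for illustration

class UnderstandingEncoder:
    """Stands in for SigLIP: maps an image to high-level semantic features."""
    def encode(self, image: np.ndarray) -> np.ndarray:
        # Pool the image into a short sequence of semantic feature vectors.
        pooled = image.reshape(4, -1).mean(axis=1, keepdims=True)
        return np.tile(pooled, (1, D_MODEL))  # (4, D_MODEL) sequence

class GenerationEncoder:
    """Stands in for the VQ tokenizer: maps an image to discrete codes."""
    def __init__(self, codebook_size: int = 16):
        self.codebook = np.random.RandomState(0).randn(codebook_size, D_MODEL)
    def encode(self, image: np.ndarray) -> np.ndarray:
        patches = image.reshape(4, -1).mean(axis=1)  # crude patch features
        # Quantize each patch to its nearest codebook entry (discrete ids),
        # then embed the ids back into the model dimension.
        ids = np.abs(patches[:, None] - self.codebook[:, 0][None, :]).argmin(axis=1)
        return self.codebook[ids]  # (4, D_MODEL)

class SharedTransformer:
    """A single autoregressive backbone consumes either pathway's sequence."""
    def forward(self, seq: np.ndarray) -> np.ndarray:
        # Placeholder for causal self-attention: each position only sees
        # earlier positions (here, a running mean).
        return np.cumsum(seq, axis=0) / np.arange(1, len(seq) + 1)[:, None]

image = np.random.RandomState(1).rand(8, 8)
backbone = SharedTransformer()
understanding_out = backbone.forward(UnderstandingEncoder().encode(image))
generation_out = backbone.forward(GenerationEncoder().encode(image))
print(understanding_out.shape, generation_out.shape)  # both (4, 8)
```

The point of the sketch is the topology: the two encoders never share weights, so the semantic-abstraction and detail-preservation objectives stop competing, yet both sequences flow through the same transformer.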

Top “Reasoning” AI Models Can be Brought to Their Knees With an Extremely Simple Trick

A team of Apple researchers has found that advanced AI models’ alleged ability to “reason” isn’t all it’s cracked up to be.

“Reasoning” is a word that’s thrown around a lot in the AI industry these days, especially when it comes to marketing the advancements of frontier AI language models. OpenAI, for example, recently dropped its “Strawberry” model, which the company billed as its next-level large language model (LLM) capable of advanced reasoning. (That model has since been renamed just “o1.”)

But marketing aside, there’s no agreed-upon industrywide definition for what reasoning exactly means. Like other AI industry terms, for example, “consciousness” or “intelligence,” reasoning is a slippery, ephemeral concept; as it stands, AI reasoning can be chalked up to an LLM’s ability to “think” its way through queries and complex problems in a way that resembles human problem-solving patterns.

TSMC Hikes Revenue Outlook in Show of Confidence in AI Boom

The world’s largest maker of advanced chips has been one of the biggest beneficiaries of a global race to develop artificial intelligence.


Taiwan Semiconductor Manufacturing Co. shares hit a record high after the chipmaker topped quarterly estimates and raised its target for 2024 revenue growth, allaying concerns about global chip demand and the sustainability of an AI hardware boom.

The Future of Lunar Resource Extraction: Teleoperation and Simulation

“One option could be to have astronauts use this simulation to prepare for upcoming lunar exploration missions,” said Joe Louca.


How will future missions to the Moon extract valuable resources for scientific research or lunar settlement infrastructure? A recent study presented this week at IROS 2024 (the IEEE/RSJ International Conference on Intelligent Robots and Systems) addresses this question: a team of researchers from the University of Bristol investigated how combining virtual simulations with robotic commands could enhance teleoperated robotic exploration of the lunar surface on future missions.

For the study, the researchers used a method called model-mediated teleoperation (MMT) to create simulated regolith and send commands to a robot that carried out the task. In the end, the researchers found that the effectiveness and trustworthiness of the simulated regolith, compared with the robot conducting the tasks, were 100 percent and 92.5 percent, respectively. Teleoperated robots are essential because of the communication time lag between the Earth and the Moon, and extracting resources from the lunar surface, known as in-situ resource utilization (ISRU), is considered an essential task in developing lunar infrastructure for future astronauts.
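The core idea of model-mediated teleoperation can be sketched as a control loop: the operator interacts with a fast local simulation that responds instantly, while the same commands reach the real robot only after a communication delay. The sketch below is a toy illustration under assumed numbers; the class names, the one-dimensional "position" state, and the three-tick delay are all assumptions, not the Bristol team's implementation.

```python
from collections import deque

DELAY_STEPS = 3  # assumed round-trip delay, measured in control ticks

class LocalModel:
    """Local simulation of the remote environment: responds instantly."""
    def __init__(self):
        self.position = 0.0
    def apply(self, command: float) -> float:
        self.position += command
        return self.position  # immediate feedback shown to the operator

class RemoteRobot:
    """Remote robot: receives each command DELAY_STEPS ticks late."""
    def __init__(self):
        self.position = 0.0
        self.pipeline = deque([None] * DELAY_STEPS)  # in-flight commands
    def step(self, command: float) -> float:
        self.pipeline.append(command)
        delayed = self.pipeline.popleft()
        if delayed is not None:
            self.position += delayed
        return self.position

model, robot = LocalModel(), RemoteRobot()
for cmd in [0.5, 0.5, -0.2, 0.1, 0.0, 0.0, 0.0]:
    predicted = model.apply(cmd)  # operator sees this immediately
    actual = robot.step(cmd)      # real robot lags behind the model
# Once the pipeline drains, the robot catches up to the model's prediction.
print(round(model.position, 6), round(robot.position, 6))
```

The operator effectively works against the simulation, which is why the fidelity of the simulated regolith matters: if the local model's predictions diverge from what the real robot does, the delayed reality no longer matches what the operator saw.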

AI misinformation detectors can’t save us from tyranny—at least not yet

AI-powered misinformation detectors—artificial intelligence tools that identify false or inaccurate online content—have emerged as a potential intervention for helping internet users understand the veracity of the content they view. However, the algorithms used to create these detectors are experimental and largely untested at the scale necessary to be effective on a social media platform.

True Anomaly Taps Firefly Aerospace to Launch Jackal Autonomous Orbital Vehicle for U.S. Space Force VICTUS HAZE Tactically Responsive Space Mission

Ready to set another industry record!


New multi-launch agreement between Firefly Aerospace and True Anomaly includes three Alpha missions to provide rapid launch capabilities for Tactically Responsive Space mission sets

Cedar Park, Texas, October 17, 2024: Firefly Aerospace, Inc., an end-to-end space transportation company, and space defense technology company True Anomaly, Inc., today announced a multi-launch agreement for three responsive launch missions aboard Firefly’s Alpha rocket. The first mission will deploy the True Anomaly Jackal Autonomous Orbital Vehicle (AOV) for the U.S. Space Force Space Systems Command’s VICTUS HAZE Tactically Responsive Space (TacRS) mission targeted for 2025. The two additional missions are available for execution between 2025 and 2027.

“VICTUS HAZE is an exemplar for how strong partnerships between the U.S. government and an exceptional industry team can create asymmetric capabilities at record speeds,” said Even Rogers, CEO of True Anomaly. “Firefly Aerospace has consistently demonstrated innovation and agility in the rapidly evolving landscape of responsive space launch logistics and space vehicle deployment. We are confident that they will build on their track record from VICTUS NOX, enabling True Anomaly to deploy the Jackal Autonomous Orbital Vehicle for VICTUS HAZE. The procurement of additional rapid, responsive launch capacity from Firefly beyond VICTUS HAZE, paired with True Anomaly’s rapid manufacturing capability, will enable standing capacity for the U.S. National Security Space enterprise to rapidly respond to mission requirements in Low Earth Orbit and Medium Earth Orbit.”

Sotheby’s to auction painting by humanoid robot in a futuristic first— and it’s expected to fetch up to $180K

She’s a real Vincent van Go-bot.

In a first for Sotheby’s, the famed auction house will sell a painting made by a humanoid robot — and it’s expected to fetch up to a whopping $180,000.

The robot, known as Ai-Da, created “AI God,” a painting of renowned mathematician and computer scientist Alan Turing, with its own hydraulically powered hands.
