An Oxford University professor posits the emergence of ‘a new species’ stemming from algorithms.

Researchers at UT Southwestern Medical Center have developed a novel artificial intelligence (AI) model that analyzes the spatial arrangement of cells in tissue samples. This innovative approach, detailed in Nature Communications, has accurately predicted outcomes for cancer patients, marking a significant advancement in utilizing AI for cancer prognosis and personalized treatment strategies.
“Cell spatial organization is like a complex jigsaw puzzle where each cell serves as a unique piece, fitting together meticulously to form a cohesive tissue or organ structure. This research showcases the remarkable ability of AI to grasp these intricate spatial relationships among cells within tissues, extracting subtle information previously beyond human comprehension while predicting patient outcomes,” said study leader Guanghua Xiao, Ph.D., Professor in the Peter O’Donnell Jr. School of Public Health, Biomedical Engineering, and the Lyda Hill Department of Bioinformatics at UT Southwestern. Dr. Xiao is a member of the Harold C. Simmons Comprehensive Cancer Center at UTSW.
Tissue samples are routinely collected from patients and placed on slides for interpretation by pathologists, who analyze them to make diagnoses. However, Dr. Xiao explained, this process is time-consuming, and interpretations can vary among pathologists. In addition, the human brain can miss subtle features present in pathology images that might provide important clues to a patient’s condition.
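The excerpt does not describe the study's exact architecture, but the underlying idea, turning the spatial arrangement of cells into features a model can score, can be sketched. Below is a minimal, self-contained Python illustration: it builds a k-nearest-neighbor graph over cell centroids, averages each cell's neighborhood features (one round of message passing), and pools them into a slide-level risk score. The function names, feature set, and linear readout are hypothetical simplifications for exposition, not the authors' model.

```python
import numpy as np

def knn_adjacency(coords: np.ndarray, k: int = 5) -> np.ndarray:
    """Boolean adjacency matrix linking each cell to its k nearest neighbors."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)                   # exclude self-links
    nbrs = np.argsort(d, axis=1)[:, :k]           # indices of the k closest cells
    adj = np.zeros(d.shape, dtype=bool)
    adj[np.arange(len(coords))[:, None], nbrs] = True
    return adj

def spatial_risk_score(coords, feats, w, k=5):
    """Toy prognosis score: average each cell's neighborhood, then pool.

    feats: (n_cells, n_features) per-cell descriptors (e.g., morphology, type)
    w:     (n_features,) weights of a trained linear readout (hypothetical)
    """
    adj = knn_adjacency(coords, k)
    # One round of message passing: replace each cell by its neighborhood mean
    nbr_mean = (adj.astype(float) @ feats) / adj.sum(axis=1, keepdims=True)
    slide_embedding = nbr_mean.mean(axis=0)       # pool cells -> slide vector
    return 1.0 / (1.0 + np.exp(-slide_embedding @ w))  # logistic risk score

# Synthetic example: 200 cells with 2-D positions and 4 features each
rng = np.random.default_rng(0)
coords = rng.uniform(0, 1000, size=(200, 2))
feats = rng.normal(size=(200, 4))
w = rng.normal(size=4)
print(f"predicted risk: {spatial_risk_score(coords, feats, w):.3f}")
```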
Embracing Machine Learning in Mathematics
A conference explores the burgeoning connections between the two fields.
Traditionally, mathematicians jot down their formulas using paper and pencil, seeking out what they call pure and elegant solutions. In the 1970s, they hesitantly began turning to computers to assist with some of their problems. Decades later, computers are often used to crack the hardest math puzzles. Now, in a similar vein, some mathematicians are turning to machine learning tools to aid in their numerical pursuits.
Summary: Researchers created a revolutionary system that can non-invasively convert silent thoughts into text, offering new communication possibilities for people with speech impairments due to illnesses or injuries.
The technology uses a wearable EEG cap to record brain activity and an AI model named DeWave to decode these signals into language. This portable system surpasses previous methods that required invasive surgery or cumbersome MRI scanning, achieving state-of-the-art EEG translation performance.
It shows promise in enhancing human-machine interactions and in aiding those who cannot speak, with potential applications in controlling devices like bionic arms or robots.
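This summary does not spell out DeWave's internals, but the general shape of such a decoder, an EEG encoder whose latents are snapped to a discrete codebook and then decoded into word tokens, can be sketched. A minimal PyTorch illustration follows; the layer sizes, the nearest-code quantization, and all names are assumptions for exposition, not the published model (training details such as straight-through gradients are omitted).

```python
import torch
import torch.nn as nn

class EEGToText(nn.Module):
    """Toy EEG-to-text decoder: encode waves, snap latents to a discrete
    codebook, then emit word-token logits."""

    def __init__(self, n_channels=64, n_codes=512, d_model=128, vocab=10000):
        super().__init__()
        # Temporal conv encoder: (batch, channels, time) -> (batch, d_model, T')
        self.encoder = nn.Sequential(
            nn.Conv1d(n_channels, d_model, kernel_size=9, stride=4),
            nn.GELU(),
            nn.Conv1d(d_model, d_model, kernel_size=9, stride=4),
        )
        self.codebook = nn.Embedding(n_codes, d_model)   # discrete "codex"
        self.decoder = nn.GRU(d_model, d_model, batch_first=True)
        self.lm_head = nn.Linear(d_model, vocab)

    def forward(self, eeg: torch.Tensor) -> torch.Tensor:
        h = self.encoder(eeg).transpose(1, 2)            # (B, T', d_model)
        # Quantization: replace each latent with its nearest codebook entry
        codes_tbl = self.codebook.weight[None].expand(h.size(0), -1, -1)
        dists = torch.cdist(h, codes_tbl)                # (B, T', n_codes)
        codes = self.codebook(dists.argmin(dim=-1))      # (B, T', d_model)
        out, _ = self.decoder(codes)
        return self.lm_head(out)                         # (B, T', vocab) logits

model = EEGToText()
eeg = torch.randn(2, 64, 1024)   # 2 recordings, 64 channels, 1024 samples
logits = model(eeg)
print(logits.shape)              # torch.Size([2, 62, 10000])
```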
It’s easy to get robotic process automation (RPA), machine learning (ML), and artificial intelligence (AI) mixed up, especially when people use the terms interchangeably.
Lean Copilot: LLM-human collaboration to write formal mathematical proofs that are 100% accurate, because every proof is machine-checked by Lean.
Top right: LeanDojo extracts proofs in Lean into datasets for training machine learning models. It also enables the trained model to prove theorems by interacting with Lean’s proof environment.
Top left: The proof tree of the Lean theorem ∀ n ∈ ℕ, gcd n n = n, where gcd is the greatest common divisor. When proving the theorem, we start from the original theorem as the initial state (the root) and repeatedly apply tactics (the edges) to decompose states into simpler sub-states, until all states are solved (the leaf nodes). Tactics may rely on premises such as mod_self and gcd_zero_left defined in a large math library. For example, mod_self is an existing theorem, ∀ n ∈ ℕ, n % n = 0, used in the proof to simplify the goal.
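For concreteness, that proof tree can be written out as an actual tactic proof. Below is a minimal Lean 4 sketch, assuming current core/mathlib names for the premises the caption cites (Nat.mod_self, Nat.gcd_zero_left, plus Nat.gcd_succ for the unfolding step); mathlib already proves this fact as Nat.gcd_self, so the name here is primed to avoid a clash.

```lean
-- The theorem from the proof tree: ∀ n ∈ ℕ, gcd n n = n.
theorem gcd_self' (n : Nat) : Nat.gcd n n = n := by
  cases n with
  | zero => exact Nat.gcd_zero_left 0    -- base case: gcd 0 0 = 0
  | succ k =>
    -- Unfold one step: gcd (k+1) (k+1) = gcd ((k+1) % (k+1)) (k+1)
    rw [Nat.gcd_succ, Nat.mod_self]      -- mod_self: (k+1) % (k+1) = 0
    exact Nat.gcd_zero_left (k + 1)      -- gcd_zero_left closes the goal
```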