
MIT and the Dana-Farber Cancer Institute have teamed up to create an AI model that helps identify the origins of mysterious cancers. Rather than relying on guesswork, the model predicts where tumors come from with up to 95% accuracy. For more insight, visit https://www.channelchek.com

AI systems, such as GPT-4, can now learn and use human language, but they learn from astronomical amounts of language input—much more than children receive when learning how to understand and speak a language. The best AI systems train on text with a word count in the trillions, whereas children receive just millions per year.

Due to this enormous data gap, researchers have been skeptical that recent AI advances can tell us much about human learning and development. An ideal test for demonstrating a connection would involve training an AI model, not on massive data from the web, but on only the input that a single child receives. What would the model be able to learn then?

A team of New York University researchers ran this exact experiment. They trained a multimodal AI system "through the eyes and ears" of a single child, using headcam video recordings collected from when the child was six months old through their second birthday. They then examined whether the AI model could learn the words and concepts present in a child's everyday experience.
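Pairing what a child sees with what they hear suggests a contrastive vision-language objective, in which matched image and utterance embeddings are pulled together and mismatched pairs pushed apart. The sketch below is an illustrative, minimal version of that kind of symmetric contrastive loss using NumPy and random embeddings; it is not the NYU team's actual model or training code, and the batch size, embedding dimension, and temperature are assumptions.

```python
import numpy as np

def contrastive_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric InfoNCE-style loss over a batch of paired embeddings.

    Row i of img_emb is assumed to be paired with row i of txt_emb,
    so the matched similarities sit on the diagonal of the logit matrix.
    """
    # L2-normalize so dot products become cosine similarities
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature
    labels = np.arange(len(logits))  # matched pairs on the diagonal

    def xent(l):
        # cross-entropy of each row's softmax against the diagonal label
        l = l - l.max(axis=1, keepdims=True)  # numerical stability
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_probs[labels, labels].mean()

    # average the image-to-text and text-to-image directions
    return (xent(logits) + xent(logits.T)) / 2

rng = np.random.default_rng(0)
frames = rng.normal(size=(8, 64))  # stand-in "image" embeddings
# identical embeddings for matched pairs -> loss near zero
low = contrastive_loss(frames, frames)
# unrelated random embeddings -> much higher loss
high = contrastive_loss(frames, rng.normal(size=(8, 64)))
```

A model trained to minimize such a loss learns which words tend to co-occur with which visual scenes, which is the sense in which word-concept mappings can emerge from one child's input.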

The amount of digital data available is greater than ever before, including in health care, where doctors’ notes are routinely entered into electronic health record systems. Manually reviewing, analyzing, and sorting all these notes requires a vast amount of time and effort, which is exactly why computer scientists have developed artificial intelligence and machine learning techniques to infer medical conditions, demographic traits, and other key information from this written text.

However, safety concerns limit the deployment of such models in practice. One key challenge is that the medical notes used to train and validate these models may differ greatly across hospitals, providers, and time. As a result, models trained at one hospital may not perform reliably when they’re deployed elsewhere.

Previous seminal work by Johns Hopkins University's Suchi Saria—an associate professor of computer science at the Whiting School of Engineering—and researchers from other top institutions has identified these "dataset shifts" as a major concern for the safety of deployed AI.
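One simple way to see why dataset shift matters for clinical notes is to compare the word distributions of two corpora directly: different hospitals often chart in very different styles, so a model fit to one vocabulary can silently misfire on another. The sketch below is an illustrative check (not any method from the cited work), using total variation distance between unigram frequencies; the toy notes and the abbreviation style are invented for the example.

```python
from collections import Counter

def token_distribution(notes):
    """Unigram frequency distribution over a list of note strings."""
    counts = Counter(tok for note in notes for tok in note.lower().split())
    total = sum(counts.values())
    return {tok: c / total for tok, c in counts.items()}

def total_variation(p, q):
    """Total variation distance between two unigram distributions (0 to 1)."""
    vocab = set(p) | set(q)
    return 0.5 * sum(abs(p.get(t, 0.0) - q.get(t, 0.0)) for t in vocab)

# Toy examples: the same clinical content charted in two styles
hospital_a = ["patient presents with chest pain", "chest pain resolved"]
hospital_b = ["pt c/o cp", "cp resolved after nitro"]  # heavy abbreviations

same = total_variation(token_distribution(hospital_a),
                       token_distribution(hospital_a))
shift = total_variation(token_distribution(hospital_a),
                        token_distribution(hospital_b))
```

Here `same` is 0 while `shift` is close to 1, flagging that a model validated on hospital A's notes should not be assumed to transfer to hospital B without further checks.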

Artificial intelligence is pivotal in advancing biotechnology and medical procedures, ranging from cancer diagnostics to the creation of new antibiotics. However, the ecological footprint of large-scale AI systems is substantial. For instance, training extensive language models like GPT-3 requires several gigawatt-hours of energy—enough to power an average nuclear power plant at full capacity for several hours.
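The comparison above can be checked with back-of-the-envelope arithmetic. The figures below are assumptions chosen to match the article's rough claim, not measured values: "several" gigawatt-hours of training energy and a typical reactor output of about one gigawatt.

```python
# Assumed figures, for illustration only
training_energy_gwh = 3.0   # "several gigawatt-hours" of training energy
plant_output_gw = 1.0       # rough full-capacity output of one nuclear reactor

# Energy (GWh) divided by power (GW) gives the equivalent runtime in hours
hours_of_plant_output = training_energy_gwh / plant_output_gw
```

With these assumptions, the training run corresponds to about three hours of full-capacity plant output, consistent with the article's "several hours."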

Prof. Mario Chemnitz and Dr. Bennet Fischer from Leibniz IPHT in Jena, in collaboration with their international team, have devised an innovative method to develop potentially energy-efficient computing systems that forego the need for extensive electronic infrastructure.

They harness the unique interactions of light waves within optical fibers to build an advanced artificial learning system. Unlike traditional systems that rely on computer chips containing thousands of electronic components, their system uses a single optical fiber.

Professor Dario Floreano is a Swiss-Italian roboticist and engineer engaged in a bold research venture: the creation of edible robots and digestible electronics.

However counterintuitive it may seem, combining food science and robotics could yield enormous benefits. These range from airlifts of food to advanced health monitoring.

For one, ChatGPT reached over a million users within five days of its release, a mark unmatched by even the most historically popular apps, such as Facebook and Spotify. Additionally, ChatGPT has seen near-immediate adoption in business contexts, as organizations seek to gain efficiencies in content creation, code generation, and other functional tasks.

But as businesses rush to take advantage of AI, so too do attackers. One notable way in which they do so is through unethical or malicious LLM apps.

Unfortunately, a recent spate of these malicious apps has introduced risk into organizations' AI journeys. And the associated risk is not easily addressed with a single policy or solution. To unlock the value of AI without opening doors to data loss, security leaders need to rethink how they approach broader visibility and control of corporate applications.

Biomimetic robots, which mimic the movements and biological functions of living organisms, are a fascinating area of research that can not only lead to more efficient robots but also serve as a platform for understanding muscle biology.

Among these, biohybrid actuators, made up of soft materials and muscle cells that can replicate the forces of actual muscles, have the potential to achieve lifelike movements and functions, including self-healing and high power-to-weight ratios, which have been difficult to achieve with traditional bulky robots that require heavy energy sources.

One way to achieve these lifelike movements is to arrange the muscle cells in biohybrid actuators in an anisotropic manner. This involves aligning them in a specific pattern in which they are oriented along particular directions, much like the muscle architecture found in living organisms.