
Unitree is a publicly traded robotics company with a market value of about $5 billion. It has sped up its humanoid robot to a human jogging speed of 3.3 meters per second, about 7.4 miles per hour. Because the robot would not tire, it could cover a 10-kilometer race in roughly 50 minutes, given enough battery power.
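The pace figures above follow from straightforward unit conversion; a quick sanity check (pure arithmetic, not taken from any Unitree specification):

```python
# Sanity-check the quoted jogging-speed and race-time figures.
speed_mps = 3.3                       # reported humanoid speed, meters per second
speed_mph = speed_mps * 2.23694       # meters/second -> miles/hour
race_m = 10_000                       # 10 km race distance in meters
race_minutes = race_m / speed_mps / 60

print(f"{speed_mph:.1f} mph")         # ~7.4 mph
print(f"{race_minutes:.0f} minutes")  # ~51 minutes, i.e. roughly 50
```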

It can lift boxes, climb and descend stairs, and jump vertically.

Unitree offers hand attachments, but these do not yet have articulated fingers or grasping motions.

Researchers have developed a computer “worm” that can spread from one computer to another using generative AI, a warning sign that the tech could be used to develop dangerous malware in the near future — if it hasn’t already.

As Wired reports, the worm can attack AI-powered email assistants to obtain sensitive data from emails and blast out spam messages that infect other systems.

“It basically means that now you have the ability to conduct or to perform a new kind of cyberattack that hasn’t been seen before,” Cornell Tech researcher Ben Nassi, coauthor of a yet-to-be-peer-reviewed paper about the work, told Wired.

MIT and the Dana-Farber Cancer Institute have teamed up to create an AI model that predicts the origins of mysterious cancers. Rather than relying on guesswork, the model predicts where tumors of unknown primary origin come from with up to 95% accuracy.

AI systems, such as GPT-4, can now learn and use human language, but they learn from astronomical amounts of language input—much more than children receive when learning how to understand and speak a language. The best AI systems train on text with a word count in the trillions, whereas children receive just millions per year.
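The scale of that gap is easy to quantify; a sketch using assumed round numbers (one trillion training words for a large model, roughly ten million words heard per year by a child — both illustrative figures, not from the study):

```python
# Illustrative data-gap arithmetic with assumed round numbers.
llm_training_words = 1e12    # "trillions" of words for the best AI systems
child_words_per_year = 1e7   # order of ten million words per year (assumption)

years_needed = llm_training_words / child_words_per_year
print(f"{years_needed:,.0f} years")  # ~100,000 years at a child's input rate
```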

Due to this enormous data gap, researchers have been skeptical that recent AI advances can tell us much about human learning and development. An ideal test for demonstrating a connection would involve training an AI model, not on massive data from the web, but on only the input that a single child receives. What would the model be able to learn then?

A team of New York University researchers ran this exact experiment. They trained a multimodal AI system through the eyes and ears of a single child, using headcam video recordings taken from when the child was six months old through their second birthday. They examined whether the AI model could learn words and concepts present in a child's everyday experience.

The amount of digital data available is greater than ever before, including in health care, where doctors’ notes are routinely entered into electronic health record systems. Manually reviewing, analyzing, and sorting all these notes requires a vast amount of time and effort, which is exactly why computer scientists have developed artificial intelligence and machine learning techniques to infer medical conditions, demographic traits, and other key information from this written text.

However, safety concerns limit the deployment of such models in practice. One key challenge is that the medical notes used to train and validate these models may differ greatly across hospitals, providers, and time. As a result, models trained at one hospital may not perform reliably when they’re deployed elsewhere.
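To make the dataset-shift failure mode concrete, here is a deliberately minimal toy (hypothetical phrases, not real clinical data, and far simpler than any deployed model): a classifier that has effectively memorized one hospital's phrasing gives no useful answer once the wording changes.

```python
# Toy illustration of dataset shift across hospitals (hypothetical phrases).
# A "model" overfit to hospital A's note-writing style is simulated as an
# exact-match lookup table.

train_notes = {                       # hospital A: phrasing seen in training
    "pt presents with sob": "dyspnea",
    "pt reports cp on exertion": "angina",
}

def predict(note: str) -> str:
    """Exact-match lookup: stands in for a model tied to hospital A's style."""
    return train_notes.get(note, "unknown")

# Same underlying condition, hospital B's phrasing -> no usable prediction.
print(predict("pt presents with sob"))     # dyspnea
print(predict("patient short of breath"))  # unknown
```

Real dataset shift is subtler than exact wording changes, but the pattern is the same: performance measured at the training site does not transfer.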

Previous seminal works by Johns Hopkins University’s Suchi Saria—an associate professor of computer science at the Whiting School of Engineering—and researchers from other top institutions recognize these “dataset shifts” as a major concern in the safety of AI deployment.

Artificial intelligence is pivotal in advancing biotechnology and medical procedures, ranging from cancer diagnostics to the creation of new antibiotics. However, the ecological footprint of large-scale AI systems is substantial. For instance, training extensive language models like GPT-3 requires several gigawatt-hours of energy—equivalent to the output of an average nuclear power plant running at full capacity for several hours.
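The comparison reduces to dividing energy by plant output; a sketch with assumed values (3 GWh standing in for "several," and a 1 GW reactor, both illustrative):

```python
# Rough check of the nuclear-plant comparison, using assumed round figures.
training_energy_gwh = 3.0  # "several gigawatt-hours" (assumed value)
plant_output_gw = 1.0      # typical full-capacity output of one reactor

hours_of_plant_output = training_energy_gwh / plant_output_gw
print(f"{hours_of_plant_output:.1f} hours")  # 3.0 hours of reactor output
```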

Prof. Mario Chemnitz and Dr. Bennet Fischer from Leibniz IPHT in Jena, in collaboration with their international team, have devised an innovative method to develop potentially energy-efficient computing systems that forego the need for extensive electronic infrastructure.

They harness the unique interactions of light waves within optical fibers to forge an advanced artificial learning system. Unlike traditional systems that rely on computer chips containing thousands of electronic components, their system uses a single optical fiber.

Professor Dario Floreano is a Swiss-Italian roboticist and engineer engaged in a bold research venture: the creation of edible robots and digestible electronics.

However counterintuitive it may seem, combining food science and robotics could yield enormous benefits. These range from airlifts of food to advanced health monitoring.

For one, ChatGPT reached over a million users within five days of its release, a mark unmatched by even the most historically popular apps like Facebook and Spotify. Additionally, it has seen near-immediate adoption in business contexts, as organizations seek to gain efficiencies in content creation, code generation, and other functional tasks.

But as businesses rush to take advantage of AI, so too do attackers. One notable way in which they do so is through unethical or malicious LLM apps.

Unfortunately, a recent spate of these malicious apps has introduced risk into organizations' AI journeys, and the associated risk is not easily addressed with a single policy or solution. To unlock the value of AI without opening the door to data loss, security leaders need to rethink how they approach broader visibility and control of corporate applications.