
New research has found that artificial intelligence (AI) analyzing medical scans can identify patients' race with a high degree of accuracy, something human experts cannot do. With the Food and Drug Administration (FDA) approving more algorithms for medical use, the researchers are concerned that AI could end up perpetuating racial biases. They are especially concerned because they could not figure out precisely how the machine-learning models identified race, even from heavily corrupted and low-resolution images.

In the study, published on the preprint server arXiv, an international team of doctors investigated how deep learning models can detect race from medical images. Using private and public datasets of chest scans and self-reported data on race and ethnicity, they first assessed how accurate the algorithms were before investigating the mechanism.

“We hypothesized that if the model was able to identify a patient’s race, this would suggest the models had implicitly learned to recognize racial information despite not being directly trained for that task,” the team wrote in their research.
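As a rough illustration of the kind of setup the paper describes, a standard image classifier can be fine-tuned to predict self-reported race labels from chest X-rays and then scored on held-out scans. The sketch below is a minimal, hypothetical PyTorch version; the backbone, class count, and dataset details are assumptions for illustration, not the authors' actual pipeline.

```python
# Hypothetical sketch of the study's setup: fine-tune an off-the-shelf CNN to
# predict self-reported race from chest X-rays, then measure its accuracy.
# The class count and data loading are placeholders, not the authors' data.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 4  # assumed number of self-reported race/ethnicity categories

# Start from a standard ImageNet-pretrained backbone and swap the final layer.
model = models.resnet34(weights=models.ResNet34_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One gradient step: images are (N, 3, H, W) scan tensors,
    labels are integer class indices for self-reported race."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```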

AI has finally come full circle.

A new suite of algorithms by Google Brain can now design computer chips, specifically ones tailored to run AI software, that vastly outperform those designed by human experts. And the system works in just a few hours, dramatically slashing the weeks- or months-long process that normally gums up digital innovation.

At the heart of these robotic chip designers is a type of machine learning called deep reinforcement learning. This family of algorithms, loosely based on the workings of the human brain, has triumphed over its biological inspiration in games such as chess, Go, and nearly the entire Atari catalog.
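For readers unfamiliar with the technique, the core of deep reinforcement learning is a network that learns from trial-and-error reward rather than labeled examples. The toy sketch below is not Google Brain's system; it is a minimal policy-gradient (REINFORCE) loop in PyTorch that learns to pick the best of K synthetic "placements," with rewards invented purely for illustration.

```python
# Toy illustration of deep reinforcement learning (REINFORCE), not Google's
# chip-placement system: a small policy network learns by trial and error
# which of K discrete "placements" yields the highest (synthetic) reward.
import torch
import torch.nn as nn

K = 8
true_quality = torch.rand(K)  # hidden reward for each placement (made up)
policy = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, K))
optimizer = torch.optim.Adam(policy.parameters(), lr=0.01)

for step in range(2000):
    logits = policy(torch.ones(1, 1))              # dummy fixed "state"
    dist = torch.distributions.Categorical(logits=logits)
    action = dist.sample()                         # try a placement
    reward = true_quality[action] + 0.05 * torch.randn(1)  # noisy feedback
    # Policy gradient: raise the log-probability of actions in proportion
    # to the reward they earned.
    loss = -(dist.log_prob(action) * reward).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print("learned best:", policy(torch.ones(1, 1)).argmax().item(),
      "true best:", true_quality.argmax().item())
```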

Why do so many people get frustrated with their “high-tech” prostheses? Though sophisticated robotics allow for prosthetic joints that can do everything a human joint can and more, the way we control robotic machines right now doesn’t allow us to operate them as naturally as we would a biological hand. Most robotic prostheses are controlled via metal pads on the skin that indirectly measure muscle action and then make assumptions to determine what the person wants to do.


We plan to use magnetomicrometry (MM) to provide natural control over prosthetic limbs by leveraging the human body’s proprioception. When you wiggle one of your fingers, your brain senses muscle lengths, speeds, and forces, and it uses these to figure out the position of that finger. This is called body awareness, or proprioception. When someone undergoes an amputation, if their muscle connections are maintained with what is called the “AMI technique,” their brain still perceives muscle flexion as it relates to joint movement, as if the limb were still present. In other words, they are sensing movement of a phantom limb. To give an amputee intuitive control over a robotic prosthesis, we plan to directly measure the muscle lengths and speeds involved in this phantom limb experience and have the robot copy what the brain expects, so that the brain experiences awareness of the robot’s current state. We see this technique as an important next step in the embodiment of the prosthetic limb (the feeling that it is truly part of one’s body).

Notably, the tracking of magnetic beads is minimally invasive, not requiring wires to run through the skin boundary or electronics to be implanted inside the body, and these magnetic beads can be made safe to implant by coating them in a biocompatible material. In addition, for muscles that are close to the skin, MM can be performed with very high accuracy. We found that by increasing the number of compass sensors we used, we could track live muscle lengths close to the surface of the skin with better than millimeter accuracy, and we found that our measurements were consistent to within the width of a human hair (about 37 thousandths of a millimeter).

Tracking magnets through human tissue is not a new concept. This is the first time, however, that magnets have been tracked at sufficiently high speed for intuitive, reflexive control of a prosthesis. To reach this tracking speed, we had to improve upon traditional magnet tracking algorithms; these improvements are outlined in our previous work on tracking multiple magnets with low time delay, which also describes how we account for the Earth’s magnetic field during portable muscle-length tracking. This is also the first time that a pair of magnets has been used as a distance sensor. MM extends the capabilities we currently have with wired-ultrasound-crystal distance sensing (sonomicrometry, SM) and tantalum-bead-based distance sensing via multiple-perspective X-ray video (fluoromicrometry, FM), enabling us to wirelessly sense distances in the body while a person moves about in a natural environment.
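To make the sensing idea concrete, here is a hedged sketch of the inverse problem behind magnet tracking: given field readings from an array of magnetometers (the "compass sensors" above), recover a bead's position by fitting a point-dipole field model with least squares. The sensor layout, dipole moment, and noiseless synthetic readings are all illustrative assumptions; the real system also handles magnet orientation, multiple magnets, and the Earth's field, as the text notes.

```python
# Illustrative inverse problem for magnet tracking: fit a point-dipole field
# model to magnetometer readings to recover the magnet's position.
# Sensor layout and dipole moment are assumed values, not the real hardware.
import numpy as np
from scipy.optimize import least_squares

MU0 = 4e-7 * np.pi
# A 3x3 grid of sensors on the skin surface (z = 0), coordinates in metres.
sensors = np.array([[x, y, 0.0] for x in (-0.02, 0.0, 0.02)
                                for y in (-0.02, 0.0, 0.02)])
moment = np.array([0.0, 0.0, 5e-3])  # assumed fixed dipole moment, A*m^2

def dipole_field(pos):
    """Point-dipole magnetic field at each sensor for a magnet at `pos`."""
    r = sensors - pos                          # (9, 3) sensor-to-magnet vectors
    d = np.linalg.norm(r, axis=1, keepdims=True)
    rhat = r / d
    term = 3 * rhat * (rhat @ moment)[:, None] - moment
    return (MU0 / (4 * np.pi)) * term / d**3

true_pos = np.array([0.003, -0.002, 0.015])    # magnet ~15 mm under the array
readings = dipole_field(true_pos)              # noiseless synthetic data

# Least-squares fit of position from the 9 x 3 field readings.
fit = least_squares(lambda p: (dipole_field(p) - readings).ravel(),
                    x0=np.array([0.0, 0.0, 0.01]))
print("recovered position (mm):", 1000 * fit.x)
```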



Something to look forward to: Solid-state batteries have yet to make it out of the lab, but automakers are scrambling to be the first to build an electric car that takes advantage of their added energy density and better safety compared to lithium-ion designs. To that end, they’re investing in companies like QuantumScape, Solid Power, and Sakuu to develop manufacturing techniques that either build on existing approaches or rely on new additive manufacturing technology.

Earlier this year, an MIT study revealed that lithium-ion battery costs have fallen by more than 97 percent since the technology’s commercial introduction almost 30 years ago. Not only that, but industry watchers are optimistic that by 2025 lithium-ion battery manufacturing capacity will have tripled, while the price per kilowatt-hour is expected to dip below $100.
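As a back-of-envelope check on those figures (assuming, for illustration, that the 97 percent decline is spread evenly over 30 years), the implied average annual price drop works out to roughly 11 percent:

```python
# Back-of-envelope arithmetic on the quoted figures (assumed: a 97% total
# decline spread evenly over 30 years). What annual decline does that imply?
decline_total = 0.97
years = 30
annual_factor = (1 - decline_total) ** (1 / years)  # price multiplier per year
print(f"implied average annual decline: {1 - annual_factor:.1%}")
# -> roughly an 11% price drop per year, compounded
```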

Albert Einstein’s most famous equation, E = mc², a result of his theory of special relativity, has been used to create matter from light, scientists have said in a new study.

Researchers from New York’s Brookhaven National Laboratory used the Department of Energy’s Relativistic Heavy Ion Collider (RHIC), ordinarily used for nuclear physics research, to accelerate two beams of positively charged gold ions around a loop.
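There is a simple worked number behind "matter from light": by E = mc², turning photons into an electron-positron pair takes at least the pair's rest-mass energy. A quick calculation from standard physical constants:

```python
# Worked number behind "matter from light": creating an electron-positron
# pair from colliding photons requires at least 2 * m_e * c^2 of energy.
m_e = 9.1093837015e-31   # electron mass, kg
c = 299792458.0          # speed of light, m/s
eV = 1.602176634e-19     # joules per electronvolt

pair_threshold_MeV = 2 * m_e * c**2 / eV / 1e6
print(f"minimum energy to create an e-/e+ pair: {pair_threshold_MeV:.3f} MeV")
# -> about 1.022 MeV, supplied at RHIC by the intense photon fields
#    surrounding the accelerated gold ions
```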



Val Kilmer marked the release of his acclaimed documentary “Val” (now streaming on Amazon Prime Video) in a milestone way: he recreated his old speaking voice by feeding hours of recorded audio of himself into an artificial intelligence algorithm. Kilmer lost the ability to speak after undergoing throat cancer treatment in 2014. His team recently joined forces with software company Sonantic and “Val” distributor Amazon to “create an emotional and lifelike model of his old speaking voice” (via The Wrap).

“I’m grateful to the entire team at Sonantic who masterfully restored my voice in a way I’ve never imagined possible,” Val Kilmer said in a statement. “As human beings, the ability to communicate is the core of our existence and the side effects from throat cancer have made it difficult for others to understand me. The chance to narrate my story, in a voice that feels authentic and familiar, is an incredibly special gift.”

Quick: define common sense.

Despite being both universal and essential to how humans understand the world around them and learn, common sense has defied a single precise definition. G. K. Chesterton, an English philosopher and theologian, famously wrote at the turn of the 20th century that “common sense is a wild thing, savage, and beyond rules.” Modern definitions agree that, at minimum, it is a natural, rather than formally taught, human ability that allows people to navigate daily life.

Common sense is unusually broad, encompassing not only social abilities, like managing expectations and reasoning about other people’s emotions, but also a naive sense of physics, such as knowing that a heavy rock cannot be safely placed on a flimsy plastic table. It is naive because people know such things without consciously working through physics equations.
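To see how much hidden physics that "naive" judgment compresses, here is the explicit calculation people never consciously perform, with every number assumed purely for illustration:

```python
# The rock-on-table judgment made explicit: compare the rock's weight to the
# table's rated load. All values are assumed for illustration only.
g = 9.81                  # gravitational acceleration, m/s^2
rock_mass = 40.0          # kg (assumed heavy rock)
table_rating_kg = 20.0    # kg (assumed flimsy plastic table rating)

rock_weight = rock_mass * g            # force on the table, newtons
max_load = table_rating_kg * g         # rated capacity, newtons
print("safe" if rock_weight <= max_load else "the table will likely fail")
```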

An artificial neural network designed by an international team involving UCL can decode behavior directly from raw brain-activity data, paving the way for new discoveries and a closer integration between technology and the brain.

The new method could accelerate discoveries of how brain activity relates to behavior.

The study, published today in eLife, was co-led by the Kavli Institute for Systems Neuroscience in Trondheim and the Max Planck Institute for Human Cognitive and Brain Sciences in Leipzig and funded by Wellcome and the European Research Council. It shows that a convolutional neural network, a specific type of deep learning algorithm, is able to decode many different behaviors and stimuli from a wide variety of brain regions in different species, including humans.
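As a rough illustration of the approach (not the authors' published architecture), a small convolutional network can map a window of raw multi-channel neural activity to a behavioral variable. The channel counts, window length, and output dimension below are assumptions chosen for the sketch:

```python
# Minimal sketch of decoding behavior from raw neural activity with a
# convolutional network. Shapes and channel counts are illustrative
# assumptions, not the study's actual architecture.
import torch
import torch.nn as nn

N_CHANNELS = 64   # recording channels (assumed)
N_SAMPLES = 512   # time samples per window (assumed)

decoder = nn.Sequential(
    nn.Conv1d(N_CHANNELS, 32, kernel_size=7, padding=3),  # temporal filters
    nn.ReLU(),
    nn.Conv1d(32, 32, kernel_size=7, padding=3),
    nn.ReLU(),
    nn.AdaptiveAvgPool1d(1),   # pool over time
    nn.Flatten(),
    nn.Linear(32, 2),          # e.g. decode a 2D behavioral variable
)

window = torch.randn(1, N_CHANNELS, N_SAMPLES)  # one window of raw activity
print(decoder(window).shape)                    # -> torch.Size([1, 2])
```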