There are competing notions of fairness — and sometimes they’re incompatible, as facial recognition and lending algorithms show.
Summary: A new deep learning algorithm is able to quantify arousal and awareness in humans at the same time.
Source: CORDIS
New research supported by the EU-funded HBP SGA3 and DoCMA projects is giving scientists new insight into human consciousness.
Led by Korea University and the projects' partner University of Liège (Belgium), the research team has developed an explainable consciousness indicator (ECI) to explore different components of consciousness.
Amid the chaotic chains of events that ensue when protons smash together at the Large Hadron Collider in Europe, one particle has popped up that appears to go to pieces in a peculiar way.
All eyes are on the B meson, a yoked pair of quark particles. Having caught whiffs of unexpected B meson behavior before, researchers with the Large Hadron Collider beauty experiment (LHCb) have spent years documenting rare collision events featuring the particles, in hopes of conclusively proving that some novel fundamental particle or effect is meddling with them.
In their latest analysis, first presented at a seminar in March, the LHCb physicists found that several measurements involving the decay of B mesons conflict slightly with the predictions of the Standard Model of particle physics — the reigning set of equations describing the subatomic world. Taken alone, each oddity looks like a statistical fluctuation, and they may all evaporate with additional data, as has happened before. But their collective drift suggests that the aberrations may be breadcrumbs leading beyond the Standard Model to a more complete theory.
Machine learning algorithms are finding new applications in game building. NPCs driven by machine learning have made it possible to field convincing virtual players.
Study reveals the different ways the brain parses information through interactions of waves of neural activity.
This could lead to a truly random number generator, making things much more secure.
Random numbers are crucial for computing, but our current algorithms aren’t truly random. Researchers at Brown University have now found a way to tap into the fluctuations of quasiparticles to generate millions of truly random numbers per second.
Random number generators are key parts of computer software, but technically they don't quite live up to their name. The algorithms that generate these numbers are still deterministic, meaning that anyone with enough information about how they work could potentially find patterns and predict the numbers produced. These pseudo-random numbers suffice for low-stakes uses like gaming, but for scientific simulations or cybersecurity, truly random numbers are important.
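As a minimal illustration of that determinism (a sketch, not code from the Brown study), the snippet below seeds Python's standard pseudo-random generator twice and gets exactly the same "random" sequence both times:

```python
import random

def pseudo_random_sequence(seed: int, n: int = 5) -> list:
    """Return n 'random' floats from Python's Mersenne Twister PRNG."""
    rng = random.Random(seed)       # deterministic algorithm plus a known seed
    return [rng.random() for _ in range(n)]

# Identical seed, identical output: predictable, hence only pseudo-random.
print(pseudo_random_sequence(42))
print(pseudo_random_sequence(42))
```

Anyone who knows the algorithm and the seed can reproduce every value, which is exactly why such generators fall short for cryptographic uses.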
In recent years scientists have turned to the strange world of quantum physics for true randomization, using photons to generate strings of random ones and zeroes or tapping into the quantum vibrations of diamond. And for the new study, the Brown scientists tried something similar.
Labeling data can be a chore. It's the main source of sustenance for computer-vision models; without it, they'd have a lot of difficulty identifying objects, people, and other important image characteristics. Yet producing just an hour of tagged and labeled data can take a whopping 800 hours of human time. Machines develop a high-fidelity understanding of the world as they get better at perceiving and interacting with their surroundings, but they need more help to get there.
Scientists from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), Microsoft, and Cornell University have attempted to solve this problem plaguing vision models by creating “STEGO,” an algorithm that can jointly discover and segment objects without any human labels at all, down to the pixel.
STEGO learns something called “semantic segmentation”—fancy speak for the process of assigning a label to every pixel in an image. Semantic segmentation is an important skill for today’s computer-vision systems because images can be cluttered with objects. Even more challenging is that these objects don’t always fit into literal boxes; algorithms tend to work better for discrete “things” like people and cars as opposed to “stuff” like vegetation, sky, and mashed potatoes. A previous system might simply perceive a nuanced scene of a dog playing in the park as just a dog, but by assigning every pixel of the image a label, STEGO can break the image into its main ingredients: a dog, sky, grass, and its owner.
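STEGO itself discovers these labels without supervision; purely to illustrate what a per-pixel label map is, here is a hedged NumPy sketch that assumes some model has already produced a score (logit) for each class at every pixel and simply takes the per-pixel argmax:

```python
import numpy as np

CLASSES = ["dog", "sky", "grass", "person"]   # illustrative label set

def label_map(logits: np.ndarray) -> np.ndarray:
    """Convert per-pixel class logits of shape (num_classes, H, W)
    into a segmentation map of shape (H, W): one class index per pixel."""
    return logits.argmax(axis=0)

# Toy example on a 4x4 "image" with random logits.
rng = np.random.default_rng(0)
logits = rng.normal(size=(len(CLASSES), 4, 4))
seg = label_map(logits)
print(seg)                          # integer class id for every pixel
print(CLASSES[seg[0, 0]])           # class name of the top-left pixel
```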
For centuries, mathematicians have tried to prove that Euler’s fluid equations can produce nonsensical answers. A new approach to machine learning has researchers betting that “blowup” is near.
Back in 1993, AI pioneer Jürgen Schmidhuber published the paper A Self-Referential Weight Matrix, which he described as a "thought experiment… intended to make a step towards self-referential machine learning by showing the theoretical possibility of self-referential neural networks whose weight matrices (WMs) can learn to implement and improve their own weight change algorithm." A lack of subsequent practical studies in this area, however, left this potentially impactful meta-learning ability unrealized until now.
In the new paper A Modern Self-Referential Weight Matrix That Learns to Modify Itself, a research team from The Swiss AI Lab, IDSIA, University of Lugano (USI) & SUPSI, and King Abdullah University of Science and Technology (KAUST) presents a scalable self-referential WM (SRWM) that leverages outer products and the delta update rule to update and improve itself, achieving both practical applicability and impressive performance in game environments.
The proposed model is built upon fast weight programmers (FWPs), a scalable and effective method dating back to the ‘90s that can learn to memorize past data and compute fast weight changes via programming instructions that are additive outer products of self-invented activation patterns, aka keys and values for self-attention. In light of their connection to linear variants of today’s popular transformer architectures, FWPs are now witnessing a revival. Recent studies have advanced conventional FWPs with improved elementary programming instructions or update rules invoked by their slow neural net to reprogram the fast neural net, an approach that has been dubbed the “delta update rule.”
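For a rough sense of what the delta update rule does (a sketch under simplified assumptions, not the authors' implementation), the slow net emits a key k, a value v, and a learning rate beta, and the fast weight matrix is nudged from the value it currently stores for k toward the new value via an outer product, W <- W + beta * (v - W k) k^T:

```python
import numpy as np

def delta_update(W, k, v, beta):
    """One delta-rule fast-weight update: W <- W + beta * outer(v - W @ k, k).
    Replaces the value currently associated with key k by a blend with v."""
    v_old = W @ k                    # value the fast net currently returns for k
    return W + beta * np.outer(v - v_old, k)

# Toy usage: write one key-value association, then read it back.
rng = np.random.default_rng(1)
W = np.zeros((3, 4))                              # fast weights (value_dim x key_dim)
k = rng.normal(size=4); k /= np.linalg.norm(k)    # unit-norm key
v = np.array([1.0, 2.0, 3.0])
W = delta_update(W, k, v, beta=1.0)
print(W @ k)                                      # approximately [1. 2. 3.]
```

With beta = 1 and a unit-norm key the stored association is fully overwritten; smaller beta interpolates between the old and new values.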
Neuroscientists from St. Petersburg University, led by Professor Allan V. Kalueff, in collaboration with an international team of IT specialists, have become the first in the world to apply artificial intelligence (AI) algorithms to phenotype zebrafish responses to psychoactive drugs. They trained the AI to determine, from the fish's responses, which psychotropic agents were used in the experiment.
The research findings are published in the journal Progress in Neuro-Psychopharmacology and Biological Psychiatry.
The zebrafish (Danio rerio) is a freshwater bony fish that is currently the second most widely used model organism in biomedical research, after mice. The advantages of using zebrafish as a model biological system are numerous, including low maintenance costs and high genetic and physiological similarity to humans; zebrafish share about 70% of their genes with us. Furthermore, the relative simplicity of the zebrafish nervous system enables researchers to obtain clearer and more accurate results than studies with more complex organisms.
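The study's actual pipeline is not reproduced here; as a loose, hypothetical illustration of the classification task it describes, one could train an off-the-shelf classifier on per-fish behavioral features (swim speed, freezing time, time spent near the tank bottom, and so on) to predict which drug was administered:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical data: one row per fish, columns are behavioral features
# such as mean speed, freezing time, and time spent near the tank bottom.
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 3))                               # placeholder features
y = rng.choice(["control", "drug_A", "drug_B"], size=120)   # placeholder drug labels

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)   # how well behavior predicts the drug
print(scores.mean())
```

With real behavioral recordings in place of the placeholder arrays, the cross-validated accuracy would indicate how distinguishable the drug responses are.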