A new algorithm developed by University of Chicago researchers can predict crime with about 90% accuracy a week ahead of time.
10. Microsoft Cognitive Toolkit (CNTK)
Closing out our list of the 10 best machine learning software is the Microsoft Cognitive Toolkit (CNTK), Microsoft’s framework for training deep learning models. It provides APIs for Python, C++, and more.
CNTK is an open-source toolkit for commercial-grade distributed deep learning, and it allows users to easily combine popular model types such as feed-forward DNNs, convolutional neural networks (CNNs), and recurrent neural networks (RNNs/LSTMs).
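For a flavor of the Python API, here is a minimal sketch of a small feed-forward classifier in CNTK (assuming a CNTK 2.x install; the layer sizes and learning rate are arbitrary illustration values):

```python
import cntk as C

# A small feed-forward DNN in CNTK's Python API.
features = C.input_variable(784)          # e.g. a flattened 28x28 image
labels   = C.input_variable(10)           # one-hot class labels

model = C.layers.Sequential([
    C.layers.Dense(200, activation=C.relu),
    C.layers.Dense(10)                    # logits; softmax is applied in the loss
])(features)

loss   = C.cross_entropy_with_softmax(model, labels)
metric = C.classification_error(model, labels)

# Training pairs the model with a learner and a trainer.
learner = C.sgd(model.parameters, lr=0.01)
trainer = C.Trainer(model, (loss, metric), [learner])
```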
Advances in machine learning and artificial intelligence have sparked interest from governments that would like to use these tools for predictive policing to deter crime. Early efforts at crime prediction have been controversial, however, because they do not account for systemic biases in police enforcement.
It looks like algorithms can write academic papers about themselves now. We gotta wonder: how long until human academics are obsolete?
In an editorial published by Scientific American, Swedish researcher Almira Osmanovic Thunström describes what began as a simple experiment in how well OpenAI’s GPT-3 text-generating algorithm could write about itself and ended with a paper that’s currently being peer reviewed.
The initial command Thunström entered into the text generator was elementary enough: “Write an academic thesis in 500 words about GPT-3 and add scientific references and citations inside the text.”
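For reference, a one-shot prompt like that could be sent through OpenAI’s Python client of the time roughly as follows. The model name and sampling parameters below are illustrative assumptions, not details reported in the editorial:

```python
import openai

openai.api_key = "sk-..."  # your API key

# Illustrative only: model choice and parameters are assumptions,
# not details from Thunström's experiment.
response = openai.Completion.create(
    model="text-davinci-002",
    prompt=("Write an academic thesis in 500 words about GPT-3 "
            "and add scientific references and citations inside the text."),
    max_tokens=900,
    temperature=0.7,
)
print(response["choices"][0]["text"])
```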
Some insightful experiments have occasionally been made on the subject of this review, but those studies have had almost no impact on mainstream neuroscience. In the 1920s, Katz [1] showed that neurons communicate and fire even if the transmission of ions between two neighboring neurons is blocked, indicating that there is a nonphysical communication between neurons. This observation has been largely ignored in the neuroscience field, however, and the opinion that physical contact between neurons is necessary for communication prevailed.

In the 1960s, the experiments of Hodgkin et al., in which neuron bursts could be generated even with the filaments in the interior of the neuron dissolved into the cell fluid [30, 4], left one important question unaddressed: could the time gap between spikes be regulated without filaments? In the cognitive processes of the brain, subthreshold communication that modulates the time gap between spikes holds the key to information processing [14, 6]. The membrane does not need filaments to fire, but a blunt firing is not useful for cognition. The membrane’s ability to modulate time has thus far been assigned only to the density of ion channels. Such partial evidence was debated, because a neuron would fail to process a new pattern of spike time gaps before adjusting that density: if a neuron had to wait for the ion-channel density to adjust (~20 minutes are required [25]) before editing a time gap of a few milliseconds between two consecutive spikes, the cognitive response would become non-functional.

Many such discrepancies were noted, yet no efforts were made to resolve them. In the 1990s, there were many reports that electromagnetic bursts or an electric field imbalance in the environment cause firing [7], but those reports were not considered in work on the modeling of neurons. This is not surprising: improvements made to the Hodgkin and Huxley model in the 1990s were ignored simply because it was too computationally intensive to automate neural networks according to the new, more complex equations, and even when greater computing power became available, they remained ignored. We also note here the eventual discovery of the grid-like network of actin and beta-spectrin just below the neuron membrane [26], which is directly connected to the membrane. This prompts the question: why is it present, bridging the membrane and the filamentary bundles in a neuron?
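To give a sense of the computational cost mentioned above: even the baseline Hodgkin–Huxley model is a four-variable ODE system that must be integrated at sub-millisecond time steps, and the 1990s refinements only add terms on top of it. Below is a minimal forward-Euler sketch with the standard squid-axon parameters; the injected current and spike-detection threshold are arbitrary illustration choices:

```python
import numpy as np

# Standard Hodgkin-Huxley squid-axon parameters (modern sign convention).
C_m = 1.0                              # membrane capacitance, uF/cm^2
g_Na, g_K, g_L = 120.0, 36.0, 0.3      # max conductances, mS/cm^2
E_Na, E_K, E_L = 50.0, -77.0, -54.387  # reversal potentials, mV

def alpha_m(V): return 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
def beta_m(V):  return 4.0 * np.exp(-(V + 65.0) / 18.0)
def alpha_h(V): return 0.07 * np.exp(-(V + 65.0) / 20.0)
def beta_h(V):  return 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))
def alpha_n(V): return 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
def beta_n(V):  return 0.125 * np.exp(-(V + 65.0) / 80.0)

dt, T = 0.01, 50.0                     # time step and duration, ms
V, m, h, n = -65.0, 0.05, 0.6, 0.32    # resting initial conditions
I_ext = 10.0                           # injected current, uA/cm^2

spikes = []
for i in range(int(T / dt)):
    I_Na = g_Na * m**3 * h * (V - E_Na)
    I_K  = g_K * n**4 * (V - E_K)
    I_L  = g_L * (V - E_L)
    m += dt * (alpha_m(V) * (1 - m) - beta_m(V) * m)
    h += dt * (alpha_h(V) * (1 - h) - beta_h(V) * h)
    n += dt * (alpha_n(V) * (1 - n) - beta_n(V) * n)
    V_new = V + dt * (I_ext - I_Na - I_K - I_L) / C_m
    if V < 0.0 <= V_new:               # crude upward-crossing spike detector
        spikes.append(i * dt)
    V = V_new

print("spike times (ms):", np.round(spikes, 2))
```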
The list is endless, but the supreme concern is probably the simplest question ever asked in neuroscience: what does a nerve spike look like in reality? The answer is out there. It is a 2D ring-shaped electric field perturbation; since the ring has a width, we could also state that a nerve spike is a 3D structure of the electric field. In Figure 1a, we have compared the shape of a nerve spike, perception vs. reality. The difference is not so simple: the majority of the ion channels in that circular strip have to be activated simultaneously. In this circular area, polarization and depolarization should happen together for all ion channels. That is easy to presume, but it is difficult to explain the mechanism.
A new GPU-based machine learning algorithm developed by researchers at the Indian Institute of Science (IISc) can help scientists better understand and predict connectivity between different regions of the brain.
The algorithm, called Regularized, Accelerated, Linear Fascicle Evaluation, or ReAl-LiFE, can rapidly analyze the enormous amounts of data generated from diffusion Magnetic Resonance Imaging (dMRI) scans of the human brain. Using ReAl-LiFE, the team was able to evaluate dMRI data over 150 times faster than existing state-of-the-art algorithms.
“Tasks that previously took hours to days can be completed within seconds to minutes,” says Devarajan Sridharan, Associate Professor at the Centre for Neuroscience (CNS), IISc, and corresponding author of the study published in the journal Nature Computational Science.
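At its core, LiFE-style connectome evaluation solves a large non-negative least-squares problem: each candidate fascicle contributes a column to a matrix that should reproduce the measured dMRI signal, and fascicles whose weights go to zero are pruned. The toy sketch below shows that formulation with simple ridge regularization folded in by stacking; it is a conceptual illustration only, not the GPU-accelerated ReAl-LiFE implementation:

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)

# Toy problem: 200 dMRI measurements, 50 candidate fascicles.
# M[:, j] is fascicle j's predicted contribution to the signal.
n_meas, n_fascicles = 200, 50
M = rng.random((n_meas, n_fascicles))
w_true = np.zeros(n_fascicles)
w_true[rng.choice(n_fascicles, size=10, replace=False)] = rng.random(10)
y = M @ w_true + 0.01 * rng.standard_normal(n_meas)

# Regularized non-negative least squares:
#   min_w ||y - M w||^2 + lam * ||w||^2,  subject to w >= 0,
# solved by stacking sqrt(lam)*I under M (a standard ridge trick).
lam = 0.1
M_aug = np.vstack([M, np.sqrt(lam) * np.eye(n_fascicles)])
y_aug = np.concatenate([y, np.zeros(n_fascicles)])
w, _ = nnls(M_aug, y_aug)

# Fascicles with zero weight contribute nothing and can be pruned.
print("fascicles kept:", np.count_nonzero(w > 1e-6), "of", n_fascicles)
```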
The differences? The new Mayflower—logically dubbed the Mayflower 400—is a 50-foot-long trimaran (that’s a boat that has one main hull with a smaller hull attached on either side), can go up to 10 knots or 18.5 kilometers an hour, is powered by electric motors that run on solar energy (with diesel as a backup if needed), and required a crew of… zero.
That’s because the ship was navigated by an on-board AI. Like a self-driving car, the ship was tricked out with multiple cameras (6 of them) and sensors (45 of them) to feed the AI information about its surroundings and help it make wise navigation decisions, such as re-routing around spots with bad weather. There’s also onboard radar and GPS, as well as altitude and water-depth detectors.
The ship and its voyage were a collaboration between IBM and a marine research non-profit called ProMare. Engineers trained the Mayflower 400’s “AI Captain” on petabytes of data; according to an IBM overview about the ship, its decisions are based on if/then rules and machine learning models for pattern recognition, but also go beyond these standards. The algorithm “learns from the outcomes of its decisions, makes predictions about the future, manages risks, and refines its knowledge through experience.” It’s also able to integrate far more inputs in real time than a human is capable of.
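Purely to illustrate the “if/then rules plus learned models” pattern the overview describes (this is a toy sketch, not IBM’s actual system; every name and threshold here is a hypothetical):

```python
from dataclasses import dataclass

@dataclass
class SensorReading:
    wave_height_m: float
    wind_knots: float
    obstacle_range_m: float
    water_depth_m: float

def learned_collision_risk(r: SensorReading) -> float:
    """Stand-in for a trained pattern-recognition model (hypothetical).
    A real system would score camera/radar tracks; here, a toy heuristic."""
    return max(0.0, min(1.0, 50.0 / max(r.obstacle_range_m, 1.0)))

def decide(r: SensorReading) -> str:
    # Hard if/then safety rules take precedence over the learned score.
    if r.water_depth_m < 5.0:
        return "turn_to_deeper_water"
    if r.wave_height_m > 4.0 or r.wind_knots > 35.0:
        return "reroute_around_weather"
    # The learned model handles the ambiguous cases.
    if learned_collision_risk(r) > 0.5:
        return "evasive_maneuver"
    return "hold_course"

print(decide(SensorReading(1.2, 18.0, 40.0, 60.0)))  # -> evasive_maneuver
```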
How to use causal influence diagrams to recognize the hidden incentives that shape an AI agent’s behavior.
There is rightfully a lot of concern about the fairness and safety of advanced machine learning systems. To attack the root of the problem, researchers can analyze the incentives posed by a learning algorithm using causal influence diagrams (CIDs). Among others, DeepMind Safety Research has written about their research on CIDs, and I have written before about how they can be used to avoid reward tampering. However, while there is some writing on the types of incentives that can be found using CIDs, I haven’t seen a succinct write-up of the graphical criteria used to identify such incentives. To fill this gap, this post summarizes the incentive concepts and their corresponding graphical criteria, which were originally defined in the paper Agent Incentives: A Causal Perspective.
A causal influence diagram is a directed acyclic graph where different types of nodes represent different elements of an optimization problem. Decision nodes represent values that an agent can influence, utility nodes represent the optimization objective, and structure nodes (also called chance nodes) represent the remaining variables, such as the state. The arrows show how the nodes are causally related, with dotted arrows indicating the information that an agent uses to make a decision. Below is the CID of a Markov Decision Process, with decision nodes in blue and utility nodes in yellow:
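A CID like this is straightforward to encode as a typed DAG. Here is a minimal sketch of a two-step MDP CID using networkx; the node names and the `information` edge attribute are my own encoding choices, not notation from the paper:

```python
import networkx as nx

# A two-step MDP as a causal influence diagram.
# Node kinds: decision (agent-chosen), utility (optimized), structure (state).
cid = nx.DiGraph()
for s in ["S1", "S2"]:
    cid.add_node(s, kind="structure")
for a in ["A1", "A2"]:
    cid.add_node(a, kind="decision")
for r in ["R1", "R2"]:
    cid.add_node(r, kind="utility")

# Solid causal edges: states and actions drive the next state and the rewards.
cid.add_edges_from([("S1", "S2"), ("A1", "S2"),
                    ("S1", "R1"), ("A1", "R1"),
                    ("S2", "R2"), ("A2", "R2")])

# Dotted information edges: what the agent observes before each decision.
cid.add_edge("S1", "A1", information=True)
cid.add_edge("S2", "A2", information=True)

assert nx.is_directed_acyclic_graph(cid)
```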
The first model is trying to predict a high school student’s grades in order to evaluate their university application. The model uses the student’s high school and gender as input and outputs the predicted GPA. In the CID below, we see that the predicted grade is a decision node. Since we train our model for accurate predictions, accuracy is the utility node. The remaining structure nodes show how relevant facts about the world relate to each other. The arrows from gender and high school to predicted grade show that those are inputs to the model. For our example, we assume that a student’s gender doesn’t affect their grade, so there is no arrow between them. On the other hand, a student’s high school is assumed to affect their education, which in turn affects their grade, which of course affects accuracy. The example assumes that a student’s race influences the high school they go to. Note that only high school and gender are known to the model.
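Encoding this CID the same way makes the graphical criteria mechanical to check. For instance, the paper’s criterion for an instrumental control incentive on a variable X is that X lies on a directed path from the decision to the utility. A sketch of that check, again with networkx and my own node names:

```python
import networkx as nx

# The grade-prediction CID from the example above.
cid = nx.DiGraph()
kinds = {"race": "structure", "gender": "structure",
         "high_school": "structure", "education": "structure",
         "grade": "structure",
         "predicted_grade": "decision", "accuracy": "utility"}
for node, kind in kinds.items():
    cid.add_node(node, kind=kind)

# Solid causal edges from the example.
cid.add_edges_from([("race", "high_school"), ("high_school", "education"),
                    ("education", "grade"), ("grade", "accuracy"),
                    ("predicted_grade", "accuracy")])
# Dotted information edges (the model's inputs). With a single decision these
# point into the decision node, so they cannot lie on any path out of it.
cid.add_edge("high_school", "predicted_grade", information=True)
cid.add_edge("gender", "predicted_grade", information=True)

def instrumental_control_incentive(g, decision, x, utility):
    """Graphical criterion: X lies on a directed path decision -> ... -> utility."""
    return nx.has_path(g, decision, x) and nx.has_path(g, x, utility)

# The predicted grade only affects accuracy directly, so the model has no
# instrumental control incentive on, say, the student's actual grade:
print(instrumental_control_incentive(cid, "predicted_grade", "grade", "accuracy"))
# -> False
```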
Yet when faced with enormous protein complexes, AI faltered. Until now. In a mind-bending feat, a new algorithm deciphered the structure at the heart of inheritance: a massive complex of roughly 1,000 proteins that helps channel DNA instructions to the rest of the cell. The AI model builds on AlphaFold from DeepMind and RoseTTAFold from Dr. David Baker’s lab at the University of Washington, both of which were released to the public for others to experiment with and extend.
Our genes are housed in a planet-like structure, dubbed the nucleus, for protection. The nucleus is a high-security castle: only specific molecules are allowed in and out to deliver DNA instructions to the outside world—for example, to protein-making factories in the cell that translate genetic instructions into proteins.
At the heart of regulating this traffic are nuclear pore complexes, or NPCs (wink to gamers). They’re like extremely intricate drawbridges that strictly monitor the ins and outs of molecular messengers. In biology textbooks, NPCs often look like thousands of cartoonish potholes dotted on a globe. In reality, each NPC is a massively complex, donut-shaped architectural wonder, and one of the largest protein complexes in our bodies.