With some reports predicting that the precision agriculture market will reach $12.9 billion by 2027, there is an increasing need for sophisticated data-analysis solutions that can guide management decisions in real time. A new study from an interdisciplinary research group at the University of Illinois offers a promising approach to processing precision ag data efficiently and accurately.
Addressing problems of bias in artificial intelligence, computer scientists from Princeton and Stanford University have developed methods to obtain fairer data sets containing images of people. The researchers propose improvements to ImageNet, a database of more than 14 million images that has played a key role in advancing computer vision over the past decade.
ImageNet, which includes images of objects and landscapes as well as people, serves as a source of training data for researchers creating machine learning algorithms that classify images or recognize elements within them. ImageNet’s unprecedented scale necessitated automated image collection and crowdsourced image annotation. While the database’s person categories have rarely been used by the research community, the ImageNet team has been working to address biases and other concerns about images featuring people that are unintended consequences of ImageNet’s construction.
“Computer vision now works really well, which means it’s being deployed all over the place in all kinds of contexts,” said co-author Olga Russakovsky, an assistant professor of computer science at Princeton. “This means that now is the time for talking about what kind of impact it’s having on the world and thinking about these kinds of fairness issues.”
Yes, you can detect another person's consciousness. Christof Koch described a method called 'zap and zip': transcranial magnetic stimulation is the 'zap', and the resulting brain activity is recorded with an EEG and analyzed with a data-compression algorithm, the 'zip'. From this analysis, a perturbational complexity index (PCI) is calculated. If the PCI is above 0.31, you are conscious; if it is below 0.31, you are unconscious. If this link does not work, go to the library and look at the November 2017 issue of Scientific American; it is the cover story.
Zapping the brain with magnetic pulses while measuring its electrical activity is proving to be a reliable way to detect consciousness.
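To make the 'zip' step a bit more concrete, here is a minimal toy sketch in Python. It is not the published PCI pipeline (which involves TMS-evoked potentials, source localization, statistical binarization and a specific normalization); it only illustrates the idea of compressing a binarized spatiotemporal response and comparing the result against the 0.31 cut-off mentioned above. All array shapes and values below are invented for illustration.

```python
import numpy as np

def lz_phrase_count(bits):
    """Simplified Lempel-Ziv (LZ76-style) phrase count of a binary sequence."""
    s = "".join("1" if b else "0" for b in bits)
    n = len(s)
    i, phrases = 0, 0
    while i < n:
        l = 1
        # extend the candidate phrase while it already occurs in the preceding text
        while i + l <= n and s[i:i + l] in s[:i + l - 1]:
            l += 1
        phrases += 1
        i += l
    return phrases

def toy_complexity_index(binarized_response):
    """Normalized compression complexity of a binarized channels x time response."""
    bits = binarized_response.flatten()
    n = bits.size
    c = lz_phrase_count(bits)
    # scale so that a maximally random sequence scores near 1
    return c * np.log2(n) / n

# hypothetical binarized TMS-evoked response: 60 channels x 300 time samples
rng = np.random.default_rng(0)
response = (rng.random((60, 300)) < 0.15).astype(int)
value = toy_complexity_index(response)
print(f"toy complexity index: {value:.2f} -> {'conscious' if value > 0.31 else 'unconscious'}")
```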
Brain-computer interfaces (BCIs) are tools that can connect the human brain with an electronic device, typically using electroencephalography (EEG). In recent years, advances in machine learning (ML) have enabled the development of more advanced BCI spellers, devices that allow people to communicate with computers using their thoughts.
So far, most studies in this area have focused on developing BCI classifiers that are faster and more reliable, rather than investigating their possible security vulnerabilities. Recent research, however, suggests that machine learning algorithms can be fooled by attackers, whether the algorithms are used in computer vision, speech recognition, or other domains. This is often done using adversarial examples: tiny perturbations to the input data that are imperceptible to humans but can change the model's output.
Researchers at Huazhong University of Science and Technology have recently carried out a study investigating the security of EEG-based BCI spellers, and more specifically, how they are affected by adversarial perturbations. Their paper, pre-published on arXiv, suggests that BCI spellers are fooled by these perturbations and are thus highly vulnerable to adversarial attacks.
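The paper itself studies speller-specific attacks on P300 and SSVEP paradigms, but the basic idea of an adversarial perturbation can be sketched generically. The toy model, its architecture, and the one-step fast gradient sign method (FGSM) below are illustrative assumptions of mine, not the authors' attack:

```python
import torch
import torch.nn as nn

# Toy EEG classifier: all names and shapes here are illustrative,
# not the architecture used in the Huazhong study.
class ToyEEGNet(nn.Module):
    def __init__(self, n_channels=8, n_samples=128, n_classes=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(n_channels * n_samples, 64),
            nn.ReLU(),
            nn.Linear(64, n_classes),
        )

    def forward(self, x):
        return self.net(x)

def fgsm_perturbation(model, eeg, label, epsilon=0.01):
    """One-step fast gradient sign attack: a tiny perturbation that pushes
    the classifier away from the true label while staying nearly invisible."""
    eeg = eeg.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(eeg), label)
    loss.backward()
    return (eeg + epsilon * eeg.grad.sign()).detach()

model = ToyEEGNet()
eeg = torch.randn(1, 8, 128)   # one simulated EEG epoch (channels x samples)
label = torch.tensor([1])      # its assumed true class
adversarial_eeg = fgsm_perturbation(model, eeg, label)
print("max absolute change:", (adversarial_eeg - eeg).abs().max().item())
```

With an epsilon this small, each EEG sample changes by at most 0.01, which is the sense in which such perturbations can remain effectively invisible while still steering the classifier's decision.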
Forget the Thighmaster. Someday you might add a spring to your step when walking or running using a pair of mechanically powered shorts.
Step up: The lightweight exoskeleton-pants were developed by researchers at Harvard University and the University of Nebraska, Omaha. They are the first device to assist with both walking and running, using an algorithm that adapts to each gait.
Making strides: The super-shorts show how wearable exoskeleton technology might someday help us perform all sorts of tasks. Progress in materials, actuators, and machine learning has led to a new generation of lighter, more powerful, and more adaptive wearable systems. Bulkier and heavier commercial systems are already used to help people with disabilities and workers in some factories and warehouses.
TrackML was a 2018 Kaggle competition with $25,000 in cash prizes in which the challenge was to reconstruct particle tracks from 3D points left in silicon detectors. CERN (the European Organization for Nuclear Research) provided data on particle collision events. Collisions at the LHC occur at a rate of hundreds of millions per second, producing tens of petabytes of data per year. There is a clear need to sift through such an amount of data as efficiently as possible, and this is where machine learning methods may help.
Particles, in this case protons, are boosted to high energies inside the Large Hadron Collider (LHC): each beam can reach 6.5 TeV, giving a total of 13 TeV at collision. Electromagnetic fields accelerate the electrically charged protons around a 27-kilometer-long ring. When the proton beams collide, they produce a diverse set of short-lived subatomic byproducts that carry valuable information about some of the most fundamental questions in physics.
Detectors are made of layers upon layers of subdetectors, each designed to look for specific particles or properties: calorimeters that measure energy, particle-identification detectors that pin down what kind of particle passed through, and tracking devices that record the path a particle takes. [1] We are, of course, interested in the tracking: tiny electrical signals are recorded as particles move through these detectors. What I will discuss are methods for reconstructing tracks from these recorded hit patterns, specifically algorithms involving machine learning.
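As a flavor of what "machine learning for tracking" can look like in code, here is a deliberately crude sketch: cluster hits by their angular coordinates with DBSCAN, so that hits sharing a cluster label become a track candidate. This is a toy baseline under my own assumptions about the input format, nowhere near the engineered helix-unrolling features or learned models used by strong TrackML entries:

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.preprocessing import StandardScaler

def cluster_hits(hits_xyz, eps=0.05, min_samples=3):
    """Group 3D detector hits into track candidates.

    hits_xyz: (n_hits, 3) array of x, y, z positions in the detector frame.
    Hits that end up with the same cluster label form one track candidate;
    a label of -1 means DBSCAN treated the hit as noise.
    """
    x, y, z = hits_xyz[:, 0], hits_xyz[:, 1], hits_xyz[:, 2]
    r = np.sqrt(x**2 + y**2 + z**2)
    phi = np.arctan2(y, x)                        # azimuthal angle in the transverse plane
    theta = np.arccos(np.clip(z / r, -1.0, 1.0))  # polar angle from the beam axis
    features = StandardScaler().fit_transform(np.column_stack([phi, theta]))
    return DBSCAN(eps=eps, min_samples=min_samples).fit_predict(features)

# Tiny smoke test on fake hits; real TrackML events contain roughly 100k hits each.
rng = np.random.default_rng(0)
labels = cluster_hits(rng.normal(size=(1000, 3)))
print("track candidates:", len(set(labels)) - (-1 in labels))
```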
A new Einsteinian equation, ER=EPR, may be the clue physicists need to merge quantum mechanics with general relativity.
The drug, known as DSP-1181, was created by using algorithms to sift through potential compounds, checking them against a huge database of parameters, including a patient’s genetic factors. Speaking to the BBC, Exscientia chief executive Professor Andrew Hopkins described the trials as a “key milestone in drug discovery” and noted that there are “billions” of decisions needed to find the right molecules for a drug, making their eventual creation a “huge decision.” With AI, however, “the beauty of the algorithm is that they are agnostic, so can be applied to any disease.”
We’ve already seen multiple examples of AI being used to diagnose illness and analyze patient data, so using it to engineer drug treatment is an obvious progression of its place in medicine. But the AI-created drugs do pose some pertinent questions. Will patients be comfortable taking medication designed by a machine? How will these drugs differ from those developed by humans alone? Who will make the rules for the use of AI in drug research? Hopkins and his team hope that these and myriad other questions will be explored in the trials, which will begin in March.
The hidden secret of artificial intelligence is that much of it is actually powered by humans. Well, to be specific, the supervised learning algorithms that have gained much of the attention recently depend on humans to provide well-labeled training data. Machines can't teach themselves (yet); they have to be taught first, and that job falls to humans. This is the secret Achilles' heel of AI: the need for humans to teach machines the things they are not yet able to learn on their own.
Machine learning is what powers today's AI systems. Organizations are implementing one or more of the seven patterns of AI, including computer vision, natural language processing, predictive analytics, autonomous systems, pattern and anomaly detection, goal-driven systems, and hyperpersonalization, across a wide range of applications. However, for these systems to make accurate generalizations, they must be trained on data. The more advanced forms of machine learning, especially deep learning neural networks, require significant volumes of data to build models with the desired level of accuracy. It goes without saying, then, that the training data needs to be clean, accurate, complete, and well-labeled so that the resulting models are accurate. Garbage in, garbage out has always been true in computing, and it is especially true of machine learning data.
According to analyst firm Cognilytica, over 80% of AI project time is spent preparing and labeling data for use in machine learning projects.
If you’re interested in mind uploading, then I have an excellent article to recommend. This wide-ranging article is focused on neuromorphic computing and has sections on memristors. Here is a key excerpt:
“…Perhaps the most exciting emerging AI hardware architectures are the analog crossbar approaches since they achieve parallelism, in-memory computing, and analog computing, as described previously. Among most of the AI hardware chips produced in roughly the last 15 years, an analog memristor crossbar-based chip is yet to hit the market, which we believe will be the next wave of technology to follow. Of course, incorporating all the primitives of neuromorphic computing will likely require hardware solutions even beyond analog memristor crossbars…”
Here’s a web link to the research paper:
Computers have undergone tremendous improvements in performance over the last 60 years, but those improvements have significantly slowed down over the last decade, owing to fundamental limits in the underlying computing primitives. However, the generation of data and demand for computing are increasing exponentially with time. Thus, there is a critical need to invent new computing primitives, both hardware and algorithms, to keep up with the computing demands. The brain is a natural computer that outperforms our best computers in solving certain problems, such as instantly identifying faces or understanding natural language. This realization has led to a flurry of research into neuromorphic or brain-inspired computing that has shown promise for enhanced computing capabilities. This review points to the important primitives of a brain-inspired computer that could drive another decade-long wave of computer engineering.
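As a rough illustration of the in-memory, analog matrix-vector multiplication described in the excerpt above, here is a small idealized sketch: weights are stored as conductances in a crossbar, input voltages are applied to the rows, and the column currents are the multiply-accumulate result. The numbers are invented, and the model ignores the device non-idealities (wire resistance, noise, drift, limited precision) that make real memristor crossbars hard to bring to market.

```python
import numpy as np

def crossbar_mvm(conductances, voltages):
    """Ideal analog crossbar: applying voltages to the rows of a conductance
    matrix yields column currents I = G^T V by Ohm's and Kirchhoff's laws,
    i.e. a matrix-vector multiply performed in memory in a single step."""
    return conductances.T @ voltages

# Hypothetical 4x3 crossbar: each cell's conductance stores one weight.
G = np.array([[0.1, 0.5, 0.2],
              [0.3, 0.1, 0.4],
              [0.2, 0.2, 0.1],
              [0.4, 0.3, 0.3]])      # conductances in siemens (illustrative values)
V = np.array([0.8, 0.2, 0.5, 0.1])   # input voltages applied to the rows

print("column currents:", crossbar_mvm(G, V))
```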