
Natural language processing continues to find its way into unexpected corners. This time, it’s phishing emails. In a small study, researchers found that they could use the deep learning language model GPT-3, along with other AI-as-a-service platforms, to significantly lower the barrier to entry for crafting spearphishing campaigns at a massive scale.

Researchers have long debated whether it would be worth the effort for scammers to train machine learning algorithms that could then generate compelling phishing messages. Mass phishing messages are simple and formulaic, after all, and are already highly effective. Highly targeted and tailored “spearphishing” messages are more labor intensive to compose, though. That’s where NLP may come in surprisingly handy.

At the Black Hat and Defcon security conferences in Las Vegas this week, a team from Singapore’s Government Technology Agency presented a recent experiment in which they sent 200 of their colleagues targeted phishing emails, some crafted by hand and others generated by an AI-as-a-service platform. Both sets of messages contained links that were not actually malicious but simply reported clickthroughs back to the researchers. They were surprised to find that more people clicked the links in the AI-generated messages than in the human-written ones, by a significant margin.
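For a rough sense of how a result like that can be checked statistically, here is a minimal sketch of a two-proportion z-test on clickthrough counts. The counts below are hypothetical, since the study only reported the overall outcome; only the 200-recipient total comes from the article.

```python
# A minimal sketch of comparing two clickthrough rates with a
# two-proportion z-test. The click counts are hypothetical.
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(clicks_a, n_a, clicks_b, n_b):
    """Two-sided z-test for a difference between two clickthrough rates."""
    p_a, p_b = clicks_a / n_a, clicks_b / n_b
    pooled = (clicks_a + clicks_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Hypothetical example: 100 recipients per arm, AI-written vs hand-written.
z, p = two_proportion_z_test(clicks_a=46, n_a=100, clicks_b=28, n_b=100)
print(f"z = {z:.2f}, p = {p:.4f}")
```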

No, it’s not forbidden to innovate; quite the opposite. But it’s always risky to do something different from what people are used to. Risk is the middle name of the bold, the builders of the future. Those who constantly face resistance from skeptics. Those who fail eight times and get up nine.


Fernando Pessoa’s slogan “First you find it strange. Then you can’t get enough of it” contained an intolerable level of toxicity for Salazar’s Estado Novo in Portugal. When the level of difference increases, censorship follows. You can’t censor censorship (or can you?) when, deep down, it’s a matter of fear of difference. Yes, it’s fear! Fear of accepting and facing the unknown. Fear of change.

What do I mean by this? Well, I may seem weird or strange with the ideas and actions I take in life, but within my weirdness there is a kind of “Eye of Agamotto” (sometimes a curse for me)… What I see is authentic and vivid. Sooner or later, the future I glimpse passes into this reality.

Transformer-based deep learning models like GPT-3 have been getting much attention in the machine learning world. These models excel at understanding semantic relationships, and they have contributed to large improvements in Microsoft Bing’s search experience. However, these models can fail to capture more nuanced relationships between query and document terms beyond pure semantics.
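As an illustration of what “understanding semantic relationships” means in practice, here is a minimal sketch of semantic query-document matching. It uses the open-source sentence-transformers library as a stand-in; Bing’s production models are not publicly available, and the model name below is just a common public checkpoint.

```python
# A minimal sketch of semantic matching with transformer embeddings,
# using open-source sentence-transformers as a stand-in for Bing's
# production models (which are not public).
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

query = "how to stop a laptop from overheating"
documents = [
    "Tips for cooling down an overheating notebook computer",
    "Best laptops for gaming in 2021",
]

q_emb = model.encode(query, convert_to_tensor=True)
d_emb = model.encode(documents, convert_to_tensor=True)

# Cosine similarity captures semantic relatedness even when the query
# and document share few exact words.
print(util.cos_sim(q_emb, d_emb))
```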

The Microsoft team of researchers developed a neural network with 135 billion parameters, the largest “universal” artificial intelligence model they have running in production. The large parameter count makes this one of the most sophisticated AI models ever detailed publicly. OpenAI’s GPT-3 natural language processing model has 175 billion parameters and remains the world’s largest neural network built to date.

Microsoft researchers are calling their latest AI project MEB (Make Every Feature Binary). The 135-billion parameter machine is built to analyze queries that Bing users enter. It then helps identify the most relevant pages from around the web, working alongside a set of other machine learning algorithms rather than performing the task entirely on its own.
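Microsoft has not published MEB’s implementation, but the core “make every feature binary” idea can be sketched: each (query token, document token) pair becomes a single 0/1 feature in a huge sparse space, and relevance is a learned weight summed over the features that fire. Everything below is an illustrative toy, not MEB itself.

```python
# A toy sketch of binary cross-features for query-document relevance.
# MEB reportedly uses billions of such features; the hashed space here
# is deliberately tiny, and the weights are untrained placeholders.
import numpy as np

N_FEATURES = 2**20

def binary_cross_features(query, doc):
    """Return indices of active binary (query-token x doc-token) features."""
    idx = {
        hash((q, d)) % N_FEATURES
        for q in query.lower().split()
        for d in doc.lower().split()
    }
    return np.fromiter(idx, dtype=np.int64)

weights = np.zeros(N_FEATURES)  # one learned weight per binary feature

def relevance_score(query, doc):
    # A linear model over binary features is just the sum of the
    # weights of the features that fire for this query-document pair.
    return weights[binary_cross_features(query, doc)].sum()

print(relevance_score("laptop overheating", "cooling an overheating notebook"))
```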

Summary: Blood tests revealed specific epigenetic biomarkers for schizophrenia. Researchers applied machine learning to analyze the CoRSIVs region of the human genome to identify the schizophrenia biomarkers. Testing the model with an independent data set revealed that the AI technology can detect schizophrenia with 80% accuracy.

Source: Baylor College of Medicine.

An innovative strategy that analyzes a region of the genome offers the possibility of early diagnosis of schizophrenia, reports a team led by researchers at Baylor College of Medicine. The strategy applied a machine learning algorithm called SPLS-DA to analyze specific regions of the human genome called CoRSIVs, hoping to reveal epigenetic markers for the condition.
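scikit-learn does not ship SPLS-DA, but the closely related PLS-DA can be improvised from its PLSRegression class. The sketch below runs on synthetic methylation-like data and is only meant to illustrate the modeling setup, not to reproduce the study or its 80% figure.

```python
# A minimal PLS-DA-style classifier on synthetic methylation-like data.
# The study used sparse PLS-DA (SPLS-DA) on CoRSIV methylation values;
# plain PLSRegression with binary labels is a simplified stand-in.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 500))        # 200 subjects x 500 CoRSIV-like features
y = rng.integers(0, 2, size=200)       # 1 = case, 0 = control (synthetic)
X[y == 1, :10] += 0.8                  # plant a weak signal in 10 features

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

plsda = PLSRegression(n_components=2).fit(X_tr, y_tr)
y_pred = (plsda.predict(X_te).ravel() > 0.5).astype(int)
print("held-out accuracy:", (y_pred == y_te).mean())
```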

The authors of a new DeepMind paper said they’ve created an endlessly challenging virtual playground for AI. The world, called XLand, is a vibrant video game managed by an AI overlord and populated by algorithms that must learn the skills to navigate it.

The game-managing AI keeps an eye on what the game-playing algorithms are learning and automatically generates new worlds, games, and tasks to continuously confront them with new experiences.

The team said some veteran algorithms faced 3.4 million unique tasks while playing around 700,000 games in 4,000 XLand worlds. But most notably, they developed a general skillset not related to any one game, but useful in all of them.
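XLand itself is not public, but the open-ended loop the paper describes, a manager that keeps generating tasks for an ever-learning agent, can be caricatured in a few lines. All names and parameters below are illustrative.

```python
# A toy sketch of open-ended task generation: a "world manager"
# procedurally samples new (world, game) combinations and an agent
# trains on each. The RL update itself is stubbed out.
import random

GOALS = ["hold the purple cube", "reach the yellow floor", "avoid the red agent"]
TERRAINS = ["flat", "hilly", "maze"]

def generate_task(rng):
    """Procedurally sample a new world and game for the agent."""
    return {
        "terrain": rng.choice(TERRAINS),
        "n_objects": rng.randint(1, 8),
        "goal": rng.choice(GOALS),
    }

def train_step(agent_state, task):
    # Placeholder for a reinforcement learning update; a real agent
    # would act in the generated world and learn from reward.
    agent_state["tasks_seen"] += 1
    return agent_state

rng = random.Random(0)
agent = {"tasks_seen": 0}
for _ in range(1000):                  # the paper cites millions of tasks
    agent = train_step(agent, generate_task(rng))
print(agent)
```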

Robots today can be programmed to vacuum the floor or perform a preset dance, but there is still much work to be done before they can achieve their full potential. That is mainly because robots can’t recognize what is in their environment at a deep level, and therefore can’t function properly without being told all of these details by humans. For instance, a robot can be programmed to back up when it bumps into an object, which helps prevent unwanted collisions from happening again, but that behavior isn’t based on understanding anything about chairs, because the robot doesn’t know exactly what one is!

Facebook’s AI team just released Droidlet, a new platform that makes it easier for anyone to build their own smart robot. It’s an open-source project explicitly designed with hobbyists and researchers in mind, so you can quickly prototype your AI algorithms without having to spend countless hours coding everything from scratch.

Droidlet is a platform for building embodied agents capable of recognizing, reacting to, and navigating the world. It simplifies integrating all kinds of state-of-the-art machine learning algorithms in these systems so that users can prototype new ideas faster than ever before!
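Setting aside Droidlet’s actual module names (the facebookresearch/droidlet repository has the real ones), the perceive-remember-act loop such an embodied agent runs looks roughly like the sketch below, with hypothetical plug-in perception and controller functions standing in for swappable ML components.

```python
# A generic perceive-remember-act loop of the kind a Droidlet-style
# agent runs. Illustrative only; this is not Droidlet's actual API.
from dataclasses import dataclass, field

@dataclass
class Memory:
    objects: list = field(default_factory=list)   # what the agent has seen

class EmbodiedAgent:
    def __init__(self, perception, controller):
        self.perception = perception      # e.g., a swappable vision model
        self.controller = controller      # maps remembered state to actions
        self.memory = Memory()

    def step(self, observation):
        detections = self.perception(observation)   # recognize
        self.memory.objects.extend(detections)      # remember
        return self.controller(self.memory)         # react / navigate

# Hypothetical plug-ins: any detector and policy with these signatures work.
agent = EmbodiedAgent(
    perception=lambda obs: [o for o in obs if o != "unknown"],
    controller=lambda mem: "move_forward" if mem.objects else "scan",
)
print(agent.step(["chair", "unknown", "door"]))
```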

“We believe this work represents the most significant contribution AI has made to advancing the state of scientific knowledge to date,” said Demis Hassabis, DeepMind’s CEO and co-founder. “And I think it’s a great illustration and example of the kind of benefits AI can bring to society. We’re just so excited to see what the community is going to do with this.”


AlphaFold is an artificial intelligence (AI) program that uses deep learning to predict the 3D structure of proteins. Developed by DeepMind, a London-based subsidiary of Alphabet, it made headlines in November 2020 when it competed in the Critical Assessment of Structure Prediction (CASP). This worldwide challenge is held every two years by the scientific community and is the most well-known protein modelling benchmark. Participants must “blindly” predict the 3D structures of different proteins, and their computational methods are subsequently compared with real-world laboratory results.

The CASP challenge has been held since 1994 and uses a metric known as the Global Distance Test (GDT), ranging from 0 to 100. Winners in previous years had tended to hover around the 30 to 40 mark, with a score of 90 considered to be equivalent to an experimentally determined result. In 2018, however, the team at DeepMind achieved a median of 58.9 for the GDT and an overall score of 68.5 across all targets, by far the highest of any algorithm.
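The GDT score itself is simple to state: the percentage of residues whose predicted positions fall within 1, 2, 4 and 8 angstroms of the experimental structure, averaged over the four cutoffs. Below is a minimal sketch that assumes the two structures are already superimposed; real CASP scoring also optimizes the superposition.

```python
# A minimal sketch of the GDT_TS metric, assuming the predicted and
# experimental structures are already optimally aligned.
import numpy as np

def gdt_ts(pred, ref):
    """pred, ref: (n_residues, 3) arrays of C-alpha coordinates in angstroms."""
    dists = np.linalg.norm(pred - ref, axis=1)
    cutoffs = (1.0, 2.0, 4.0, 8.0)
    # Fraction of residues within each cutoff, averaged, as a percentage.
    return 100.0 * np.mean([(dists <= c).mean() for c in cutoffs])

# Toy example: a prediction off by ~1.5 angstroms on every residue
# passes the 2, 4 and 8 A cutoffs but fails the 1 A cutoff -> GDT_TS = 75.
ref = np.random.default_rng(0).normal(size=(100, 3))
pred = ref + 1.5 / np.sqrt(3)
print(f"GDT_TS = {gdt_ts(pred, ref):.1f}")
```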

Then in 2020, version 2.0 of the AlphaFold program competed in CASP, winning once again, this time with even greater accuracy. AlphaFold 2.0 achieved a median GDT of 92.4 across all targets, with its average margin of error comparable to the width of an atom (0.16 nanometres). Andrei Lupas, a biologist at the Max Planck Institute in Germany who assessed the performances of each team in CASP, said of AlphaFold: “This will change medicine. It will change research. It will change bioengineering. It will change everything.”

Granted, it’s a little different for a robot, since robots don’t have lungs or a heart. But they do have a “brain” (software), “muscles” (hardware), and “fuel” (a battery), and these all had to work together for Cassie to be able to run.

The brunt of the work fell to the brain—in this case, a machine learning algorithm developed by students at Oregon State University’s Dynamic Robotics Laboratory. Specifically, they used deep reinforcement learning, a method that mimics the way humans learn from experience by using a trial-and-error process guided by feedback and rewards. Over many repetitions, the algorithm uses this process to learn how to accomplish a set task. In this case, since it was trying to learn to run, it may have tried moving the robot’s legs varying distances or at distinct angles while keeping it upright.
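The “feedback and rewards” part is typically encoded in a reward function. The Oregon State team’s actual reward is not reproduced here; the sketch below only illustrates the general shape such a function takes for running, with hypothetical terms and weights: reward forward speed, penalize falling over and wasted effort.

```python
# An illustrative locomotion reward of the kind used in deep
# reinforcement learning. Terms and weights are hypothetical, not the
# Dynamic Robotics Laboratory's actual reward.
def running_reward(forward_velocity, torso_pitch, joint_torques):
    upright_bonus = 1.0 if abs(torso_pitch) < 0.4 else -5.0    # stay upright
    effort_penalty = 1e-3 * sum(t * t for t in joint_torques)  # save energy
    return forward_velocity + upright_bonus - effort_penalty

# Each trial the policy tries slightly different actions; the update rule
# nudges it toward actions that earned higher cumulative reward.
print(running_reward(forward_velocity=2.5, torso_pitch=0.1,
                     joint_torques=[10.0, -8.0, 12.0]))
```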

Once Cassie got a good gait down, completing the 5K was as much a matter of battery life as running prowess. The robot covered the whole distance (a course circling around the university campus) on a single battery charge in just over 53 minutes, but that did include six and a half minutes of troubleshooting; the computer had to be reset after it overheated, as well as after Cassie fell during a high-speed turn. But hey, an overheated computer getting reset isn’t so different from a human runner pausing to douse their head and face with a cup of water to cool off, or chug some water to rehydrate.

Researchers at the University of Sydney and quantum control startup Q-CTRL today announced a way to identify sources of error in quantum computers through machine learning, providing hardware developers the ability to pinpoint performance degradation with unprecedented accuracy and accelerate paths to useful quantum computers.

A joint scientific paper detailing the research, titled “Quantum Oscillator Noise Spectroscopy via Displaced Cat States,” has been published in Physical Review Letters, the world’s premier physical science research journal and flagship publication of the American Physical Society (APS Physics).

Focused on reducing errors caused by environmental “noise,” the Achilles’ heel of quantum computers, the University of Sydney team developed a technique to detect the tiniest deviations from the precise conditions needed to execute quantum algorithms using trapped-ion and superconducting quantum computing hardware. These are the core technologies used by world-leading industrial quantum computing efforts at IBM, Google, Honeywell, IonQ, and others.
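The published technique, based on “displaced cat states” in a trapped ion, is far more involved than anything that fits here, but the underlying signal-processing idea, locating the spectral peaks that identify a noise source, can be sketched with an ordinary Welch power-spectrum estimate on synthetic data:

```python
# A minimal sketch of noise spectroscopy in the signal-processing sense:
# estimate a power spectral density from a noisy record and find its
# dominant peak. The data and sample rate are synthetic toy values.
import numpy as np
from scipy.signal import welch

fs = 1_000.0                                   # sample rate, Hz
t = np.arange(0, 10, 1 / fs)
# Synthetic "noise" record: white background plus a 60 Hz interferer.
x = (np.random.default_rng(0).normal(size=t.size)
     + 0.5 * np.sin(2 * np.pi * 60 * t))

freqs, psd = welch(x, fs=fs, nperseg=2048)
print(f"dominant noise component near {freqs[np.argmax(psd)]:.0f} Hz")
```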