
This Cyber Security Service Utilizes Artificial Intelligence


As everyday technologies grow more and more advanced, cyber security must be a priority for every customer. Cyber security services have become common and are widely used by private companies and the public sector to protect against potential cyber attacks.

One of these services, Darktrace, has recently acquired Cybersprint, a Dutch provider of advanced cyber security services and a maker of tools that use machine learning algorithms to detect cyber vulnerabilities. Based on attack-path modelling and graph theory, Darktrace's platform represents an organization's network as a directed, weighted graph, with nodes standing for assets and edges for the connections between them. The edge weights can then be used to estimate the probability that an attacker will successfully move from node A to node B, and the insights gained make it easier for Darktrace to simulate future attacks.
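The article does not describe Darktrace's actual algorithms, but the underlying idea can be sketched in a few lines. In the toy model below, a network is a weighted directed graph whose edge weights are assumed probabilities of lateral movement, and the most likely attack route is found by enumerating simple paths; all node names and probabilities are invented for illustration.

```python
def best_attack_path(graph, src, dst, prob=1.0, path=None):
    """Enumerate simple paths src -> dst; return (probability, path) of the most likely route."""
    path = (path or []) + [src]
    if src == dst:
        return prob, path
    best_p, best_route = 0.0, None
    for nxt, p in graph.get(src, {}).items():
        if nxt not in path:                          # never revisit a compromised node
            cand_p, cand_route = best_attack_path(graph, nxt, dst, prob * p, path)
            if cand_p > best_p:
                best_p, best_route = cand_p, cand_route
    return best_p, best_route

# hypothetical network: edge weight = probability the attacker traverses that edge
graph = {
    "workstation": {"fileserver": 0.6, "mailserver": 0.3},
    "fileserver":  {"domain_controller": 0.4},
    "mailserver":  {"domain_controller": 0.5},
    "domain_controller": {},
}
prob, route = best_attack_path(graph, "workstation", "domain_controller")
```

Multiplying edge probabilities along a path gives the chance that every hop succeeds; here the route via the file server (0.6 × 0.4 = 0.24) beats the route via the mail server (0.3 × 0.5 = 0.15).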

Towards the interpretability of deep learning models for multi-modal neuroimaging: Finding structural changes of the ageing brain

Brain-age (BA) estimates based on deep learning are increasingly used as a neuroimaging biomarker for brain health; however, the underlying neural features have remained unclear. We combined ensembles of convolutional neural networks with Layer-wise Relevance Propagation (LRP) to detect which brain features contribute to BA. Trained on magnetic resonance imaging (MRI) data of a population-based study (n = 2,637, 18–82 years), our models estimated age accurately based on single and multiple modalities, regionally restricted and whole-brain images (mean absolute errors 3.37–3.86 years). We find that BA estimates capture ageing at both small and large scales, revealing gross enlargements of ventricles and subarachnoid spaces, as well as white matter lesions and atrophies that appear throughout the brain. Divergence from expected ageing reflected cardiovascular risk factors, and accelerated ageing was more pronounced in the frontal lobe. Applying LRP, our study demonstrates how superior deep learning models detect brain-ageing in healthy and at-risk individuals throughout adulthood.
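LRP works layer by layer: the network's output score is redistributed backwards, giving each input a "relevance" proportional to its contribution to the next layer's pre-activations. The sketch below shows the standard epsilon-rule for bias-free dense layers on a tiny two-layer ReLU network; the weights and inputs are invented for illustration and have nothing to do with the authors' MRI models.

```python
import numpy as np

def lrp_dense(a, W, R_out, eps=1e-9):
    """Epsilon-rule LRP for a bias-free dense layer: redistribute R_out onto the inputs."""
    z = a @ W                            # pre-activations of the layer
    s = R_out / (z + eps * np.sign(z))   # relevance per unit of pre-activation
    return a * (s @ W.T)                 # each input's share of the relevance

# toy two-layer ReLU network (illustrative weights, not the paper's models)
a0 = np.array([1.0, 2.0])                # input "features"
W1 = np.array([[0.5, -0.2],
               [0.1,  0.4]])
W2 = np.array([[1.0],
               [0.5]])

a1 = np.maximum(0.0, a0 @ W1)            # hidden activations
y = a1 @ W2                              # output, e.g. a brain-age score

R2 = y                                   # relevance starts as the output itself
R1 = lrp_dense(a1, W2, R2)               # relevance of hidden units
R0 = lrp_dense(a0, W1, R1)               # relevance of each input feature
```

A useful sanity check is conservation: with zero biases and a tiny epsilon, the relevance arriving at the inputs sums to the network's output, so the heatmap is an exact decomposition of the prediction.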

Detecting Proof Of Life In Mars Samples May Be Well-Nigh Impossible

Finding definitive evidence for past primitive life in ancient Mars rock and soil samples may be well-nigh impossible, renowned geologist and astrobiologist Frances Westall told me at the recent Europlanet Science Congress (EPSC) in Granada, Spain. And she should know: Westall is credited with the discovery of Earth's oldest-known microfossils, dating back some 3.45 billion years.

But it’s hard enough to identify primitive microfossils in Earth’s oldest rocks, much less in samples collected robotically on Mars. If we have a hard time identifying past life on Earth, what hope do we have of doing it with Mars samples?

“I think it’s going to be really difficult,” said Westall, a researcher at France’s Center for Molecular Biophysics in Orleans. “I can tell you, there’s going to be a lot of arguments about it.”

Bioinspired robots walk, swim, slither and fly

Such robotic schools could be tasked with locating and recording data on coral reefs to help researchers study the reefs’ health over time. Just as living fish in a school might engage in different behaviours simultaneously — some mating, some caring for young, others finding food — but suddenly move as one when a predator approaches, robotic fish would have to perform individual tasks while communicating with each other when it’s time to do something different.

“The majority of what my lab really looks at is the coordination techniques — what kinds of algorithms have evolved in nature to make systems work well together?” she says.

Many roboticists are looking to biology for inspiration in robot design, particularly in the area of locomotion. Although big industrial robots in vehicle factories, for instance, remain anchored in place, other robots will be more useful if they can move through the world, performing different tasks and coordinating their behaviour.

Posits, a New Kind of Number, Improves the Math of AI

Training the large neural networks behind many modern AI tools requires real computational might: For example, OpenAI’s most advanced language model, GPT-3, required an astounding million billion billion operations to train, and cost about US $5 million in compute time. Engineers think they have figured out a way to ease the burden by using a different way of representing numbers.

Back in 2017, John Gustafson, then jointly appointed at A*STAR Computational Resources Centre and the National University of Singapore, and Isaac Yonemoto, then at Interplanetary Robot and Electric Brain Co., developed a new way of representing numbers. These numbers, called posits, were proposed as an improvement over the standard floating-point arithmetic processors used today.

Now, a team of researchers at the Complutense University of Madrid has developed the first processor core implementing the posit standard in hardware and showed that, bit-for-bit, the accuracy of a basic computational task increased by up to four orders of magnitude compared with computing using standard floating-point numbers. They presented their results at last week’s IEEE Symposium on Computer Arithmetic.
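To make the format concrete: a posit packs a sign bit, a variable-length "regime" (a run of identical bits that sets a coarse scale), a fixed number of exponent bits (es = 2 in the 2022 posit standard), and whatever bits remain as the fraction. The sketch below decodes 8-bit posits this way; it is an illustrative software decoder, not the Madrid group's hardware.

```python
def decode_posit8(bits, es=2, n=8):
    """Decode an n-bit posit (default 8-bit, es=2 as in the 2022 posit standard)."""
    if bits == 0:
        return 0.0
    if bits == 1 << (n - 1):
        return float("nan")                      # NaR, "not a real"
    sign = bits >> (n - 1)
    if sign:
        bits = (-bits) & ((1 << n) - 1)          # negatives use two's complement
    body = bits & ((1 << (n - 1)) - 1)           # everything after the sign bit
    # regime: length of the run of identical bits after the sign
    rbit = (body >> (n - 2)) & 1
    i, m = n - 2, 0
    while i >= 0 and ((body >> i) & 1) == rbit:
        m += 1
        i -= 1
    k = m - 1 if rbit else -m                    # regime value
    i -= 1                                       # skip the regime terminator bit
    e = 0
    for _ in range(es):                          # exponent bits (may be cut off)
        e = (e << 1) | (((body >> i) & 1) if i >= 0 else 0)
        i -= 1
    frac = (body & ((1 << (i + 1)) - 1)) / (1 << (i + 1)) if i >= 0 else 0.0
    value = (1.0 + frac) * 2.0 ** (k * (1 << es) + e)
    return -value if sign else value
```

Because the regime is variable-length, posits near 1.0 keep many fraction bits (fine precision where neural-network values cluster), while very large and very small values trade fraction bits for range — the "tapered accuracy" that motivates using posits for AI arithmetic.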

A computational shortcut for neural networks

Neural networks are learning algorithms that approximate the solution to a task by training with available data. However, it is usually unclear how exactly they accomplish this. Two young Basel physicists have now derived mathematical expressions that allow one to calculate the optimal solution without training a network. Their results not only give insight into how those learning algorithms work, but could also help to detect unknown phase transitions in physical systems in the future.

Neural networks are based on the principle of operation of the brain. Such computer algorithms learn to solve problems through repeated training and can, for example, distinguish objects or process spoken language.

For several years now, physicists have been trying to use neural networks to detect phase transitions as well. Phase transitions are familiar to us from everyday experience, for instance when water freezes to ice, but they also occur in more complex form between different phases of magnetic materials or other physical systems, where they are often difficult to detect.
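The Basel result is about computing such classifiers analytically, without training; for contrast, here is a minimal sketch of the usual trained-network approach it improves on. A one-neuron logistic "network" learns to assign toy spin configurations to one of two phases from their magnetization. The sampling model and all parameters are invented for illustration and are not the physicists' setup.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_config(h, n=64):
    """Toy 'spin configuration': independent +/-1 spins biased by an order parameter h."""
    p_up = 0.5 + 0.5 * np.tanh(4.0 * h)
    return rng.choice([-1.0, 1.0], size=n, p=[1.0 - p_up, p_up])

# two toy phases: spins mostly down (h < 0) vs mostly up (h > 0)
hs = rng.uniform(0.2, 1.0, size=200) * rng.choice([-1.0, 1.0], size=200)
X = np.array([sample_config(h).mean() for h in hs])   # feature: mean magnetization
y = (hs > 0).astype(float)                            # phase label

# one-neuron logistic "network", trained by plain gradient descent
w, b = 0.0, 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(w * X + b)))            # predicted phase probability
    grad = p - y                                      # gradient of the logistic loss
    w -= 0.5 * np.mean(grad * X)
    b -= 0.5 * np.mean(grad)

accuracy = np.mean(((1.0 / (1.0 + np.exp(-(w * X + b)))) > 0.5) == (y > 0.5))
```

In this toy problem the optimal decision rule is obvious (the sign of the magnetization), which is exactly the kind of closed-form answer the Basel physicists derive for realistic systems, where the right indicator is far from obvious.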