
Efficient single-winged aerial robots with reduced energy consumption

Flying robotic systems have already proved to be highly promising for tackling numerous real-world problems, including explorations of remote environments, the delivery of packages in inaccessible sites, and searches for survivors of natural disasters. In recent years, roboticists and computer scientists have introduced a multitude of aerial vehicle designs, each with distinct advantages and features.

Researchers at Sharif University of Technology in Iran recently carried out a study exploring the potential of flying with a single wing, known as mono-wing aerial vehicles. Their paper, published in the Journal of Intelligent & Robotic Systems, outlines a new approach that could help to better control the flight of these vehicles as they navigate their surrounding environment.

“Unconventional vehicles inspired by natural phenomena consistently captivate the attention of engineers,” Afshin Banazadeh, one of the researchers who carried out the study, told Tech Xplore. “One such vehicle, the mono-wing, a single-bladed aerial vehicle, is no exception.”

How Generative AI Will Transform Cybersecurity

One of the most promising developments in the fight against cybersecurity threats is the use of artificial intelligence (AI). This cutting-edge technology has the potential to revolutionize the way organizations manage cyberthreats, offering unprecedented levels of protection and adaptability. AI is set to be embedded into every security product, enabling organizations to quickly remediate attacks and stay ahead of the threat landscape. However, bad actors are equally interested in unlocking the power of AI to easily launch sophisticated and targeted attacks.

The convergence of AI and cybersecurity will create opportunities and challenges for organizations. In this blog post, we will delve into the transformative impact that AI will have on cybersecurity, explore its potential to empower organizations to stay ahead of threats, and examine the ways bad actors could use it for their own nefarious purposes.

By harnessing the power of AI while remaining vigilant to its potential misuse, organizations can stay ahead of emerging threats and better protect their valuable applications, APIs, and data.

New robot searches for solar cell materials 14 times faster

Earlier this year, two-layer solar cells broke records with 33 percent efficiency. The cells are made of a combination of silicon and a material called a perovskite. However, these tandem solar cells are still far from the theoretical limit of around 45 percent efficiency, and they degrade quickly under sun exposure, which limits their usefulness.

The process of improving tandem solar cells involves the search for the perfect materials to layer on top of each other, with each capturing some of the sunlight the other is missing. One potential class of materials for this is perovskites, which are defined by their peculiar rhombus-in-a-cube crystal structure. This structure can be adopted by many chemicals in a variety of proportions. To make a good candidate for tandem solar cells, the combination of chemicals needs to have the right bandgap—the property responsible for absorbing the right part of the sun’s spectrum—be stable at normal temperatures, and, most challengingly, not degrade under illumination.

The number of possible perovskite materials is vast, and predicting the properties that a given chemical composition will have is very difficult. Trying all the possibilities out in the lab is prohibitively costly and time-consuming. To accelerate the search for the ideal perovskite, researchers at North Carolina State University decided to enlist the help of robots.
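The screening problem described above can be sketched in a few lines. This is a purely illustrative toy, not the NC State team's actual workflow: the linear bandgap model, the iodide-fraction parameterisation, and the target window are all assumptions chosen for the example (roughly inspired by mixed bromide/iodide perovskites and the ~1.7 eV bandgap often quoted for the top cell of a silicon tandem).

```python
# Toy composition screen: sweep a mixing parameter, keep candidates whose
# predicted bandgap falls in a target window. All numbers are illustrative.

def predicted_bandgap(iodide_fraction):
    """Hypothetical surrogate model: bandgap (eV) of a mixed-halide
    perovskite as a linear interpolation between a bromide-rich
    (~2.3 eV) and an iodide-rich (~1.6 eV) endpoint."""
    return 2.3 - 0.7 * iodide_fraction

# Assumed target window for the top cell of a silicon tandem
TARGET_LOW, TARGET_HIGH = 1.65, 1.75

# Candidate iodide fractions to "mix", in 5% steps
candidates = [i / 100 for i in range(0, 101, 5)]

hits = [x for x in candidates
        if TARGET_LOW <= predicted_bandgap(x) <= TARGET_HIGH]
print(hits)  # iodide fractions worth sending to the robot for synthesis
```

In a real autonomous lab the surrogate model would be refined after each robotic synthesis-and-measurement cycle, shrinking the candidate list far faster than exhaustive trial and error.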

William Shatner, Star Trek’s Captain Kirk, takes on an AI chatbot

LOS ANGELES, Aug 24 (Reuters) — Legendary “Star Trek” actor William Shatner has been spending time exploring the new frontier of artificial intelligence.

The actor best known for playing Captain Kirk on “Star Trek” talked with ProtoBot, a device that combines holographic visuals with conversational AI, and grappled with philosophical and ethical questions about the technology.

“I’m asking ProtoBot questions that ordinarily a computer doesn’t answer,” Shatner told Reuters. “A computer answers two plus two, but does ProtoBot know what love is? Can ProtoBot understand sentience? Can they understand emotion? Can they understand fear?”

Meta releases Code Llama, a code-generating AI model

Meta, intent on making a splash in a generative AI space rife with competition, is on something of an open source tear.

Following the release of AI models for generating text, translating languages and creating audio, the company today open sourced Code Llama, a machine learning system that can generate and explain code in natural language — specifically English.

Akin to GitHub Copilot and Amazon CodeWhisperer, as well as open source AI-powered code generators like StarCoder, StableCode and PolyCoder, Code Llama can complete code and debug existing code across a range of programming languages, including Python, C++, Java, PHP, Typescript, C# and Bash.
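Code completion models like Code Llama are often used in "fill-in-the-middle" mode, where the model sees the code before and after the cursor. A minimal sketch of building such a prompt is below; the `<PRE>`/`<SUF>`/`<MID>` sentinel tokens follow the format described in Meta's Code Llama release, but the exact spacing conventions can differ between model variants, so treat this as an assumption to verify against the model card you use.

```python
def infill_prompt(prefix: str, suffix: str) -> str:
    """Assemble a fill-in-the-middle prompt in the style of Code Llama's
    infilling format. The sentinel tokens and their ordering are taken
    from Meta's published format; spacing may vary by model variant."""
    return f"<PRE> {prefix} <SUF>{suffix} <MID>"

# The model would be asked to generate the code that belongs between
# the prefix and the suffix (here, the body of `add`).
prompt = infill_prompt(
    prefix="def add(a, b):\n    return ",
    suffix="\n\nprint(add(2, 3))",
)
print(prompt)
```

The string returned here would then be fed to the model (for example via an inference server or the Hugging Face `transformers` library), which generates the middle segment until it emits an end-of-infill token.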

Machine learning is revolutionising our understanding of particle “jets”

What happens when – instead of recording a single particle track or energy deposit in your detector – you see a complex collection of many particles, with many tracks, that leaves a large amount of energy in your calorimeters? Then congratulations: you’ve recorded a “jet”! Jets are the complicated experimental signatures left behind by showers of strongly-interacting quarks and gluons. By studying the internal energy flow of a jet – also known as the “jet substructure” – physicists can learn about the kind of particle that created it.

For instance, several hypothesised new particles could decay into heavy Standard Model particles at extremely high (or “boosted”) energies. These particles could then decay into multiple quarks, leaving behind “boosted”, multi-pronged jets in the ATLAS experiment. Physicists use “taggers” to distinguish these jets from background jets created by single quarks and gluons. The type of quarks produced in the jet can also give extra information about the original particle. For example, Higgs bosons and top quarks often decay to b-quarks – seen in ATLAS as “b-jets” – which can be distinguished from other kinds of jets using the long lifetime of the B-hadron.

The complexity of jets naturally lends itself to Artificial Intelligence (AI) algorithms, which can efficiently distil large amounts of information into accurate decisions. AI algorithms have been a regular part of ATLAS data analysis for several years, with ATLAS physicists continuously pushing these tools to new limits. This week, ATLAS physicists presented four exciting new results on jet tagging using AI algorithms at the BOOST 2023 conference, held at Lawrence Berkeley National Lab (USA).

Figure 1: Graphs showing the full declustering shower development and the primary Lund jet plane (in red), for a jet originating from a W-boson (left) and for a jet originating from a light quark (right). (Image: ATLAS Collaboration/CERN)

Artificial intelligence is revolutionising how ATLAS researchers identify – or “tag” – what types of particles create jets in the experiment. Two results showcased new ATLAS taggers for identifying jets coming from a boosted W-boson decay, as opposed to background jets originating from light quarks and gluons. Typically, AI algorithms are trained on “high-level” jet substructure information recorded by the ATLAS inner detector and calorimeters – such as the jet mass, energy correlation ratios and jet splitting scales. These new studies instead use “low-level” information from the same detectors – such as the direct kinematic properties of a jet’s constituents, or a novel two-dimensional parameterisation of the radiation within a jet (known as the “Lund jet plane”), built from the jet’s constituents using graphs based on the particle-shower development (see Figure 1). These new taggers separate the shapes of signal and background far more effectively than any high-level tagger could alone (see Figure 2). In particular, the Lund jet plane-based tagger outperforms the other methods by giving the AI networks the same input in a different format, one inspired by the physics of the jet shower development.

A similar evolution was followed in the development of a new boosted Higgs tagger, which identifies jets originating from boosted Higgs bosons decaying hadronically to two b-quarks or c-quarks. It also uses low-level information – in this case, tracks reconstructed in the inner detector and associated with the single jet containing the Higgs-boson decay. This is the most performant such tagger to date, representing a factor of 1.6 to 2.5 improvement, at 50% boosted-Higgs signal efficiency, over the previous version of the tagger, which used high-level information from the jet and b/c-quark decays as input to a neural network (see Figure 3).

Figure 2: Signal efficiency as a function of the background rejection for the different W-boson taggers: one is based on the Lund jet plane, while the others use unordered sets of particles or graphs with additional structure. (Image: ATLAS Collaboration/CERN)

Figure 3: Top and multijet rejections as a function of the H→bb signal efficiency. Performance of the new boosted Higgs tagger is compared to the previous taggers using high-level information from the jet b-quark decays. (Image: ATLAS Collaboration/CERN)

Finally, ATLAS researchers presented two new taggers that aim to differentiate between jets originating from quarks and those originating from gluons. One tagger uses the charged-particle constituent multiplicity of the jets being tagged, while the other combines several jet kinematic and jet substructure variables using a Boosted Decision Tree (BDT). Physicists compared the performance of these quark/gluon taggers; Figure 4 shows the rejection of gluon jets as a function of quark selection efficiency in simulation. Several studies of Standard Model processes – including vector boson fusion – and new physics searches with quark-rich signals could benefit greatly from these taggers. Before they can be used in analyses, however, additional corrections must be applied to the signal efficiency and background rejection, so that the taggers perform identically in data and simulation. Researchers measured both the efficiency and rejection rates for these taggers in Run-2 data and found good agreement between the measured data and predictions; therefore, only small corrections are needed.

The excellent performance of these new jet taggers does not come without questions. Crucially, how can researchers interpret what the machine-learning models have learned? And why do more complex architectures show a stronger dependence on the modelling of the simulated physics processes used for training, as seen in the two W-tagging studies?

Challenges aside, these taggers set an outstanding baseline for analysing LHC Run-3 data. Given the current strides being made in machine learning, its continued application to particle physics will hopefully deepen the understanding of jets and revolutionise the ATLAS physics programme in the years to come.

Figure 4: Signal efficiency as a function of the background rejection for different quark taggers. The use of machine learning (BDT) results in improved performance. (Image: ATLAS Collaboration/CERN)

Learn more:
- Tagging boosted W bosons with the Lund jet plane in ATLAS (ATL-PHYS-PUB-2023-017)
- Constituent-based W-boson tagging with the ATLAS detector (ATL-PHYS-PUB-2023-020)
- Transformer Neural Networks for Identifying Boosted Higgs Bosons decaying into bb and cc in ATLAS (ATL-PHYS-PUB-2023-021)
- Performance and calibration of quark/gluon-jet taggers using 140 fb−1 of proton–proton collisions at 13 TeV with the ATLAS detector (JETM-2020-02)
- Comparison of ML algorithms for boosted W boson tagging (JETM-2023-003)
- Summary of new ATLAS results from BOOST 2023, ATLAS News, 31 July 2023
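The simplest quark/gluon tagger described above – a cut on charged-particle multiplicity – and the efficiency/rejection metrics plotted in the ATLAS figures can be illustrated with a toy simulation. Everything here is an assumption for illustration: the Poisson multiplicity model, the mean multiplicities, and the cut value are invented, whereas real ATLAS taggers are built from full detector simulation.

```python
import math
import random

random.seed(42)

def sample_multiplicity(mean, n):
    """Draw n Poisson-distributed charged-particle multiplicities
    (Knuth's algorithm; fine for small means like these)."""
    out = []
    for _ in range(n):
        limit, k, p = math.exp(-mean), 0, 1.0
        while True:
            p *= random.random()
            if p <= limit:
                break
            k += 1
        out.append(k)
    return out

# Gluon jets radiate more than quark jets, so they tend to contain
# more charged particles. Means here are illustrative only.
quark_jets = sample_multiplicity(mean=10, n=5000)
gluon_jets = sample_multiplicity(mean=18, n=5000)

# Tag a jet as "quark-like" if its multiplicity falls below a cut.
cut = 14
quark_eff = sum(m < cut for m in quark_jets) / len(quark_jets)
gluon_mistag = sum(m < cut for m in gluon_jets) / len(gluon_jets)
gluon_rej = 1.0 / gluon_mistag  # background rejection, as in the figures

print(f"quark (signal) efficiency: {quark_eff:.2f}")
print(f"gluon (background) rejection: {gluon_rej:.1f}")
```

Scanning the cut value traces out the efficiency-versus-rejection curve shown in Figure 4; the BDT tagger improves on this single-variable cut by combining multiplicity with other kinematic and substructure variables.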

Study shows potential for generative AI to increase access and efficiency in healthcare

A new study led by investigators from Mass General Brigham has found that ChatGPT was about 72 percent accurate in overall clinical decision making, from coming up with possible diagnoses to making final diagnoses and care management decisions. The large-language model (LLM) artificial intelligence chatbot performed equally well in both primary care and emergency settings across all medical specialties. The research team’s results are published in the Journal of Medical Internet Research.

Our paper comprehensively assesses decision support via ChatGPT from the very beginning of working with a patient through the entire care scenario, from differential diagnosis all the way through testing, diagnosis, and management. No real benchmarks exist, but we estimate this performance to be at the level of someone who has just graduated from medical school, such as an intern or resident. This tells us that LLMs in general have the potential to be an augmenting tool for the practice of medicine, supporting clinical decision making with impressive accuracy.