
A face recognition framework based on vision transformers

Face recognition tools are computational models that can identify specific people in images, as well as CCTV or video footage. These tools are already being used in a wide range of real-world settings, for instance aiding law enforcement and border control agents in their criminal investigations and surveillance efforts, and for authentication and biometric applications. While most existing models perform remarkably well, there may still be much room for improvement.

Researchers at Queen Mary University of London have recently created a new and promising framework for face recognition. This architecture, presented in a paper pre-published on arXiv, is based on a strategy for extracting features from images that differs from most of those proposed so far.

“Holistic methods using convolutional neural networks (CNNs) and margin-based losses have dominated research on face recognition,” Zhonglin Sun and Georgios Tzimiropoulos, the two researchers who carried out the study, told TechXplore.
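The paper's exact design is not reproduced here, but the two ingredients named above, a transformer-style image encoder and a margin-based loss, can be sketched roughly as below. The backbone name, embedding size, identity count and margin hyperparameters are illustrative assumptions, not values from the paper.

```python
# Rough sketch only: a generic ViT backbone paired with an ArcFace-style
# additive angular margin head. Names and hyperparameters are placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F
import timm


class ArcMarginHead(nn.Module):
    """Maps L2-normalised embeddings to class logits with an angular margin."""

    def __init__(self, embed_dim: int, num_classes: int, s: float = 64.0, m: float = 0.5):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(num_classes, embed_dim))
        nn.init.xavier_uniform_(self.weight)
        self.s, self.m = s, m

    def forward(self, embeddings: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        cos = F.linear(F.normalize(embeddings), F.normalize(self.weight)).clamp(-1 + 1e-7, 1 - 1e-7)
        with_margin = torch.cos(torch.acos(cos) + self.m)      # margin added in angle space
        onehot = F.one_hot(labels, cos.size(1)).bool()
        logits = torch.where(onehot, with_margin, cos) * self.s  # margin only on the true class
        return F.cross_entropy(logits, labels)


# Pooled patch features from a vision transformer (num_classes=0 returns embeddings).
backbone = timm.create_model("vit_small_patch16_224", pretrained=False, num_classes=0)
head = ArcMarginHead(embed_dim=backbone.num_features, num_classes=1000)

images = torch.randn(4, 3, 224, 224)          # dummy batch of aligned face crops
labels = torch.randint(0, 1000, (4,))         # dummy identity labels
loss = head(backbone(images), labels)
loss.backward()
```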

Using machine learning to better understand how water behaves

Water has puzzled scientists for decades. For the last 30 years or so, they have theorized that when cooled to a very low temperature, around −100 °C, water might be able to separate into two liquid phases of different densities. Like oil and water, these phases don’t mix and may help explain some of water’s other strange behavior, like how it becomes less dense as it cools.

It’s almost impossible to study this phenomenon in a lab, though, because water crystallizes into ice so quickly at such low temperatures. Now, new research from the Georgia Institute of Technology uses machine learning models to better understand water’s phase changes, opening more avenues for a better theoretical understanding of various substances. With this technique, the researchers found strong computational evidence in support of water’s liquid-liquid transition that can be applied to real-world systems that use water to operate.

“We are doing this with very detailed quantum chemistry calculations that are trying to be as close as possible to the real physics and physical chemistry of water,” said Thomas Gartner, an assistant professor in the School of Chemical and Biomolecular Engineering at Georgia Tech. “This is the first time anyone has been able to study this transition with this level of accuracy.”
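As a rough illustration of the general workflow, and not the researchers' actual model, the toy sketch below fits a small neural network to reference energies so that a cheap surrogate can stand in for expensive quantum-chemistry calculations during long, low-temperature simulations. The descriptor, network and data here are all placeholders.

```python
# Toy sketch of a machine-learned potential: regress configuration energy from a
# crude structural descriptor. Real studies use symmetry-aware descriptors and
# dedicated frameworks; everything below is illustrative only.
import torch
import torch.nn as nn


def pair_distance_histogram(positions: torch.Tensor, bins: int = 32, r_max: float = 6.0) -> torch.Tensor:
    """Crude per-configuration descriptor: histogram of pair distances (in Å)."""
    d = torch.cdist(positions, positions)
    d = d[torch.triu(torch.ones_like(d, dtype=torch.bool), diagonal=1)]  # unique pairs
    return torch.histc(d, bins=bins, min=0.0, max=r_max) / d.numel()


model = nn.Sequential(nn.Linear(32, 64), nn.SiLU(), nn.Linear(64, 64), nn.SiLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Placeholder "reference data": random configurations with fake ab initio energies.
configs = [torch.rand(64, 3) * 12.0 for _ in range(256)]   # 64 atoms in a 12 Å box
energies = torch.randn(256, 1)                             # stand-in for quantum-chemistry energies

X = torch.stack([pair_distance_histogram(c) for c in configs])
for epoch in range(200):                                   # fit the surrogate
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(X), energies)
    loss.backward()
    opt.step()
```

Once trained on enough reference calculations, a surrogate like this can be evaluated millions of times at a fraction of the cost of the underlying quantum chemistry, which is what makes simulating deeply supercooled water tractable at all.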

Autonomous Estimation of High-Dimensional Coulomb Diamonds from Sparse Measurements

In spin-based quantum processors, each quantum dot of a qubit is populated by exactly one electron, which requires careful tuning of each gate voltage such that it lies inside the charge-stability region (the “Coulomb diamond”) associated with the dot array. However, mapping the boundary of a multidimensional Coulomb diamond by traditional dense raster scanning would take years, so the authors develop a sparse acquisition technique that autonomously learns Coulomb-diamond boundaries from a small number of measurements. Here, hardware-triggered line searches are performed in the gate-voltage space of a silicon quadruple dot, with smart search directions proposed by an active-learning algorithm.
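A minimal sketch of the sparse-acquisition idea might look like the following: instead of rastering the full gate-voltage space, bisect along individual lines to find where each one crosses the charge-stability boundary. The `inside_diamond` measurement and the random choice of directions are stand-ins for the hardware trigger and the active-learning proposal described in the paper.

```python
# Sketch: locate boundary points of a Coulomb diamond by line searches rather than
# dense raster scans. The measurement callback and direction choice are placeholders.
import numpy as np


def inside_diamond(v: np.ndarray) -> bool:
    """Stand-in for a hardware measurement: is this gate-voltage point still in the stable region?"""
    return np.linalg.norm(v - 0.5) < 0.3          # toy "diamond": a ball around 0.5 V on each gate


def boundary_point(center: np.ndarray, direction: np.ndarray,
                   r_max: float = 1.0, tol: float = 1e-3) -> np.ndarray:
    """Bisect along center + t * direction for the crossing out of the stable region."""
    lo, hi = 0.0, r_max
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if inside_diamond(center + mid * direction):
            lo = mid
        else:
            hi = mid
    return center + lo * direction


rng = np.random.default_rng(0)
center = np.full(4, 0.5)                          # four plunger gates, all at 0.5 V
directions = rng.normal(size=(50, 4))             # an active learner would propose these instead
directions /= np.linalg.norm(directions, axis=1, keepdims=True)
boundary = np.array([boundary_point(center, d) for d in directions])
print(boundary.shape)                             # 50 sampled points on the 4-D boundary
```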

Scientists use machine learning to get an unprecedented view of small molecules

A new machine learning model will help scientists identify small molecules, with applications in medicine, drug discovery and environmental chemistry. Developed by researchers at Aalto University and the University of Luxembourg, the model was trained with data from dozens of laboratories to become one of the most accurate tools for identifying small molecules.

Thousands of different small molecules, known as metabolites, transport energy and transmit cellular information throughout the human body. Because they are so small, metabolites are difficult to distinguish from each other in a blood sample analysis—but identifying these molecules is important to understand how exercise, nutrition, and metabolic disorders affect well-being.

Metabolites are normally identified by analyzing their mass and retention time with a separation technique called liquid chromatography followed by mass spectrometry. This technique first separates metabolites by running the sample through a column, so that each one reaches the measurement device after a different travel time, known as its retention time.
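As a hedged illustration of why retention time matters, and not the authors' actual model, the snippet below ranks candidate metabolites for a single peak by combining mass agreement with a hypothetical retention-time prediction; the predicted retention times are invented for the example.

```python
# Illustrative only: score candidate metabolites for one chromatographic peak by
# combining mass agreement with a (hypothetical) retention-time predictor.
from dataclasses import dataclass


@dataclass
class Candidate:
    name: str
    exact_mass: float      # monoisotopic mass, Da
    predicted_rt: float    # minutes, invented values standing in for a trained regressor


def score(cand: Candidate, measured_mass: float, measured_rt: float,
          mass_tol: float = 0.01, rt_tol: float = 0.5) -> float:
    """Higher is better: penalise deviations in both mass and retention time."""
    mass_term = abs(cand.exact_mass - measured_mass) / mass_tol
    rt_term = abs(cand.predicted_rt - measured_rt) / rt_tol
    return -(mass_term + rt_term)


candidates = [
    Candidate("leucine", 131.0946, 4.1),
    Candidate("isoleucine", 131.0946, 4.6),    # same mass, different retention time
]
measured_mass, measured_rt = 131.095, 4.55
best = max(candidates, key=lambda c: score(c, measured_mass, measured_rt))
print(best.name)   # retention time breaks the tie between candidates of identical mass
```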

Project Liftoff: The Future Of Robot Combat is AI. This Is Havoc Episode 4

Father-and-son duo Jim and Andrew Kazmer build and drive Project Liftoff, one of the most exciting and best-supported robots at NHRL.

They’ve further developed this into a second bot, Flip n Cut, with a different weapon type, and have pushed the limits of innovation with their fully autonomous combat robot, DeepMelt.

How does a fully autonomous robot work, and how will it assist human drivers in the future?
What is a Meltybrain, and how does it work?
Why is the choice of wheel so important?
Will we see a 250lb Project Liftoff?

Find out in episode 4 of This Is Havoc: Liftoff.

NHRL is the biggest and most accessible robot combat league in the world, home of the 3lb, 12lb and 30lb robot combat world championships.

We are one of the toughest places to win, but also one of the friendliest and most welcoming, for all ages and experience levels.

This mechanical engineer is building robots to harvest raspberries

Around 38% of the world’s total landmass is used for agriculture – yet hunger is worsening, and food security is in crisis, threatened by pressures including climate change, conflict and global recessions.

While there’s no one-stop solution, technology can help to fill some of the gaps. Mechanical engineer Josie Hughes is on a mission to show how robotics can play a role in our everyday lives, particularly when it comes to food. Starting with LEGO robots as a child, the Cambridge graduate now leads the Computational Robot Design & Fabrication Lab (CREATE) at the Swiss Federal Institute of Technology Lausanne (EPFL), where she’s one of the youngest researchers to join as a tenure-track assistant professor.

One of her innovations, a raspberry-picking robot powered by artificial intelligence, could help make farming more efficient and cost-effective, and solve labor shortages – which in the UK alone left £60 million ($74 million) worth of fruit and vegetables rotting in fields this summer. CNN spoke with Hughes about her research, and when robots might be harvesting your next meal.

Apple Pushing to Launch Search Engine to Rival Google

Apple is working on an online search engine to rival Google amid wider improvements to Spotlight search, according to a recent report from The Information.

The report explains that Apple’s work on search technology is facing setbacks amid a loss of talent to Google. In 2018, Apple sought to bolster development of its own web search engine by buying machine learning startup Laserlike, which was founded by three former Google search engineers. The company’s technology recommended websites based on a user’s interests and browsing history. Now, Laserlike’s founders have reportedly returned to Google.

The text-to-image revolution, explained

How programmers turned the internet into a paintbrush. DALL-E 2, Midjourney, Imagen, explained.


Beginning in January 2021, advances in AI research have produced a plethora of deep-learning models capable of generating original images from simple text prompts, effectively extending the human imagination. Researchers at OpenAI, Google, Facebook, and others have developed text-to-image tools that they have not yet released to the public, and similar models have proliferated online in the open-source arena and at smaller companies like Midjourney.
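The closed models named above cannot be called from code by the public, but the open-source side of the field can be. As a rough illustration, and assuming the Hugging Face diffusers library and a Stable Diffusion checkpoint (neither of which is mentioned in the piece), generating an image from a text prompt looks something like this.

```python
# Illustrative example using an open-source text-to-image model via Hugging Face
# diffusers; the checkpoint name and settings are assumptions, not from the article.
import torch
from diffusers import StableDiffusionPipeline

# Downloads the model weights on first run; assumes a CUDA-capable GPU is available.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = "an astronaut painting a mural on the moon, watercolor"
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("astronaut.png")
```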

These tools represent a massive cultural shift because they remove the requirement for technical labor from the process of image-making. Instead, they select for creative ideation, skillful use of language, and curatorial taste. The ultimate consequences are difficult to predict, but — like the invention of the camera, and the digital camera thereafter — these algorithms herald a new, democratized form of expression that will commence another explosion in the volume of imagery produced by humans. But, like other automated systems trained on historical data and internet images, they also come with risks that have not been resolved.

The video above is a primer on how we got here, how this technology works, and some of the implications. And for an extended discussion about what this means for human artists, designers, and illustrators, check out this bonus video: https://youtu.be/sFBfrZ-N3G4

Midjourney: www.midjourney.com.
