
A new report from the University of Technology Sydney (UTS) Human Technology Institute outlines a model law for facial recognition technology that would protect against harmful uses while also fostering innovation for public benefit.

Australian law was not drafted with widespread use of facial recognition in mind. Led by UTS Industry Professors Edward Santow and Nicholas Davis, the report recommends reform to modernize Australian law, especially to address threats to privacy and other human rights.

Facial recognition and other remote biometric technologies have grown exponentially in recent years, raising concerns about privacy, mass surveillance, and the unfairness experienced, especially by people of color and women, when the technology makes mistakes.

Summary: Most AI models are unable to represent features of human vision, making them worse at recognizing images.

Source: HSE

Researchers from HSE University and Moscow Polytechnic University have discovered that AI models are unable to represent features of human vision due to a lack of tight coupling with the respective physiology, so they are worse at recognizing images.

Using artificial intelligence, physicists have compressed a daunting quantum problem that until now required 100,000 equations into a bite-size task of as few as four equations—all without sacrificing accuracy. The work, published in the September 23 issue of Physical Review Letters, could revolutionize how scientists investigate systems containing many interacting electrons. Moreover, if scalable to other problems, the approach could potentially aid in the design of materials with sought-after properties such as superconductivity or utility for clean energy generation.

“We start with this huge object of all these coupled-together differential equations; then we’re using machine learning to turn it into something so small you can count it on your fingers,” says study lead author Domenico Di Sante, a visiting research fellow at the Flatiron Institute’s Center for Computational Quantum Physics (CCQ) in New York City and an assistant professor at the University of Bologna in Italy.

The formidable problem concerns how electrons behave as they move on a gridlike lattice. When two electrons occupy the same lattice site, they interact. This setup, known as the Hubbard model, is an idealization of several important classes of materials and enables scientists to learn how electron behavior gives rise to sought-after phases of matter, such as superconductivity, in which electrons flow through a material without resistance. The model also serves as a testing ground for new methods before they’re unleashed on more complex quantum systems.
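The Hubbard model described above can be made concrete with a toy calculation. The following is a minimal sketch, not the paper's renormalization-group method: it exactly diagonalizes the smallest interesting case, a two-site Hubbard model at half filling (one spin-up and one spin-down electron), where the hopping amplitude `t` and on-site interaction `U` are assumed illustrative values. The four basis states are both electrons on site 1, one electron on each site (two spin arrangements), and both electrons on site 2.

```python
import numpy as np

# Illustrative parameters (assumed, not from the paper):
t = 1.0  # hopping amplitude between the two lattice sites
U = 4.0  # energy cost when two electrons occupy the same site

# Hamiltonian in the basis {|up+down on site 1>, |up on 1, down on 2>,
# |down on 1, up on 2>, |up+down on site 2>}: doubly occupied states pay
# the interaction energy U, and hopping (-t) connects them to the
# singly occupied states.
H = np.array([
    [U, -t, -t,  0],
    [-t, 0,  0, -t],
    [-t, 0,  0, -t],
    [0, -t, -t,  U],
])

energies = np.linalg.eigvalsh(H)  # sorted ascending

# The ground-state energy agrees with the known closed form
# (U - sqrt(U^2 + 16 t^2)) / 2 for the half-filled two-site model.
exact_ground = (U - np.sqrt(U**2 + 16 * t**2)) / 2
print(energies[0], exact_ground)
```

Even this tiny example shows why the full problem is formidable: the number of basis states grows exponentially with lattice size, which is why compressing the coupled equations that describe larger lattices is such a significant step.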

Saildrone/NOAA

Recently, a collaboration between the U.S. National Oceanic and Atmospheric Administration (NOAA) and Saildrone, a company that develops sailing drones, sent a robot into Hurricane Fiona, the tropical storm that deluged Puerto Rico and is now headed towards Canada’s east coast, Mashable reported.

Disperse, a U.K.-based construction tech company that offers an artificial intelligence (AI)-powered platform to help project managers track work and capture data from building sites, has raised $16 million in funding.

Founded out of London in 2015, Disperse effectively creates a digital version of an entire construction site, including visual snapshots that track the progress of work to help all stakeholders — regardless of where they’re based — keep up with things. For this, Disperse sends someone around a site at regular intervals with a standard 360° camera, and the resulting imagery is fed directly into the Disperse platform which processes the visuals and applies computer vision techniques to figure out what’s happening.

For example, this can help to show the state of a project at a given moment in time, and solve disputes should they arise in terms of determining whether a job was completed as it should’ve been. It also automatically spotlights potential problems or bottlenecks while they can still be resolved.


AI processing circuits are found in several forms and in different locations. Some offer faster creation of new AI models. They use multiple processing circuits in parallel to churn through millions, billions or even more data elements, searching for patterns and signals. These are used in the lab at the beginning of the process by AI scientists looking for the best algorithms to understand the data.

Scientists from Google DeepMind have been awarded a $3 million prize for developing an artificial intelligence (AI) system that has predicted how nearly every known protein folds into its 3D shape.

One of this year’s Breakthrough Prizes in Life Sciences went to Demis Hassabis, the co-founder and CEO of DeepMind, which created the protein-predicting program known as AlphaFold, and John Jumper, a senior staff research scientist at DeepMind, the Breakthrough Prize Foundation announced Thursday (Sept. 22).

After 45 years of voicing one of the most iconic characters in cinema history, James Earl Jones has said goodbye to Darth Vader. At 91, the legendary actor recently told Disney he was “looking into winding down this particular character.” That forced the company to ask itself: how do you even replace Jones? The answer Disney eventually settled on, with the actor’s consent, involved an AI program.

If you’ve seen any of the recent Star Wars shows, you’ve heard the work of Respeecher. It’s a Ukrainian startup that uses archival recordings and a “proprietary AI algorithm” to create new dialogue featuring the voices of “performers from long ago.” In the case of Jones, the company worked with Lucasfilm to recreate his voice as it had sounded when film audiences first heard Darth Vader in 1977.