Potential new drug combination for kids with brain cancer found using BenevolentAI.
This week, The European Parliament, the body responsible for adopting European Union (EU) legislation, passed a non-binding resolution calling for a ban on law enforcement use of facial recognition technology in public places. The resolution, which also proposes a moratorium on the deployment of predictive policing software, would restrict the use of remote biometric identification unless it’s to fight “serious” crime, such as kidnapping and terrorism.
The approach stands in contrast to that of U.S. agencies, which continue to embrace facial recognition even in light of studies showing the potential for ethnic, racial, and gender bias. A recent report from the U.S. Government Accountability Office found that 10 branches including the Departments of Agriculture, Commerce, Defense, and Homeland Security plan to expand their use of facial recognition between 2020 and 2023 as they implement as many as 17 different facial recognition systems.
Commercial face-analyzing systems have been critiqued by scholars and activists alike throughout the past decade, if not longer. The technology and techniques — everything from sepia-tinged film to low-contrast digital cameras — often favor lighter skin, encoding racial bias in algorithms. Indeed, independent benchmarks of vendors’ systems by the Gender Shades project and others have revealed that facial recognition technologies are susceptible to a range of prejudices exacerbated by misuse in the field. For example, a report from Georgetown Law’s Center on Privacy and Technology details how police feed facial recognition software flawed data, including composite sketches and pictures of celebrities who share physical features with suspects.
A virtual army of 4,000 doglike robots was used to train an algorithm capable of enhancing the legwork of real-world robots, according to an initial report from Wired. And new tricks learned in the simulation could soon see execution in a neighborhood near you.
During training, the simulated robots mastered walking up and down stairs without much struggle, but slopes threw them for a loop; few could grasp the essentials of sliding down an incline. Once the final algorithm was transferred to a real-world ANYmal, a four-legged, doglike robot with head-mounted sensors and a detachable robotic arm, it successfully navigated blocks and stairs but had trouble operating at higher speeds.
Vertical Aerospace has already collected somewhere in the region of 1,000 orders for its VA-X4 eVTOL craft. This is a piloted, all-electric, low-emission aircraft that can carry up to four passengers and a pilot. The air taxi is capable of flying at speeds of 200 mph (174 knots) and has a range of more than 100 miles (160 km).
Being all-electric, it is near-silent during flight, offers a low-carbon solution to flying, and has a relatively low cost per passenger mile.
Vertical Aerospace’s VA-X4 also makes use of the latest in advanced avionics, some of which are used to control the world’s only supersonic VTOL aircraft, the F-35 fighter. These sophisticated control systems enable the eVTOL taxi to fly with a high level of automation and reduced pilot workload.
In the latest episode of MIT Technology Review’s podcast “In Machines We Trust,” we asked career and job-matching experts for practical tips on how to succeed in a job market increasingly influenced by artificial intelligence.
Once you optimize your résumé, you may want to practice interviewing with an AI too.
Jeffrey Shainline is a physicist at NIST.
Note: Opinions expressed by Jeff do not represent NIST.
EPISODE LINKS:
Jeff’s Website: http://www.shainline.net
Jeff’s Google Scholar: https://scholar.google.com/citations?user=rnHpY3YAAAAJ
Jeff’s NIST Page: https://www.nist.gov/people/jeff-shainline
PODCAST INFO:
Podcast website: https://lexfridman.com/podcast
Apple Podcasts: https://apple.co/2lwqZIr
Spotify: https://spoti.fi/2nEwCF8
RSS: https://lexfridman.com/feed/podcast/
Full episodes playlist: https://www.youtube.com/playlist?list=PLrAXtmErZgOdP_8GztsuKi9nrraNbKKp4
Clips playlist: https://www.youtube.com/playlist?list=PLrAXtmErZgOeciFP3CBCIEElOJeitOr41
OUTLINE:
0:00 — Introduction.
0:44 — How are processors made?
20:02 — Are engineers or physicists more important?
22:31 — Superconductivity.
38:18 — Computation.
42:55 — Computation vs communication.
46:36 — Electrons for computation and light for communication.
57:19 — Neuromorphic computing.
1:22:11 — What is NIST?
1:25:28 — Implementing superconductivity.
1:33:08 — The future of neuromorphic computing.
1:52:41 — Loop neurons.
1:58:57 — Machine learning.
2:13:23 — Cosmological evolution.
2:20:32 — Cosmological natural selection.
2:37:53 — Life in the universe.
2:45:40 — The rare Earth hypothesis.
Facebook has announced some exciting connectivity technologies that will enable the company to provide access to fast and affordable internet service to the next billion people as well as enhance existing infrastructure projects.
The company said that Facebook Connectivity has helped provide quality internet connectivity to more than 500 million people since 2013. Now, the company aims to enable affordable, high-quality connectivity for another one billion people, at lower cost and with greater speed, by leveraging emerging technologies.
Commenting on the new connectivity technologies during the unveiling, Dan Rabinovitsj, VP of Facebook Connectivity said: “We have seen that economies flourish when there is widely accessible internet for individuals and businesses.”
It sounds like a scene from a spy thriller. An attacker gets through the IT defenses of a nuclear power plant and feeds it fake, realistic data, tricking its computer systems and personnel into thinking operations are normal. The attacker then disrupts the function of key plant machinery, causing it to misperform or break down. By the time system operators realize they’ve been duped, it’s too late, with catastrophic results.
The scenario isn’t fictional; it happened in 2010, when the Stuxnet virus was used to damage nuclear centrifuges in Iran. And as ransomware and other cyberattacks increase around the world, system operators worry more about these sophisticated “false data injection” strikes. In the wrong hands, the AI-based computer models and data analytics that ensure smooth operation of today’s electric grids, manufacturing facilities, and power plants could be turned against themselves.
Purdue University’s Hany Abdel-Khalik has come up with a powerful response: making the computer models that run these cyberphysical systems both self-aware and self-healing. Using the background noise within these systems’ data streams, Abdel-Khalik and his students embed invisible, ever-changing, one-time-use signals that turn passive components into active watchers. Even if an attacker is armed with a perfect duplicate of a system’s model, any attempt to introduce falsified data is immediately detected and rejected by the system itself, with no human response required.
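The general idea of hiding a one-time verification signal in background noise can be illustrated with a minimal sketch. To be clear, this is not the Purdue team's actual method: the HMAC-based key derivation, the amplitude, and the detection threshold below are all invented for illustration. A sender perturbs each frame of zero-mean residual (background) noise with a pseudorandom plus/minus-amplitude sequence derived from a shared secret and a per-frame nonce; the receiver correlates the received frame against the expected sequence, so injected data that lacks the watermark fails the check.

```python
import hashlib
import hmac
import random

AMP = 0.05  # watermark amplitude (illustrative; must stay below the sensor noise floor)

def watermark(seed: bytes, n: int) -> list:
    """Deterministic pseudorandom +/-AMP sequence derived from a seed."""
    rng = random.Random(int.from_bytes(hashlib.sha256(seed).digest(), "big"))
    return [AMP if rng.random() < 0.5 else -AMP for _ in range(n)]

def frame_seed(secret: bytes, nonce: bytes) -> bytes:
    # One-time use: a fresh nonce per frame means the signal never repeats,
    # so a replayed or copied frame will not verify against a new nonce.
    return hmac.new(secret, nonce, "sha256").digest()

def embed(residuals, secret: bytes, nonce: bytes) -> list:
    """Add the watermark to a frame of zero-mean residual noise."""
    wm = watermark(frame_seed(secret, nonce), len(residuals))
    return [r + w for r, w in zip(residuals, wm)]

def verify(stream, secret: bytes, nonce: bytes, threshold: float = 0.5) -> bool:
    """Correlate the received frame against the expected watermark.

    Genuine data (noise + watermark) yields a normalized score near 1;
    injected data that lacks the watermark scores near 0.
    """
    wm = watermark(frame_seed(secret, nonce), len(stream))
    score = sum(s * w for s, w in zip(stream, wm)) / (len(stream) * AMP ** 2)
    return score > threshold
```

In this toy version the watermark rides on the residual left after subtracting the model's predicted value, so it stays invisible at the level of the raw measurements; an attacker replaying plausible-looking readings cannot reproduce the per-frame sequence without the shared secret.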
If you follow the latest trends in the tech industry, you probably know that there’s been a fair amount of debate about what the next big thing is going to be. The odds-on favorite for many has been augmented reality (AR) glasses, while others point to fully autonomous cars, and a few are clinging to the potential of 5G. With the surprise debut of Amazon’s Astro a few weeks back, personal robotic devices and digital companions have also thrown their hats into the ring.
However, while there has been little agreement on exactly what the next thing is, there seems to be little disagreement that whatever it turns out to be, it will be somehow powered, enabled, or enhanced by artificial intelligence (AI). Indeed, the fact that AI and machine learning (ML) are our future seems to be a foregone conclusion.
Yet, if we do an honest assessment of where some of these technologies actually stand on a functionality basis versus initial expectations, it’s fair to argue that the results have been disappointing on many levels. In fact, if we extend that thought process out to what AI/ML were supposed to do for us overall, then we start to come to a similarly disappointing conclusion.