
Dean’s appearance at TED comes at a time when critics—including current Google employees—are calling for greater scrutiny of big tech’s control over the world’s AI systems. Among those critics was one who spoke right after Dean at TED. Coder Xiaowei R. Wang, creative director of the indie tech magazine Logic, argued for community-led innovations. “Within AI there is only a case for optimism if people and communities can make the case themselves, instead of people like Jeff Dean and companies like Google making the case for them, while shutting down the communities [that] AI for Good is supposed to help,” she said. (AI for Good is a movement that seeks to orient machine learning toward solving the world’s most pressing social equity problems.)

TED curator Chris Anderson and Greg Brockman, co-founder of the AI research group OpenAI, also wrestled with the unintended consequences of powerful machine learning systems at the end of the conference. Brockman described a scenario in which humans serve as moral guides to AI. “We can teach the system the values we want, as we would a child,” he said. “It’s an important but subtle point. I think you do need the system to learn a model of the world. If you’re teaching a child, they need to learn what good and bad is.”

There also is room for some gatekeeping to be done once the machines have been taught, Anderson suggested. “One of the key issues to keeping this thing on track is to very carefully pick the people who look at the output of these unsupervised learning systems,” he said.

NASA’s Perseverance Mars rover recently attempted its first-ever sample collection of the Martian surface on August 6. However, data shows that while the rover’s drill successfully drilled into the surface, no regolith was collected in the sample tube.

Meanwhile, as Perseverance was preparing for the sample collection event, a team of researchers using ESA’s Mars Express orbiter found evidence that what were previously thought to be lakes of liquid water beneath Mars’ south pole might actually be deposits of clay.

Perseverance’s sample collection failure

On August 6, Perseverance lowered its 2-meter robotic arm to the Martian surface, where a drill located at the end of the arm began carving into the local rock.

Lurking in the background of the quest for true quantum supremacy is an awkward possibility: hyper-fast number-crunching tasks based on quantum trickery might just be a load of hype.

Now, a pair of physicists from École Polytechnique Fédérale de Lausanne (EPFL) in Switzerland and Columbia University in the US have come up with a better way to judge the potential of near-term quantum devices – by simulating the quantum mechanics they rely upon on more traditional hardware.

Their study made use of a neural network developed by EPFL’s Giuseppe Carleo and his colleague Matthias Troyer back in 2016, which uses machine learning to approximate the state of a quantum system tasked with running a specific process.
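The article doesn’t spell out how such an approximation works, but the general idea behind neural-network quantum states is to represent the wavefunction’s amplitudes with a compact network, often a restricted Boltzmann machine, instead of storing all exponentially many amplitudes. As a minimal illustrative sketch (the parameter values and system size here are arbitrary toy choices, not from the study), the RBM ansatz assigns each spin configuration an unnormalized amplitude:

```python
import numpy as np

def rbm_amplitude(spins, a, b, W):
    """Unnormalized amplitude psi(s) of a spin configuration under a
    restricted-Boltzmann-machine ansatz:
        psi(s) = exp(a . s) * prod_j 2*cosh(b_j + (W^T s)_j)
    where a, b are visible/hidden biases and W couples spins to
    hidden units. Training would tune a, b, W to minimize the
    system's energy; here the parameters are random placeholders."""
    theta = b + W.T @ spins
    return np.exp(a @ spins) * np.prod(2.0 * np.cosh(theta))

# Toy system: 4 spins (values +/-1), 2 hidden units, untrained parameters.
rng = np.random.default_rng(0)
n_visible, n_hidden = 4, 2
a = rng.normal(scale=0.1, size=n_visible)
b = rng.normal(scale=0.1, size=n_hidden)
W = rng.normal(scale=0.1, size=(n_visible, n_hidden))

spins = np.array([1, -1, 1, -1])
amplitude = rbm_amplitude(spins, a, b, W)
print(amplitude)
```

The appeal of this representation is that the number of parameters grows only polynomially with the number of spins, which is what lets a classical computer mimic certain quantum systems well enough to benchmark near-term quantum hardware against.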

Natural language processing continues to find its way into unexpected corners. This time, it’s phishing emails. In a small study, researchers found that they could use the deep learning language model GPT-3, along with other AI-as-a-service platforms, to significantly lower the barrier to entry for crafting spearphishing campaigns at a massive scale.

Researchers have long debated whether it would be worth the effort for scammers to train machine learning algorithms that could then generate compelling phishing messages. Mass phishing messages are simple and formulaic, after all, and are already highly effective. Highly targeted and tailored “spearphishing” messages are more labor intensive to compose, though. That’s where NLP may come in surprisingly handy.

At the Black Hat and Defcon security conferences in Las Vegas this week, a team from Singapore’s Government Technology Agency presented a recent experiment in which they sent 200 of their colleagues targeted phishing emails, some crafted by hand and others generated by an AI-as-a-service platform. All the messages contained links that were not actually malicious but simply reported clickthrough rates back to the researchers. The team was surprised to find that more people clicked the links in the AI-generated messages than in the human-written ones—by a significant margin.

No, it’s not forbidden to innovate, quite the opposite, but it’s always risky to do something different from what people are used to. Risk is the middle name of the bold, the builders of the future. Those who constantly face resistance from skeptics. Those who fail eight times and get up nine.


Fernando Pessoa’s slogan “First you find it strange. Then you can’t get enough of it” contained intolerable levels of toxicity for Salazar’s Estado Novo in Portugal. When the level of difference increases, censorship follows. You can’t censor censorship (or can you?) when, deep down, it’s a matter of fear of difference. Yes, it’s fear! Fear of accepting and facing the unknown. Fear of change.

✅ Instagram: https://www.instagram.com/pro_robots.

You’re on PRO Robotics, and in this video we present the July 2021 news digest. New robots, drones, artificial intelligence and military robots, news from Elon Musk and Boston Dynamics. All the most interesting high-tech news for July in this issue. Be sure to watch the video to the end, and write in the comments which news you are most interested in.

0:00 Announcement of the first part of the issue.
0:23 Home robot assistants and other.
10:50 Boston Dynamics news, Tesla Model S Plaid spontaneous combustion, Elon Musk’s new rocket, Richard Branson.
20:25 WAIC 2021 Robotics Exhibition. New robots, drones, cities of the future.
33:10 Artificial intelligence to program robots.

#prorobots #robots #robot #futuretechnologies #robotics