
A prominent engineer in the AI field believes robots can be designed to support humans, not replace them.

A prominent engineer in AI claims humans and robots can work together peacefully if they can build a “bond of trust.” The claim is a far cry from the doomsday scenarios painted by many experts in the field.

Tariq Iqbal, an assistant professor of systems engineering and computer science in the University of Virginia’s School of Engineering and Applied Science, says he strives for machines to work with people, not replace them.

Generative AI has been front and centre of the news for the last nine months, and attention is often on existential risks, copyright claims or suspicions around deepfakes. However, there is a growing number of more positive ways it can be integrated into businesses.

One of those areas is customer service. Samsung's Neon virtual humans were a good example of what could be achieved with embodied AI. Samsung created an impressive suite of customer service agents whose profiles could match those of customers in need of help.


I wanted an avatar that was a bit ‘uncanny’, so that it had some resemblance to my real physical self but looked quite artificial too.


Today we’re going to explore the unthinkable: how would the United States respond during a nuclear conflict?

When we first came up with this concept, we aimed to cover America’s Nuclear Triad and its Russian Nuclear War Plan in one concise video, but one video turned into three. So here’s the full version of “How Would the United States Fight a Nuclear War?” as it was originally intended. Enjoy!


AI has crossed its nuclear-bomb threshold: perhaps the biggest thing to happen to human technology since the splitting of the atom.

A conversation with science fiction author and NASA consultant David Brin about the existential risks of AI and the approaches we can take to address them.


David Brin’s advice for new authors.

Decision theorist Eliezer Yudkowsky has a simple message: superintelligent AI could probably kill us all. So the question becomes: Is it possible to build powerful artificial minds that are obedient, even benevolent? In a fiery talk, Yudkowsky explores why we need to act immediately to ensure smarter-than-human AI systems don’t lead to our extinction.


“Some men just want to watch the world burn.” Zachary Kallenborn discusses acts of existential terrorism, such as the Tokyo subway sarin attack by Aum Shinrikyo in 1995, which killed or injured over 1,000 people.

Zachary Kallenborn is a policy fellow in the Center for Security Policy Studies at George Mason University, a research affiliate in Unconventional Weapons and Technology at START, and a senior risk management consultant at ABS Group.

Zachary has an MA in Nonproliferation and Terrorism Studies from Middlebury Institute of International Studies, and a BS in Mathematics and International Relations from the University of Puget Sound.

His work has been featured in numerous international media outlets including the New York Times, Slate, NPR, Forbes, New Scientist, WIRED, Foreign Policy, the BBC, and many others.

This existential threat could arrive as early as, say, 2026. It might even turn out to be a good thing. But whatever the Singularity exactly is, and however uncertain its nature, its timing is becoming clearer, and it is much closer than most predicted.

AI is nevertheless hard to predict, but many agree with me that with GPT-4 we are already close to AGI (artificial general intelligence).

The first “AI incident” almost caused global nuclear war. More recent AI-enabled malfunctions, errors, fraud, and scams include deepfakes used to influence politics, bad health information from chatbots, and self-driving vehicles that are endangering pedestrians.

The worst offenders, according to security company Surfshark, are Tesla, Facebook, and OpenAI, which together account for 24.5% of all known AI incidents so far.

In 1983, an automated system in the Soviet Union thought it detected incoming nuclear missiles from the United States, almost leading to global conflict. That’s the first incident in Surfshark’s report (though it’s debatable whether an automated system from the 1980s counts as artificial intelligence). In the most recent incident, the National Eating Disorders Association (NEDA) was forced to shut down Tessa, its chatbot, after Tessa gave dangerous advice to people seeking help for eating disorders. Other recent incidents include a self-driving Tesla failing to notice a pedestrian and breaking the law by not yielding to them in a crosswalk, and a Jefferson Parish resident being wrongfully arrested by Louisiana police after a facial recognition system developed by Clearview AI allegedly mistook him for another individual.

Taiwan was on high alert after two Russian warships entered its waters. Taiwan is used to incursions by China, not Russia, so the episode marks a new flare-up in East Asia. Moscow then doubled down by releasing footage of a military drill in the Sea of Japan. East Asia is becoming a powder keg.

The region already deals with tensions between North Korea, South Korea & Japan. And now the US is trying to send a message to Pyongyang by having its largest nuclear submarine visit South Korea.
