
As more private data is stored and shared digitally, researchers are exploring new ways to protect data against attacks from bad actors. Current silicon technology exploits microscopic differences between computing components to create secure keys, but artificial intelligence (AI) techniques can be used to predict these keys and gain access to data. Now, Penn State researchers have designed a way to make these encryption keys harder to crack.
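To make the vulnerability concrete, here is a minimal, hypothetical sketch (not the Penn State device): it simulates an arbiter-style physically unclonable function (PUF), whose key bits derive from random manufacturing variation, and shows a simple machine-learning model learning to predict those bits from observed challenge-response pairs. The delay model, feature transform, and all parameters are illustrative assumptions.

# Toy illustration (not the Penn State device): an arbiter-style PUF derives
# key bits from manufacturing variation, and a simple ML model learns to
# predict them from observed challenge-response pairs.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_stages = 32                            # challenge length
delays = rng.normal(size=n_stages + 1)   # hypothetical per-stage delay differences

def parity_features(challenges):
    # Standard linear model of an arbiter PUF: the response depends on
    # parity-transformed challenge bits.
    phi = 1 - 2 * challenges                       # map {0,1} -> {+1,-1}
    feats = np.cumprod(phi[:, ::-1], axis=1)[:, ::-1]
    return np.hstack([feats, np.ones((len(challenges), 1))])

def puf_response(challenges):
    return (parity_features(challenges) @ delays > 0).astype(int)

# The attacker observes challenge-response pairs and fits a model.
C = rng.integers(0, 2, size=(5000, n_stages))
r = puf_response(C)
C_train, C_test, r_train, r_test = train_test_split(C, r, random_state=0)
attack = LogisticRegression(max_iter=1000).fit(parity_features(C_train), r_train)
print("attack accuracy:", attack.score(parity_features(C_test), r_test))

Because this toy PUF is linear in its parity features, the attack reaches near-perfect accuracy; resisting exactly this kind of model-building attack is what the graphene device is designed for.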

Led by Saptarshi Das, assistant professor of engineering science and mechanics, the researchers used graphene — a layer of carbon one atom thick — to develop a novel low-power, scalable, reconfigurable hardware security device with significant resilience to AI attacks. They published their findings in Nature Electronics on May 10.

“There has been more and more breaching of private data recently,” Das said. “We developed a new hardware security device that could eventually be implemented to protect these data across industries and sectors.”

This special edition show is sponsored by Numerai. Please visit them via our sponsor link (we would really appreciate it): http://numer.ai/mlst

Prof. Karl Friston recently proposed a vision of artificial intelligence that goes beyond machines and algorithms, and embraces humans and nature as part of a cyber-physical ecosystem of intelligence. This vision is based on the principle of active inference, which states that intelligent systems can learn from their observations and act on their environment to reduce uncertainty and achieve their goals. This leads to a formal account of collective intelligence that rests on shared narratives and goals.
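As a toy illustration of the uncertainty-reduction part of active inference (a sketch, not Friston's full expected-free-energy formulation), the following assumes a two-state world and two actions, and has the agent pick the action whose expected observation most reduces the entropy of its belief. The likelihood values are made-up assumptions.

# A minimal, hypothetical sketch of the active-inference idea: an agent
# holds a belief over a hidden state and picks the action whose expected
# observation most reduces its uncertainty (posterior entropy).
import numpy as np

belief = np.array([0.5, 0.5])             # P(state) over two hidden states

# P(observation | state) for each action; action 0 is informative,
# action 1 is not (values are illustrative assumptions).
likelihood = {
    0: np.array([[0.9, 0.1],              # obs probs if state = 0
                 [0.1, 0.9]]),            # obs probs if state = 1
    1: np.array([[0.5, 0.5],
                 [0.5, 0.5]]),
}

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def expected_posterior_entropy(action, belief):
    L = likelihood[action]
    p_obs = belief @ L                    # marginal P(obs) under current belief
    total = 0.0
    for obs, p_o in enumerate(p_obs):
        posterior = belief * L[:, obs] / p_o   # Bayes update for this outcome
        total += p_o * entropy(posterior)
    return total

scores = {a: expected_posterior_entropy(a, belief) for a in likelihood}
best = min(scores, key=scores.get)
print("expected posterior entropy per action:", scores)
print("chosen action:", best)             # the informative action wins

A full active-inference agent would add a pragmatic term scoring how well expected observations match preferred outcomes, so that action selection trades off information gain against goal achievement.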

To realize this vision, Friston suggests developing a shared hyper-spatial modelling language and transaction protocol, as well as novel methods for measuring and optimizing collective intelligence. This could harness the power of artificial intelligence for the common good, without compromising human dignity or autonomy. It also challenges us to rethink our relationship with technology, nature, and each other, and invites us to join a global community of sense-makers who are curious about the world and eager to improve it.

Pod version: https://podcasters.spotify.com/pod/show/machinelearningstree…on-e208f50
Support us! https://www.patreon.com/mlst
MLST Discord: https://discord.gg/aNPkGUQtc5

TOC:
Intro [00:00:00]
Numerai (Sponsor segment) [00:07:10]
Designing Ecosystems of Intelligence from First Principles (Friston et al) [00:09:48]
Information / Infosphere and human agency [00:18:30]
Intelligence [00:31:38]
Reductionism [00:39:36]
Universalism [00:44:46]
Emergence [00:54:23]
Markov blankets [01:02:11]
Whole part relationships / structure learning [01:22:33]
Enactivism [01:29:23]
Knowledge and Language [01:43:53]
ChatGPT [01:50:56]
Ethics (is-ought) [02:07:55]
Can people be evil? [02:35:06]
Ethics in AI, subjectiveness [02:39:05]
Final thoughts [02:57:00]

References:

Lex Fridman Podcast full episode: https://www.youtube.com/watch?v=AaTRHFaaPG8
Please support this podcast by checking out our sponsors:
- Linode: https://linode.com/lex to get $100 free credit.
- House of Macadamias: https://houseofmacadamias.com/lex and use code LEX to get 20% off your first order.
- InsideTracker: https://insidetracker.com/lex to get 20% off.

GUEST BIO:
Eliezer Yudkowsky is a researcher, writer, and philosopher on the topic of superintelligent AI.

PODCAST INFO:
Podcast website: https://lexfridman.com/podcast
Apple Podcasts: https://apple.co/2lwqZIr
Spotify: https://spoti.fi/2nEwCF8
RSS: https://lexfridman.com/feed/podcast/
Full episodes playlist: https://www.youtube.com/playlist?list=PLrAXtmErZgOdP_8GztsuKi9nrraNbKKp4
Clips playlist: https://www.youtube.com/playlist?list=PLrAXtmErZgOeciFP3CBCIEElOJeitOr41

SOCIAL:
- Twitter: https://twitter.com/lexfridman
- LinkedIn: https://www.linkedin.com/in/lexfridman
- Facebook: https://www.facebook.com/lexfridman
- Instagram: https://www.instagram.com/lexfridman
- Medium: https://medium.com/@lexfridman
- Reddit: https://reddit.com/r/lexfridman
- Support on Patreon: https://www.patreon.com/lexfridman

Won’t that just make enemies of AI?


One of the world’s loudest artificial intelligence critics has issued a stark call to not only put a pause on AI but to militantly put an end to it — before it ends us instead.

In an op-ed for Time magazine, machine learning researcher Eliezer Yudkowsky, who has for more than two decades been warning about the dystopian future that will come when we achieve Artificial General Intelligence (AGI), is once again ringing the alarm bells.

Yudkowsky said he lauds the signatories of the Future of Life Institute’s recent open letter calling for a six-month pause on AI advancement to take stock, among them SpaceX CEO Elon Musk, Apple co-founder Steve Wozniak, and onetime presidential candidate Andrew Yang, but he himself didn’t sign it because it doesn’t go far enough.

As deepfake videos become more widespread, counter programs that could make the internet a safer place are in development, too.

Greg Tarr, a 17-year-old student at Bandon Grammar School in County Cork, Ireland, has been declared the winner of the 2021 BT Young Scientist & Technologist of the Year (BTYSTE) award for his project “Towards Deepfake Detection”, per a press release.

The world has been learning an awful lot about artificial intelligence lately, thanks to the arrival of eerily human-like chatbots.

Less noticed, but just as important: Researchers are learning a great deal about us – with the help of AI.

AI is helping scientists decode how neurons in our brains communicate and explore the nature of cognition. This new research could one day lead to humans connecting with computers merely by thinking, as opposed to typing or using voice commands. But there is a long way to go before such visions become reality.
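As a rough sketch of what "decoding" means here (synthetic data and a hypothetical tuning model; real brain-computer-interface pipelines are far more involved), the following trains a linear classifier to read an intended command out of simulated neural firing rates.

# Hypothetical sketch of the decoding idea: map (simulated) neural firing
# rates to an intended command with a linear classifier. Data, labels, and
# the tuning model are all synthetic assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_trials, n_neurons = 400, 64

intent = rng.integers(0, 2, size=n_trials)   # 0 = "left", 1 = "right"
tuning = rng.normal(size=n_neurons)          # how strongly each neuron prefers "right"
# Firing rates: baseline + intent-dependent shift + noise.
rates = 5 + np.outer(2 * intent - 1, tuning) + rng.normal(scale=1.0, size=(n_trials, n_neurons))

decoder = LogisticRegression(max_iter=1000).fit(rates[:300], intent[:300])
print("held-out decoding accuracy:", decoder.score(rates[300:], intent[300:]))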

Recent public interest in tools like ChatGPT has raised an old question in the artificial intelligence community: is artificial general intelligence (in this case, AI that performs at human level) achievable? An online preprint this week has added to the hype, suggesting the latest advanced large language model, GPT-4, is at the early stages of artificial general intelligence (AGI) as it’s exhibiting “sparks of intelligence”.