
New exhibition in US depicts a post-apocalyptic world destroyed by AI

Have you ever wondered what life would be like if artificial intelligence became too powerful?

A new exhibition titled the ‘Misalignment Museum’ has opened to the public in San Francisco, the beating heart of the tech revolution. It looks to explore just that, featuring AI artworks meant to help visitors think about the potential dangers of artificial intelligence.

The exhibits in this temporary show mix the disturbing with the comic, and the first display has an AI offer pithy observations to visitors who cross into its line of vision.

The Mathematics of Machine Learning

Check out the Machine Learning Course on Coursera: https://click.linksynergy.com/deeplink?id=vFuLtrCrRW4&mid=40…p_ml_nov18

STEMerch Store: https://stemerch.com/
Support the Channel: https://www.patreon.com/zachstar
PayPal(one time donation): https://www.paypal.me/ZachStarYT

Instagram: https://www.instagram.com/zachstar/
Twitter: https://twitter.com/ImZachStar
Join Facebook Group: https://www.facebook.com/groups/majorprep/

►My Setup:
Space Pictures: https://amzn.to/2CC4Kqj
Camera: https://amzn.to/2RivYu5
Mic: https://amzn.to/2BLBkEj
Tripod: https://amzn.to/2RgMTNL
Equilibrium Tube: https://amzn.to/2SowDrh

►Check out the MajorPrep Amazon Store: https://www.amazon.com/shop/zachstar?tag=lifeboatfound-20

Deep Learning Basics: Introduction and Overview

An introductory lecture for MIT course 6.S094 on the basics of deep learning including a few key ideas, subfields, and the big picture of why neural networks have inspired and energized an entire new generation of researchers. For more lecture videos on deep learning, reinforcement learning (RL), artificial intelligence (AI & AGI), and podcast conversations, visit our website or follow TensorFlow code tutorials on our GitHub repo.
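
For readers who want to jump straight into code, here is a minimal sketch in the spirit of the tf.keras tutorials the description points to: a small fully connected MNIST classifier. It is an illustrative stand-in, not necessarily the exact example shown in the lecture.

```python
# A minimal sketch in the spirit of the tf.keras quickstart: a small fully
# connected classifier trained on MNIST. Illustrative stand-in only, not
# necessarily the exact code shown in the lecture.
import tensorflow as tf

# Load MNIST and scale pixel values to [0, 1].
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

# Flatten the 28x28 image, apply one hidden layer with dropout,
# and read out probabilities for the ten digit classes.
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

model.fit(x_train, y_train, epochs=5)
model.evaluate(x_test, y_test)
```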

INFO:
Website: https://deeplearning.mit.edu
GitHub: https://github.com/lexfridman/mit-deep-learning
Slides: http://bit.ly/deep-learning-basics-slides
Playlist: http://bit.ly/deep-learning-playlist
Blog post: https://link.medium.com/TkE476jw2T

OUTLINE:
0:00 — Introduction.
0:53 — Deep learning in one slide.
4:55 — History of ideas and tools.
9:43 — Simple example in TensorFlow.
11:36 — TensorFlow in one slide.
13:32 — Deep learning is representation learning.
16:02 — Why deep learning (and why not)
22:00 — Challenges for supervised learning.
38:27 — Key low-level concepts.
46:15 — Higher-level methods.
1:06:00 — Toward artificial general intelligence.

CONNECT:
- If you enjoyed this video, please subscribe to this channel.
- Twitter: https://twitter.com/lexfridman
- LinkedIn: https://www.linkedin.com/in/lexfridman
- Facebook: https://www.facebook.com/lexfridman
- Instagram: https://www.instagram.com/lexfridman

But what is a neural network? | Chapter 1, Deep learning

What are the neurons, why are there layers, and what is the math underlying it?
Help fund future projects: https://www.patreon.com/3blue1brown.
Written/interactive form of this series: https://www.3blue1brown.com/topics/neural-networks.

Additional funding for this project provided by Amplify Partners.

Typo correction: At 14 minutes 45 seconds, the last index on the bias vector is n, when it should in fact be k. Thanks for the sharp eyes that caught that!
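
For reference, here is a sketch of the corrected expression from that point in the video, assuming its zero-based indexing with n + 1 input activations and k + 1 output neurons (so the bias vector's last entry is b_k, not b_n):

```latex
a^{(1)} = \sigma\!\left(
\begin{bmatrix}
w_{0,0} & w_{0,1} & \cdots & w_{0,n} \\
w_{1,0} & w_{1,1} & \cdots & w_{1,n} \\
\vdots  & \vdots  & \ddots & \vdots  \\
w_{k,0} & w_{k,1} & \cdots & w_{k,n}
\end{bmatrix}
\begin{bmatrix}
a_0^{(0)} \\ a_1^{(0)} \\ \vdots \\ a_n^{(0)}
\end{bmatrix}
+
\begin{bmatrix}
b_0 \\ b_1 \\ \vdots \\ b_k
\end{bmatrix}
\right)
```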

For those who want to learn more, I highly recommend the book by Michael Nielsen introducing neural networks and deep learning: https://goo.gl/Zmczdy.

There are two neat things about this book. First, it’s available for free, so consider joining me in making a donation Nielsen’s way if you get something out of it. And second, it’s centered around walking through some code and data which you can download yourself, and which covers the same example that I introduce in this video. Yay for active learning!
https://github.com/mnielsen/neural-networks-and-deep-learning.
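
To make the layer rule concrete, here is an illustrative sketch (not Nielsen's code, and with random rather than trained weights) of one forward pass through the 784-16-16-10 network the video discusses:

```python
# Illustrative sketch only (not Nielsen's code): one forward pass through the
# 784-16-16-10 network from the video, using a' = sigmoid(W a + b) per layer.
# Weights are random here; training them is what the rest of the series covers.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
layer_sizes = [784, 16, 16, 10]  # flattened pixels -> two hidden layers -> digits

# One weight matrix and bias vector per layer transition.
weights = [rng.normal(size=(m, n)) for n, m in zip(layer_sizes[:-1], layer_sizes[1:])]
biases = [rng.normal(size=(m, 1)) for m in layer_sizes[1:]]

def feedforward(a):
    """Propagate a column vector of activations through every layer."""
    for W, b in zip(weights, biases):
        a = sigmoid(W @ a + b)
    return a

x = rng.random(size=(784, 1))      # stand-in for a flattened 28x28 image
print(feedforward(x).ravel())      # ten output activations, one per digit
```

Nielsen's repository wraps the same idea in a Network class and adds the stochastic gradient descent and backpropagation needed to actually train the weights.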

I also highly recommend Chris Olah’s blog: http://colah.github.io/

Deep Language Models are getting increasingly better

Deep learning has made significant strides in text generation, translation, and completion in recent years. Algorithms trained to predict words from their surrounding context have been instrumental in achieving these advancements. However, despite access to vast amounts of training data, deep language models still struggle with tasks like long story generation, summarization, coherent dialogue, and information retrieval. These models have been shown to struggle to capture syntactic and semantic properties, and their linguistic understanding remains rather superficial. Predictive coding theory suggests that the human brain makes predictions over multiple timescales and levels of representation across the cortical hierarchy. Although studies have previously shown evidence of speech predictions in the brain, the nature of the predicted representations and their temporal scope remain largely unknown. Recently, researchers analyzed the brain signals of 304 individuals listening to short stories and found that enhancing deep language models with long-range and multi-level predictions improved brain mapping.

The results of this study revealed a hierarchical organization of language predictions in the cortex. These findings align with predictive coding theory, which suggests that the brain makes predictions over multiple levels and timescales of representation. By incorporating these ideas into deep language models, researchers can bridge the gap between human language processing and deep learning algorithms.

The current study evaluated specific hypotheses of predictive coding theory by examining whether the cortical hierarchy predicts several levels of representation, spanning multiple timescales, beyond the short-range, word-level predictions usually learned in deep language algorithms. The activations of modern deep language models were compared with the brain activity of 304 people listening to spoken stories. It was found that the activations of deep language algorithms supplemented with long-range and high-level predictions best describe brain activity.
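
As a rough illustration of the "brain mapping" comparison described above, the sketch below shows the standard linear encoding-model recipe: fit ridge regression from language-model activations to brain responses, then score the fit with a correlation per voxel. The data here are random placeholders and the study's actual features, preprocessing, and metrics differ; this is an assumption-laden sketch of the general method, not the authors' code.

```python
# Assumption-laden sketch of a linear "encoding model": predict brain responses
# from language-model activations with ridge regression and score the fit per
# voxel. Random placeholder data; not the study's actual pipeline.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-ins for real data: per-word model activations (n_words x n_features)
# and per-word brain responses (n_words x n_voxels).
n_words, n_features, n_voxels = 2000, 768, 50
X = rng.normal(size=(n_words, n_features))   # e.g. hidden states of a language model
Y = rng.normal(size=(n_words, n_voxels))     # e.g. fMRI/MEG responses per voxel/sensor

X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.2, random_state=0)

# Ridge regression with cross-validated regularization; handles multiple targets.
encoder = RidgeCV(alphas=np.logspace(-2, 4, 7)).fit(X_train, Y_train)
Y_pred = encoder.predict(X_test)

# "Brain score": correlation between predicted and observed responses, per voxel.
scores = [np.corrcoef(Y_test[:, v], Y_pred[:, v])[0, 1] for v in range(n_voxels)]
print(f"mean brain score: {np.mean(scores):.3f}")
```

In a real analysis, X would come from, e.g., a pretrained model's hidden states aligned to word onsets (optionally augmented with representations of future words to test long-range prediction), and Y from fMRI or MEG recordings of the listeners.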

The Next Frontier of Robotics: The Race to Develop a Humanoid General Purpose Robot!

There is a competition among technology companies to develop a humanoid robot that can perform various tasks, and one particular company, “Figure,” is at the forefront of this race.

A humanoid general-purpose robot is a robot that can mimic human actions and interact with the environment in a human-like way. This type of robot has the potential to perform various tasks, such as cooking, cleaning, and assisting people with disabilities.

The race to develop such robots is driven by the potential to revolutionize various industries, including manufacturing, healthcare, and retail. A successful humanoid robot could replace human workers in hazardous or repetitive tasks, increase productivity, and reduce costs.

The fact that “Figure” is leading this race suggests that they have made significant progress in developing a humanoid general-purpose robot. It is possible that they have developed new technology or software that gives them an advantage over their competitors.

Overall, this implies that there is intense competition among tech companies to develop the next generation of robots, and that “Figure” is one of the frontrunners in this race.

https://www.figure.ai/

These Are the Jobs Most Vulnerable to AI, Researchers Say

Wondering if artificial intelligence will be taking your job anytime soon? We’re sure we speak for a lot of folks when we say: same.

Considering that AI is literally designed to model human capabilities and thus automate human tasks, it’s a fair question — and one that a group of professors from New York University (NYU), Princeton, and the University of Pennsylvania (UPenn) may have just helped to shed a little bit of light on in a new paper, aptly titled “How Will Language Modelers like ChatGPT Affect Occupations and Industries?”

Though the paper has yet to be peer-reviewed, the results are fascinating, not to mention ominous — especially, of course, for the folks most at risk.
