
Meta creates new, ‘inclusive’ AI training dataset so bots can be fair

It could be a solid step against inaccurate, racist, and sexist responses from the likes of OpenAI’s ChatGPT and Google’s Bard.

Meta hopes to assist AI researchers in making their tools and procedures more universally inclusive with the launch of Casual Conversations v2, according to a March 9 statement from the firm.

The vast new dataset, which includes face-to-face video clips from a broad spectrum of human participants across varied geographic, cultural, racial, and physical demographics, serves as an upgrade to its 2021 AI audio-visual training dataset.

Making Deepfakes Gets Cheaper and Easier Thanks to A.I.

It wouldn’t be completely out of character for Joe Rogan, the comedian turned podcaster, to endorse a “libido-boosting” coffee brand for men.

But when a video circulating on TikTok recently showed Mr. Rogan and his guest, Andrew Huberman, hawking the coffee, some eagle-eyed viewers were shocked — including Dr. Huberman.

“Yep that’s fake,” Dr. Huberman wrote on Twitter after seeing the ad, in which he appears to praise the coffee’s testosterone-boosting potential, even though he never did.

The Limits of Computing: Why Even in the Age of AI, Some Problems Are Just Too Difficult

Empowered by artificial intelligence technologies, computers today can engage in convincing conversations with people, compose songs, paint paintings, play chess and go, and diagnose diseases, to name just a few examples of their technological prowess.

These successes could be taken to indicate that computation has no limits. To see if that’s the case, it’s important to understand what makes a computer powerful.

There are two aspects to a computer’s power: the number of operations its hardware can execute per second and the efficiency of the algorithms it runs. The hardware speed is limited by the laws of physics. Algorithms—basically sets of instructions—are written by humans and translated into a sequence of operations that computer hardware can execute. Even if a computer’s speed could reach the physical limit, computational hurdles remain due to the limits of algorithms.
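As an illustration of the second aspect (a toy sketch, not from the article), consider two algorithms for the same problem whose operation counts differ exponentially; no hardware speedup can close such a gap:

```python
# Two algorithms for computing the same Fibonacci number, with a shared
# operation counter. The naive recursion performs exponentially many calls;
# the iterative version takes linearly many steps. Algorithmic efficiency,
# not raw hardware speed, dominates the difference.

def fib_naive(n, counter):
    counter[0] += 1  # count each recursive call as one "operation"
    if n < 2:
        return n
    return fib_naive(n - 1, counter) + fib_naive(n - 2, counter)

def fib_iter(n, counter):
    a, b = 0, 1
    for _ in range(n):
        counter[0] += 1  # one operation per loop step
        a, b = b, a + b
    return a

naive_ops = [0]
iter_ops = [0]
assert fib_naive(20, naive_ops) == fib_iter(20, iter_ops) == 6765
print(naive_ops[0], iter_ops[0])  # 21891 calls vs. 20 loop steps
```

The gap widens without bound: each +1 to n roughly multiplies the naive call count by the golden ratio, while the iterative count grows by one.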

New exhibition in US depicts a post-apocalyptic world destroyed by AI

Have you ever wondered what life would be like if artificial intelligence became too powerful?

A new exhibition titled the ‘Misalignment Museum’ has opened to the public in San Francisco — the beating heart of the tech revolution. It looks to explore just that, featuring AI artworks meant to help visitors think about the potential dangers of artificial intelligence.

The exhibits in this temporary show mix the disturbing with the comic; the first display has an AI offer pithy observations to visitors who cross into its line of vision.

The Mathematics of Machine Learning

Check out the Machine Learning Course on Coursera: https://click.linksynergy.com/deeplink?id=vFuLtrCrRW4&mid=40…p_ml_nov18

STEMerch Store: https://stemerch.com/
Support the Channel: https://www.patreon.com/zachstar
PayPal(one time donation): https://www.paypal.me/ZachStarYT

Instagram: https://www.instagram.com/zachstar/
Twitter: https://twitter.com/ImZachStar
Join Facebook Group: https://www.facebook.com/groups/majorprep/

►My Setup:
Space Pictures: https://amzn.to/2CC4Kqj
Camera: https://amzn.to/2RivYu5
Mic: https://amzn.to/2BLBkEj
Tripod: https://amzn.to/2RgMTNL
Equilibrium Tube: https://amzn.to/2SowDrh

►Check out the MajorPrep Amazon Store: https://www.amazon.com/shop/zachstar?tag=lifeboatfound-20

Deep Learning Basics: Introduction and Overview

An introductory lecture for MIT course 6.S094 on the basics of deep learning including a few key ideas, subfields, and the big picture of why neural networks have inspired and energized an entire new generation of researchers. For more lecture videos on deep learning, reinforcement learning (RL), artificial intelligence (AI & AGI), and podcast conversations, visit our website or follow TensorFlow code tutorials on our GitHub repo.

INFO:
Website: https://deeplearning.mit.edu
GitHub: https://github.com/lexfridman/mit-deep-learning
Slides: http://bit.ly/deep-learning-basics-slides
Playlist: http://bit.ly/deep-learning-playlist
Blog post: https://link.medium.com/TkE476jw2T

OUTLINE:
0:00 — Introduction
0:53 — Deep learning in one slide
4:55 — History of ideas and tools
9:43 — Simple example in TensorFlow
11:36 — TensorFlow in one slide
13:32 — Deep learning is representation learning
16:02 — Why deep learning (and why not)
22:00 — Challenges for supervised learning
38:27 — Key low-level concepts
46:15 — Higher-level methods
1:06:00 — Toward artificial general intelligence

CONNECT:
- If you enjoyed this video, please subscribe to this channel.
- Twitter: https://twitter.com/lexfridman
- LinkedIn: https://www.linkedin.com/in/lexfridman
- Facebook: https://www.facebook.com/lexfridman
- Instagram: https://www.instagram.com/lexfridman

But what is a neural network? | Chapter 1, Deep learning

What are the neurons, why are there layers, and what is the math underlying it?
Help fund future projects: https://www.patreon.com/3blue1brown
Written/interactive form of this series: https://www.3blue1brown.com/topics/neural-networks

Additional funding for this project provided by Amplify Partners.

Typo correction: At 14 minutes 45 seconds, the last index on the bias vector is shown as n when it should in fact be k. Thanks to the sharp eyes that caught that!

For those who want to learn more, I highly recommend the book by Michael Nielsen introducing neural networks and deep learning: https://goo.gl/Zmczdy

There are two neat things about this book. First, it’s available for free, so consider joining me in making a donation Nielsen’s way if you get something out of it. And second, it’s centered around walking through some code and data which you can download yourself, and which covers the same example that I introduce in this video. Yay for active learning!
https://github.com/mnielsen/neural-networks-and-deep-learning

I also highly recommend Chris Olah’s blog: http://colah.github.io/

Deep Language Models are getting increasingly better

Deep learning has made significant strides in text generation, translation, and completion in recent years. Algorithms trained to predict words from their surrounding context have been instrumental in achieving these advancements. However, despite access to vast amounts of training data, deep language models still struggle with tasks like long story generation, summarization, coherent dialogue, and information retrieval. These models have been shown to struggle to capture syntactic and semantic properties, and their linguistic understanding remains superficial. Predictive coding theory suggests that the human brain makes predictions over multiple timescales and levels of representation across the cortical hierarchy. Although studies have previously shown evidence of speech predictions in the brain, the nature of the predicted representations and their temporal scope remain largely unknown. Recently, researchers analyzed the brain signals of 304 individuals listening to short stories and found that enhancing deep language models with long-range and multi-level predictions improved brain mapping.
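The word-prediction objective described above can be illustrated with a deliberately tiny sketch: a bigram counter rather than a deep model, trained on an invented corpus string. Real language models learn far richer representations, but the objective is the same in spirit:

```python
from collections import Counter, defaultdict

# Predicting a word from its context, reduced to the simplest possible form:
# count which word follows which, then predict the most frequent successor.

corpus = "the brain makes predictions the brain maps language the model predicts words".split()

following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    # most frequent successor of `word` in the corpus
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "brain" follows "the" most often here
```

The study’s point is that this next-word horizon is too short: the brain appears to predict over longer ranges and higher levels of representation than the adjacent word.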

The results of this study revealed a hierarchical organization of language predictions in the cortex. These findings align with predictive coding theory, which suggests that the brain makes predictions over multiple levels and timescales of representation. By incorporating these ideas into deep language models, researchers can bridge the gap between human language processing and deep learning algorithms.

The current study evaluated specific hypotheses of predictive coding theory by examining whether the cortical hierarchy predicts several levels of representation, spanning multiple timescales, beyond the near-term, word-level predictions usually learned in deep language algorithms. Modern deep language models were compared against the brain activity of 304 people listening to spoken stories. The activations of deep language algorithms supplemented with long-range and high-level predictions were found to describe brain activity best.

The Next Frontier of Robotics: The Race to Develop a Humanoid General Purpose Robot!

There is a competition among technology companies to develop a humanoid robot that can perform various tasks, and one particular company, “Figure,” is at the forefront of this race.

A humanoid general-purpose robot is a robot that can mimic human actions and interact with the environment in a human-like way. This type of robot has the potential to perform various tasks, such as cooking, cleaning, and assisting people with disabilities.

The race to develop such robots is driven by the potential to revolutionize various industries, including manufacturing, healthcare, and retail. A successful humanoid robot could replace human workers in hazardous or repetitive tasks, increase productivity, and reduce costs.

The fact that “Figure” is leading this race suggests that they have made significant progress in developing a humanoid general-purpose robot. It is possible that they have developed new technology or software that gives them an advantage over their competitors.

Overall, this points to intense competition among tech companies to develop the next generation of robots, with “Figure” among the frontrunners in the race.

https://www.figure.ai/