
Of the seven patterns of AI that represent the ways in which AI is being implemented, one of the most common is the recognition pattern. The main idea of the recognition pattern of AI is that we’re using machine learning and cognitive technology to help identify and categorize unstructured data into specific classifications. This unstructured data could be images, video, text, or even quantitative data. The power of this pattern is that we’re enabling machines to do the thing that our brains seem to do so easily: identify what we’re perceiving in the real world around us.

The recognition pattern is notable in that it was primarily the attempts to solve image recognition challenges that brought about heightened interest in deep learning approaches to AI, and helped to kick off this latest wave of AI investment and interest. The recognition pattern, however, is broader than just image recognition. In fact, we can use machine learning to recognize and understand images, sound, handwriting, items, faces, and gestures. The objective of this pattern is to have machines recognize and understand unstructured data. This pattern is such a huge component of AI solutions because of its wide variety of applications.

The difference between structured and unstructured data is that structured data is already labeled and easy to interpret, whereas unstructured data is where most organizations struggle. Up to 90% of an organization’s data is unstructured. It becomes necessary for businesses to be able to understand and interpret this data, and that’s where AI steps in. Whereas we can use existing query technology and informatics systems to gather analytic value from structured data, it is almost impossible to use those approaches with unstructured data. This is what makes machine learning such a potent tool when applied to these classes of problems.
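The core of the recognition pattern is mapping unstructured input onto known categories. The sketch below is a deliberately minimal illustration of that idea, assuming toy 2-D feature vectors stand in for features extracted from an image or document; real systems use deep neural networks, and the `CENTROIDS` labels and values here are invented for the example.

```python
import math

# Toy illustration of the recognition pattern: classify an unstructured
# input (represented as a feature vector) into a known category.
# Real systems learn features with deep networks; this nearest-centroid
# sketch only shows the "recognize and label" idea.

CENTROIDS = {
    "cat": (1.0, 1.0),  # illustrative centroid for the "cat" class
    "dog": (4.0, 4.0),  # illustrative centroid for the "dog" class
}

def classify(features):
    """Return the label whose centroid is closest to the feature vector."""
    return min(CENTROIDS, key=lambda label: math.dist(features, CENTROIDS[label]))

print(classify((1.2, 0.9)))  # falls nearest the "cat" centroid
```

The same shape (features in, label out) applies whether the input started life as an image, an audio clip, or a scanned handwritten note.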

Full episode with Ilya Sutskever (May 2020): https://www.youtube.com/watch?v=13CZPWmke6A
Hey all! I’ve recently made a video on how humans could one day become AI, and how this could be good both for individuals and for humanity as a whole. If you are interested, please give it a watch!


AI is often the ultimate villain of science fiction: it grows smarter than humanity and turns on us. But what if people could one day become AI themselves, and use that to their advantage? And if so, would people actually choose to become AI? Here’s why I believe this may be possible, and why people may voluntarily choose to become AI, blurring the line between computer science and biology.


As impossible as it seems, it won’t be long before artificial intelligence is writing and creating films. As a lead-up to this eventuality, here is a list of just some of the creative endeavors that AI has already accomplished:

• It is now a simple matter for AI to create new paintings after being shown a number of examples in a particular genre. For example, one AI computer was given hundreds of samples of 17th-century “Old Master” style paintings and was asked to create its own paintings. One of the paintings it came up with is titled “Portrait of Edmond De Belamy” and sold for a whopping $432,500 at a Christie’s auction.

• Another AI computer was fed hundreds of modern pop songs and came up with its own song named “Blue Jeans and Bloody Tears” complete with lyrics.

In 2033, people can be “uploaded” into virtual-reality hotels run by six tech firms. Cash-strapped Nora lives in Brooklyn and works customer service for the luxurious “Lakeview” digital afterlife. When L.A. party-boy/coder Nathan’s self-driving car crashes, his high-maintenance girlfriend uploads him permanently into Nora’s VR world. Upload is created by Greg Daniels (The Office).

Wireless charging is already a thing (in smartphones, for example), but scientists are working on the next level of this technology that could deliver power over greater distances and to moving objects, such as cars.

Imagine cruising down the road while your electric vehicle gets charged, or having a robot that doesn’t lose battery life while it moves around a factory floor. That’s the sort of potential behind the newly developed technology from a team at Stanford University.

If you’re a long-time ScienceAlert reader, you may remember the same researchers first debuted the technology back in 2017. Now it’s been made more efficient, more powerful, and more practical – so it can hopefully soon be moved out of the lab.
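Systems like this rely on resonant inductive coupling: a transmitter and receiver coil are tuned so their LC circuits share a resonant frequency, f = 1 / (2π √(LC)). The sketch below just evaluates that textbook formula; the component values are illustrative and are not taken from the Stanford system.

```python
import math

# Resonant frequency of an LC circuit: f = 1 / (2 * pi * sqrt(L * C)).
# Tuning both coils to the same f is what makes resonant wireless
# power transfer efficient. Component values below are illustrative.

def resonant_frequency_hz(inductance_h, capacitance_f):
    """Resonant frequency in Hz for inductance (H) and capacitance (F)."""
    return 1.0 / (2 * math.pi * math.sqrt(inductance_h * capacitance_f))

# A 24 uH coil with a 100 nF capacitor resonates around 103 kHz,
# within the 87-205 kHz band used by common Qi phone chargers.
f = resonant_frequency_hz(24e-6, 100e-9)
print(round(f / 1000), "kHz")
```

The engineering challenge the Stanford team tackled is keeping that tuned, efficient coupling as the distance and alignment between coils change, which is exactly what a moving car or robot disturbs.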

Circa 2019


MIT’s Mini Cheetah robots are small quadrupedal robots capable of running, jumping, walking, and flipping.

In a recently published video, the tiny bots can be seen roaming, hopping, and marching around a field and playing with a soccer ball.

They’re not consumer products, but MIT hopes that the Mini Cheetah’s durable and modular design will make it an ideal tool for researchers.

Inspired by the biomechanics of cheetahs, researchers have developed a new type of soft robot that is capable of moving more quickly on solid surfaces or in the water than previous generations of soft robots. The new soft robotics are also capable of grabbing objects delicately—or with sufficient strength to lift heavy objects.

“Cheetahs are the fastest creatures on land, and they derive their speed and power from the flexing of their spines,” says Jie Yin, an assistant professor of mechanical and aerospace engineering at North Carolina State University and corresponding author of a paper on the new soft robots.

“We were inspired by the cheetah to create a type of soft robot that has a spring-powered, ‘bistable’ spine, meaning that the robot has two stable states,” Yin says. “We can switch between these stable states rapidly by pumping air into channels that line the soft, silicone robot. Switching between the two states releases a significant amount of energy, allowing the robot to quickly exert force against the ground. This enables the robot to gallop across the surface, meaning that its feet leave the ground.”
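Bistability can be pictured with a standard double-well energy landscape, a generic textbook model rather than anything from Yin’s paper: two minima are the stable states, and snapping over the barrier between them releases stored elastic energy, which is what propels the robot.

```python
# Toy double-well potential E(x) = x^4 - 2x^2 as a stand-in for a
# bistable spine. Its two minima (stable states) sit at x = -1 and
# x = +1 with energy -1; the barrier at x = 0 has energy 0.
# Units and shape are illustrative, not measured from the robot.

def energy(x):
    """Elastic energy of the toy bistable element at configuration x."""
    return x**4 - 2 * x**2

# Energy released when the spine snaps from the barrier top down
# into either stable state:
released = energy(0.0) - energy(1.0)
print(released)  # 1.0 unit of stored energy per snap
```

The air channels in the real robot do the job of pushing the spine over that barrier, after which the stored spring energy, not the air pressure itself, delivers the fast push against the ground.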