
China has a new plan for judging the safety of generative AI—and it’s packed with details

A new proposal spells out the very specific ways companies should evaluate AI security and enforce censorship in AI models.

Ever since the Chinese government passed a law on generative AI back in July, I’ve been wondering how exactly China’s censorship machine would adapt for the AI era.

Last week we got some clarity about what all this may look like in practice.

How Meta and AI companies recruited striking actors to train AI

Hollywood actors are on strike over concerns about the use of AI, but Meta and a company called Realeyes hired them, for as little as $300, to make avatars appear more human.

One evening in early September, T, a 28-year-old actor who asked to be identified by his first initial, took his seat in a rented Hollywood studio space in front of three cameras, a director, and a producer for a somewhat unusual gig.

The two-hour shoot produced footage that was not meant to be viewed by the public—at least, not a human public.

Scientists craft energy-efficient AI chip inspired by the brain

Dubbed NorthPole, it excels in terms of performance, energy, and area efficiency.

Artificial intelligence is an energy vampire that demands substantial computational power. Running AI applications like behavior monitoring, facial recognition software, or live object tracking in real time requires a computing system that delivers faster and more accurate inferences. For this to happen, a large AI model must work closely with the source of data.

This problem of moving large amounts of data between compute and memory dates back to one of the earliest electronic computers, the Electronic Discrete Variable Automatic Computer (EDVAC). The compute and memory of the system were based on different technologies and therefore had to operate separately.
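The cost of that separation can be seen with a back-of-envelope cycle count. The sketch below is a toy model, not a simulation of any real chip: the cycle costs are made-up illustrative constants, chosen only to show why shuttling operands across a memory bus can dominate the arithmetic itself.

```python
# Toy model of the compute/memory bottleneck. Each operand fetched across the
# bus costs BUS_CYCLES; the multiply-accumulate itself costs ALU_CYCLES.
# Both numbers are illustrative assumptions, not measured hardware figures.
BUS_CYCLES = 100   # assumed cost of moving one operand between memory and CPU
ALU_CYCLES = 1     # assumed cost of one multiply-accumulate

def von_neumann_cycles(n):
    # Two operand fetches per multiply-accumulate cross the memory bus.
    return n * (2 * BUS_CYCLES + ALU_CYCLES)

def near_memory_cycles(n):
    # Compute sits next to the data, so operand fetches are (nearly) free.
    return n * ALU_CYCLES

n = 1_000_000
ratio = von_neumann_cycles(n) / near_memory_cycles(n)
print(ratio)  # 201.0: under these assumptions, data movement dominates 200:1
```

Under these assumed costs, colocating compute with memory (the idea behind chips like NorthPole) removes roughly 200 of every 201 cycles; the exact ratio depends entirely on the assumed bus cost.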

NVIDIA and Foxconn partner up to build ‘AI factories’ for future EVs

This also means faster robotics and self-driving cars.

Foxconn, the largest producer of iPhones, is joining hands with the biggest chipmaker in the world, NVIDIA, to develop artificial intelligence factories that will power a range of applications like self-driving cars, generative AI tools, and robotic systems, according to a press release.

Dubbed AI factories, they are data centers that will power a wide range of applications, including the digitalization of manufacturing and inspection workflows, the development of AI-powered electric vehicle and robotics platforms, and language-based generative AI services.

Oxford researchers’ photonic-electronic AI chip puts apps on steroids

The team estimates that their hardware can outperform the best electronic processors by a factor of 100 in terms of energy efficiency and compute density.

A team of scientists from Oxford University and their partners from Germany and the UK have developed a new kind of AI hardware that uses light to process three-dimensional (3D) data. Based on integrated photonic-electronic chips, the hardware can perform complex calculations in parallel using different wavelengths and radio frequencies of light. The team claims their hardware can boost the data processing speed and efficiency for AI tasks by several orders of magnitude.
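The wavelength trick can be sketched in a few lines. This is a hypothetical toy model, not the Oxford team's design: it just shows the principle that separate wavelength channels sharing one physical path can each carry an independent dot product, so N channels yield N results in a single pass. The channel labels and data are invented for illustration.

```python
# Toy sketch of wavelength-parallel computing: each wavelength channel
# carries its own input vector through the same physical "waveguide",
# all multiplied against a shared weight bank. Data here is hypothetical.
weights = [0.5, -1.0, 2.0]

# One input vector per wavelength channel (illustrative labels and values).
channels = {
    "1550nm": [1.0, 2.0, 3.0],
    "1551nm": [0.0, 1.0, 0.0],
    "1552nm": [2.0, 2.0, 2.0],
}

def dot(w, x):
    # One multiply-accumulate per element of the input vector.
    return sum(wi * xi for wi, xi in zip(w, x))

# In hardware the channels propagate simultaneously; here we just loop.
results = {lam: dot(weights, x) for lam, x in channels.items()}
print(results)
```

In an electronic chip these three dot products would be serialized or need three copies of the multiplier array; in the photonic scheme they ride through the same hardware at once, which is where the claimed gains in compute density come from.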


AI computing and processing power

The research, published today in the journal Nature Photonics, addresses the challenge of meeting modern AI applications' increasing demand for computing power. Conventional computer chips, which rely on electronics, struggle to keep up with the pace of AI innovation, whose demand for processing power doubles roughly every 3.5 months. The team says that using light instead of electronics offers a new way of computing that can overcome this bottleneck.
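To put the 3.5-month doubling figure in perspective, the implied annual growth works out to about an order of magnitude per year:

```python
# If demand doubles every 3.5 months, the annual growth factor is 2**(12/3.5).
months_per_doubling = 3.5
annual_factor = 2 ** (12 / months_per_doubling)
print(round(annual_factor, 1))  # roughly 10.8x per year
```

That is, a chip family would need to get about 10.8 times faster every year just to track demand, which is far beyond what transistor scaling alone delivers.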

2075: When Superintelligent AI Takes Over

Are you worried about the future of AI? In this video, we’ll look at a sci-fi scenario where a superintelligent AI has taken over the planet in 2075 and what that might mean for our future.

Ultimately, we need to be prepared for the future, and that means being aware of superintelligent AI and how this future might unfold. So check out this video and leave your comments below.

https://www.grayscott.com
Twitter: https://twitter.com/grayscott
Facebook: https://www.facebook.com/futuristgrayscott/

Watch my other videos:
The Simulated Future: https://www.youtube.com/watch?v=pX9FY…
Digital Twin: https://www.youtube.com/watch?v=RjJzC…
Conscious Machines: https://www.youtube.com/watch?v=qtq1G…
Transhumanism: https://www.youtube.com/watch?v=D8lE–…
Dream Recording: https://www.youtube.com/watch?v=33VoQ…
Quantified Self: https://www.youtube.com/watch?v=pMHDo…
The future is a portal inward: https://www.youtube.com/watch?v=GpfwI…

Gray Scott is a futurist, philosopher, and artist. Gray is frequently interviewed by the Discovery Channel, History Channel, Forbes, CBS News, Vanity Fair, VICE MOTHERBOARD, Fast Company, The Washington Post, and SingularityHub.

IBM’s NorthPole chip runs AI-based image recognition 22 times faster than current chips

A large team of computer scientists and engineers at IBM Research has developed a dedicated computer chip that is able to run AI-based image recognition apps 22 times as fast as chips that are currently on the market.

In their paper published in the journal Science, the group describes the ideas that went into developing the chip, how it works, and how well it performed when tested. Subramanian Iyer and Vwani Roychowdhury, both at the University of California, Los Angeles, have published a Perspective piece in the same journal issue, giving an in-depth analysis of the work by the team in California.

As AI-powered applications become mainstream tools used by professionals and amateurs alike, scientists continue working to make them better. One way to do that, Iyer and Roychowdhury note, is to move toward an "edge" computing system in which the data sits physically closer to the AI applications that are using it.

Scientists Pump Up Lab-Grown Muscles for Robots With a New Magnetic Workout

Despite being soft, squishy, and easily damaged, our muscles can perform incredible feats: they adapt to heavy loads, sense the outside world, and rebuild after injury. A main reason for these superpowers is alignment, that is, how muscle cells orient to form stretchy fibers. Unfortunately, these precise cell arrangements are also why artificial muscles are difficult to recreate in the lab.

Now, a new study suggests that the solution to growing better lab-grown muscles may be magnets. Led by Dr. Ritu Raman at the Massachusetts Institute of Technology (MIT), scientists developed a magnetic hydrogel “sandwich” that controls muscle cell orientation in a lab dish. By changing the position of the magnets, the muscle cells aligned into fibers that contracted in synchrony as if they were inside a body.

The whole endeavor sounds rather Frankenstein. But lab-grown tissues could one day be grafted into people with heavily damaged muscles—either from inherited diseases or traumatic injuries—and restore their ability to navigate the world freely. Synthetic muscles could also coat robots, providing them with human-like senses, flexible motor control, and the ability to heal after inevitable scratches and scrapes.