Get ready for an age when there is only one pilot in the cockpit. Because soon, there will be none.
The year began with the news of an AI system flying an F-16 fighter jet for over 17 hours. That a computer system can fly a tactical aircraft without any human intervention speaks volumes about how far the technology has come.
Data is the new soil, and in this fertile new ground, MIT researchers are planting more than just pixels. By using synthetic images to train machine learning models, a team of scientists recently surpassed results obtained from traditional “real-image” training methods.
StableRep: The New Approach
At the core of the approach is a system called StableRep, which doesn’t just use any synthetic images; it generates them through ultra-popular text-to-image models like Stable Diffusion. It’s like creating worlds with words.
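The MIT paper behind StableRep pairs those generated images with a multi-positive contrastive objective: several images synthesized from the same caption are all treated as positives for one another, rather than a single augmented view. As a rough illustration of that idea (not StableRep's actual implementation, which operates on learned embeddings at scale), here is a toy multi-positive contrastive loss over precomputed similarity scores; the function name, inputs, and temperature value are all illustrative assumptions:

```python
import math

def multi_positive_contrastive_loss(sims, positives, temperature=0.1):
    """Toy multi-positive contrastive loss (illustrative sketch only).

    sims: similarity scores between one anchor embedding and every
          candidate embedding in the batch.
    positives: indices of candidates generated from the *same caption*
          as the anchor -- the key StableRep-style twist is that there
          can be several of these, not just one.
    """
    logits = [s / temperature for s in sims]
    # log-sum-exp over all candidates, shifted by the max for stability
    m = max(logits)
    log_denominator = m + math.log(sum(math.exp(l - m) for l in logits))
    # average negative log-likelihood across all positives
    return -sum(logits[i] - log_denominator for i in positives) / len(positives)
```

With this toy loss, an anchor whose highest-similarity candidates share its caption scores lower (better) than one whose positives are the low-similarity candidates, which is the behavior the objective rewards during training.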
As the dust settles on OpenAI’s latest drama, a letter has surfaced from several staff researchers citing concerns about an AI superintelligence model under development that could potentially pose a threat to humanity, according to people familiar with the matter. The previously undisclosed letter is understood to be the real reason Sam Altman was fired from the company.
The model, known internally as Project Q*, could represent a major breakthrough in the company’s pursuit of artificial general intelligence (AGI) – a highly autonomous form of AI capable of cumulative learning and of outperforming humans at most tasks. And you were worried about ChatGPT taking all our jobs?
With Sam Altman now firmly back at the company and a new OpenAI board in place, here are all of the details of Project Q*, as well as the potential implications of AGI in the bigger picture.
On Tuesday, Stability AI released Stable Video Diffusion. The first implementation for private users is now available.
The makers of the Stable Diffusion tool “ComfyUI” have added support for Stability AI’s Stable Video Diffusion models in a new update. ComfyUI is a graphical user interface for Stable Diffusion, using a graph/node interface that allows users to build complex workflows. It is an alternative to other interfaces such as AUTOMATIC1111.
According to the developers, the update can be used to create videos at 1024 × 576 resolution with a length of 25 frames on the seven-year-old Nvidia GTX 1080 with 8 gigabytes of VRAM. AMD users can also use the generative video AI with ComfyUI on an AMD 6800 XT running ROCm on Linux. It takes about three minutes to create a video.
The Australian Space Agency has revealed its shortlist of names for the country’s first lunar rover — and you can help choose the winner by casting your vote.
In partnership with NASA, the agency’s Australian-made, semi-autonomous rover is slated to launch to the moon as part of a future Artemis mission, as early as 2026. The rover will be able to pick up lunar rocks and dust, then bring the specimens back to a moon lander operated by NASA.
Why it matters: While AI algorithms are seemingly everywhere, processing on the most popular platforms requires powerful server GPUs to provide customers with generative services. Arm is introducing a new dedicated chip design, set to provide AI acceleration even in the most affordable IoT devices starting next year.
The Arm Cortex-M52 is the smallest and most cost-efficient processor designed for AI acceleration applications, according to the company. This latest design from the UK-based fabless firm promises to deliver “enhanced” AI capabilities to Internet of Things (IoT) devices, as Arm states, without the need for a separate computing unit.
Paul Williamson, Arm’s SVP and general manager for the company’s IoT business, emphasized the need to bring machine learning optimized processing to “even the smallest and lowest-power” endpoint devices to fully realize the potential of AI in IoT. Despite AI’s ubiquity, Williamson noted, harnessing the “intelligence” from the vast amounts of data flowing through digital devices requires IoT appliances that are smarter and more capable.
Vernor Vinge is one of the foremost thinkers about the future of artificial intelligence and the potential for a technological singularity to occur in the coming decades. He’s a science fiction writer who has had a profound impact on a wide range of authors, including William Gibson, Charles Stross, Neal Stephenson, and Dan Simmons.
Many of Vinge’s works are brilliant. Among them are some of my all-time favorites in the SF genre. And he’s been recognized with numerous awards, including seven Hugo nominations and five wins, despite writing only eight novels and 24 short stories and novellas over a span of five decades.
In this video, I discuss his early works from the 1960s to the 1980s. His later works from the 1980s onward are the subject of my next video.
0:42 What is a technological singularity?
4:51 A.I. in science fiction history.
5:38 Should we be afraid?
6:59 Who is Vernor Vinge?
8:50 Short stories.
11:26 Tatja Grimm’s World (1969, 1987)
13:47 The Witling (1976)
17:40 True Names (1981)