
I found this on NewsBreak:


Australians are nervous about AI. Efforts are underway to put their minds at ease: advisory committees, consultations and regulations. But these actions have tended to be reactive instead of proactive. We need to imagine potential scenarios before they happen.

Of course, we already do this – in literature.

There is, in fact, more than 100 years’ worth of Australian literature about AI and robotics. Nearly 2,000 such works are listed in the AustLit database, a bibliography of Australian literature that includes novels, screenplays, poetry and other kinds of writing.

Professor Jeongho Kwak’s team from the Department of Electrical Engineering and Computer Science at DGIST has developed a learning model and resource optimization technology that combines accuracy and efficiency for 6G vision services. The technology is expected to help address the high computing power and complex learning models that 6G vision services require.

6G mobile vision services are associated with innovative technologies such as augmented reality (AR) and autonomous driving, which are receiving significant attention in modern society. These services capture video and images quickly and use deep learning-based models to understand their content efficiently.

However, this requires high-performance processors (GPUs) and accurate learning models. Previous technologies treated learning models and computing/networking resources as separate entities, failing to jointly optimize performance and mobile device resource utilization.
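Not the DGIST method itself, but a minimal sketch of what treating the model and the resources jointly can mean in practice: choose a model variant and a GPU share together so that accuracy is maximized under a latency budget, rather than fixing one and tuning the other afterwards. All model names, accuracies, latencies and the budget below are invented placeholders.

```python
# Illustrative sketch only: jointly pick a vision model and a GPU share
# so that accuracy is maximized while a latency budget is still met.
# All numbers and names are hypothetical, not from the DGIST work.

MODELS = {                     # name: (accuracy, base latency in ms at full GPU)
    "tiny":  (0.78, 12.0),
    "base":  (0.85, 28.0),
    "large": (0.91, 55.0),
}
GPU_SHARES = [0.25, 0.5, 1.0]  # fraction of the device/edge GPU assigned
LATENCY_BUDGET_MS = 40.0

def latency(base_ms: float, gpu_share: float) -> float:
    """Crude model: latency grows as the assigned GPU share shrinks."""
    return base_ms / gpu_share

def best_joint_choice():
    """Search model and GPU share together instead of separately."""
    best = None
    for name, (acc, base_ms) in MODELS.items():
        for share in GPU_SHARES:
            if latency(base_ms, share) <= LATENCY_BUDGET_MS:
                if best is None or acc > best[0]:
                    best = (acc, name, share)
    return best

if __name__ == "__main__":
    print(best_joint_choice())  # e.g. (0.85, 'base', 1.0) with this toy data
```

A separate, two-step search (first pick the most accurate model, then try to fit it into the GPU budget) would pick the "large" model and then fail the latency constraint, which is the kind of mismatch joint optimization is meant to avoid.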

Self-driving cars do not get drunk, fall asleep, or get distracted by text messages, and experts and manufacturers agree they could be the answer to slashing the road toll.

It’s one of the reasons why autonomous vehicles are in the spotlight again, with Tesla promising to unveil a robotaxi in August and Hyundai showing off the results of its driverless car trial in Las Vegas.

But debate is raging in the industry over whether the technology is or will ever be ready to drive in busy, unpredictable environments without any human oversight.

Artificial intelligence (AI) systems, such as the chatbot ChatGPT, have become so advanced that they now very nearly match or exceed human performance in tasks including reading comprehension, image classification and competition-level mathematics, according to a new report (see ‘Speedy advances’). Rapid progress in the development of these systems also means that many common benchmarks and tests for assessing them are quickly becoming obsolete.

These are just a few of the top-line findings from the Artificial Intelligence Index Report 2024, which was published on 15 April by the Institute for Human-Centered Artificial Intelligence at Stanford University in California. The report charts the meteoric progress in machine-learning systems over the past decade.

In particular, the report says, new ways of assessing AI systems are increasingly necessary, for example, evaluating their performance on complex tasks such as abstraction and reasoning. “A decade ago, benchmarks would serve the community for 5–10 years” whereas now they often become irrelevant in just a few years, says Nestor Maslej, a social scientist at Stanford and editor-in-chief of the AI Index. “The pace of gain has been startlingly rapid.”

The CEO of Europe’s brightest new AI firm is calling bull on the quest for so-called “artificial general intelligence” (AGI), which he says is akin to the desire to create God.

In an interview with the New York Times, Arthur Mensch, the CEO of the AI firm Mistral, sounded off on his fellow AI executives’ “very religious” obsession with building AGI.

“The whole AGI rhetoric is about creating God,” the Mistral CEO told the newspaper. “I don’t believe in God. I’m a strong atheist. So I don’t believe in AGI.”

With standard deep learning architectures, errors tend to accumulate across layers and over time. Taichi’s design nips the problems that come from sequential processing in the bud: when faced with a task, it distributes the workload across multiple independent clusters, making it easier to tackle larger problems with minimal error.
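A rough numerical way to see why this helps (a toy sketch, not a model of the photonic hardware): in a deep sequential chain, every stage’s error feeds the next stage, while independent parallel blocks each incur only one stage of error before their outputs are combined. The depth, block count and noise level below are arbitrary.

```python
# Toy sketch: compare how per-stage noise compounds in a deep sequential
# chain versus shallow, independent parallel blocks combined at the end.
import numpy as np

rng = np.random.default_rng(0)
DEPTH = 16          # stages in the sequential chain
BLOCKS = 16         # independent shallow blocks in the parallel design
NOISE = 0.02        # hypothetical per-stage multiplicative error

def sequential(x: float) -> float:
    """Each stage feeds the next, so per-stage errors accumulate."""
    for _ in range(DEPTH):
        x *= 1.0 + rng.normal(0.0, NOISE)
    return x

def parallel(x: float) -> float:
    """Independent blocks; each output passes through only one noisy stage."""
    outs = [x * (1.0 + rng.normal(0.0, NOISE)) for _ in range(BLOCKS)]
    return float(np.mean(outs))

trials = np.array([[sequential(1.0), parallel(1.0)] for _ in range(2000)])
print("sequential error std:", trials[:, 0].std())  # roughly NOISE * sqrt(DEPTH)
print("parallel   error std:", trials[:, 1].std())  # roughly NOISE / sqrt(BLOCKS)
```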

The strategy paid off.

Taichi has a total computational capacity of 4,256 artificial neurons, with nearly 14 million parameters mimicking the brain connections that encode learning and memory. When sorting images into 1,000 categories, the photonic chip was nearly 92 percent accurate, comparable to “currently popular electronic neural networks,” the team wrote.

The streaming audio giant’s suite of recommendation tools has grown over the years: Spotify Home feed, Discover Weekly, Blend, Daylist, and Made for You Mixes. And in recent years, there have been signs that it is working. According to data released by Spotify at its 2022 Investor Day, artist discoveries every month on Spotify had reached 22 billion, up from 10 billion in 2018, “and we’re nowhere near done,” the company stated at that time.

Over the past decade or more, Spotify has been investing in AI and, in particular, in machine learning. Its recently launched AI DJ may be its biggest bet yet that technology will allow subscribers to better personalize listening sessions and discover new music. The AI DJ mimics the vibe of radio by announcing the names of songs and lead-in to tracks, something aimed in part to help ease listeners into extending out of their comfort zones. An existing pain point for AI algorithms — which can be excellent at giving listeners what it knows they already like — is anticipating when you want to break out of that comfort zone.