How many times have you wished you could play back your dream on your computer or phone? With this new discovery, the technology might be closer than you think.

In a study published last week on the arXiv preprint server, researchers at the National University of Singapore and the Chinese University of Hong Kong reported that they have developed a process capable of generating video from brain scans.

Computer vision specialist Landing AI has a unique calling card: Its co-founder and CEO is a tech rock star.

At Google Brain, Andrew Ng became famous for showing how deep learning could recognize cats in a sea of images with uncanny speed and accuracy. Later, he co-founded Coursera, where his machine learning courses have attracted nearly 5 million students.

Today, Ng is best known for his views on data-centric AI — that improving AI performance now requires more focus on datasets and less on refining neural network models. It’s a philosophy coded into Landing AI’s flagship product, LandingLens.

Chipmaker Nvidia has unveiled a slew of artificial intelligence (AI) products in its bid to stay ahead of the game and join the trillion-dollar valuation club alongside the likes of Apple, Microsoft, and Amazon. The announcements come on the heels of a rally in Nvidia stock, which rose over 25 percent last week.

Once known for making chips for gaming geeks, Nvidia is now at the core of the AI frenzy that has gripped the world, because its graphics processing units (GPUs) are a critical component of today's AI tools. The company's A100 and H100 chips have become household names since tools like ChatGPT became popular last year. The company is also focusing on advertising and its core segment of gaming.

Discover how researchers have developed an innovative analog computer that utilizes water waves to predict chaotic events.

Have you ever wondered what the future holds? Can a computer learn from the past and predict the future? Most of us would think of advanced AI models when posed with this question, but what if we told you that it could happen in a completely different way?

Picture a tank of water instead of a traditional circuit-based processor. As surprising as it may sound, a group of researchers has built just that: a unique analog computer that uses water waves to forecast chaotic events.
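
Neither the excerpt above explains how the wave-based device actually computes, so the sketch below is only a loose software analogy, not the researchers' method: it forecasts a chaotic series (the logistic map) by fitting a simple least-squares readout on nonlinear features of recent values, standing in for whatever learned mapping the analog machine provides. The parameter choices (r, the history window, the quadratic features) are illustrative assumptions, not values from the paper.

import numpy as np

# Illustrative only: a software stand-in for "predicting chaos from data".
# Chaotic source data: the logistic map x_{n+1} = r * x_n * (1 - x_n)
r, n_steps = 3.9, 2000
x = np.empty(n_steps)
x[0] = 0.5
for n in range(n_steps - 1):
    x[n + 1] = r * x[n] * (1 - x[n])

# Nonlinear features built from a short history window (illustrative choice)
window = 3
X = np.column_stack([x[i:n_steps - window + i] for i in range(window)])
X = np.hstack([X, X ** 2, np.ones((len(X), 1))])  # quadratic terms + bias
y = x[window:]

# Fit a least-squares readout on the first half, then forecast one step ahead
split = len(X) // 2
w, *_ = np.linalg.lstsq(X[:split], y[:split], rcond=None)
pred = X[split:] @ w
print("one-step forecast RMSE:", np.sqrt(np.mean((pred - y[split:]) ** 2)))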


How should we respond to the idea that advances in AI pose catastrophic risks for the wellbeing of humanity?

Two sets of arguments have been circulating online for many years but, in light of recent events, are now each mutating into new forms and attracting much more attention from the public. The first set argues that AI risks are indeed serious. The second set is skeptical: it argues that the risks are exaggerated or can easily be managed, and that they are a distraction from more important issues and opportunities.

In this London Futurists webinar, recorded on the 27th of May 2023, we assessed the skeptical views. To guide us, we were joined by the two authors of a recently published article, “AI Risk Skepticism: A Comprehensive Survey”, namely Vemir Ambartsoumean and Roman Yampolskiy. We were also joined by Mariana Todorova, a member of the Millennium Project’s AGI scenarios study team.

The meeting was introduced and moderated by David Wood, Chair of London Futurists.

Last year, generative artificial intelligence (AI) took the world by storm as advancements filled news and social media. Investors swarmed the space as many recognized its potential across industries. According to IDC, global AI spending is already up 26.9% over 2022, and that figure is forecast to exceed $300 billion in 2026.

The boom has also shifted how people view AI. Previously, artificial intelligence was thought of as an academic, high-tech pursuit, and its most talked-about example was autonomous vehicles. But even with all the buzz, autonomous vehicles had yet to become a widely available and widely applied form of consumer-grade AI.

As companies race to develop more products powered by artificial intelligence, Microsoft president Brad Smith has issued a stark warning about deepfakes. Deepfakes use a form of AI to generate entirely new video or audio, with the end goal of portraying something that never actually occurred. But as AI quickly gets better at mimicking reality, big questions remain over how to regulate it. In short, Mr Smith said, “we must always ensure that AI remains under human control”.
