
Nvidia strengthens portfolio to offer more AI products and services

The company is also focusing on advertising and its core segment of gaming.

Chipmaker Nvidia has unveiled a slew of artificial intelligence (AI) products in its bid to stay ahead of the game and join the trillion-dollar valuation club alongside the likes of Apple, Microsoft, and Amazon. The announcement comes on the heels of a rally in Nvidia stock, which rose more than 25 percent last week.

Once known for making chips for gaming geeks, Nvidia is now at the core of the AI frenzy that has gripped the world, as its graphics processing units (GPUs) have become a critical component of AI tools. The company’s A100 and H100 chips became household names after tools like ChatGPT took off last year.

Unleashing the power of water: Researchers build analog computer to forecast chaotic futures

Discover how researchers have developed an innovative analog computer that utilizes water waves to predict chaotic events.

Have you ever wondered what the future holds? Can a computer learn from the past and predict the future? Most of us would think of advanced AI models when posed with this question, but what if we told you it could happen in a completely different way?

Picture a tank of water instead of a traditional circuitry processor. As surprising as it may sound, a group of researchers has built just that—a unique analog computer that utilizes water waves to forecast chaotic events.
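The article does not spell out the device’s inner workings, but wave-based machines of this kind are usually framed as physical reservoir computers: the medium’s own nonlinear dynamics do most of the computation, and only a simple linear readout is trained. The sketch below is a purely digital stand-in for that idea, not the researchers’ design; the logistic-map target, the random recurrent “reservoir,” and the ridge-regression readout are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Chaotic target series: the logistic map x_{t+1} = r * x_t * (1 - x_t)
r = 3.9
x = np.empty(2000)
x[0] = 0.5
for t in range(len(x) - 1):
    x[t + 1] = r * x[t] * (1 - x[t])

# Fixed random "reservoir" standing in for the tank's wave dynamics
n = 200
W_in = rng.uniform(-0.5, 0.5, size=n)
W = rng.normal(size=(n, n))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # keep the dynamics stable

states = np.zeros((len(x), n))
s = np.zeros(n)
for t in range(len(x) - 1):
    s = np.tanh(W @ s + W_in * x[t])               # state after seeing x[0..t]
    states[t + 1] = s

# Only a linear readout is trained: ridge regression from state to next value
washout, split = 100, 1500
S, y = states[washout:split], x[washout:split]
W_out = np.linalg.solve(S.T @ S + 1e-6 * np.eye(n), S.T @ y)

# One-step-ahead forecasts on held-out data
pred = states[split:] @ W_out
rmse = np.sqrt(np.mean((pred - x[split:]) ** 2))
print(f"held-out one-step forecast RMSE: {rmse:.4f}")
```

The point of the sketch is only that the “computer” itself is never trained; any sufficiently rich physical medium, whether a random matrix or a tank of water, can play the reservoir’s role.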



Assessing AI Risk Skepticism

How should we respond to the idea that advances in AI pose catastrophic risks for the wellbeing of humanity?

Two sets of arguments have been circulating online for many years, but in light of recent events, are now each mutating into new forms and are attracting much more attention from the public. The first set argues that AI risks are indeed serious. The second set is skeptical. It argues that the risks are exaggerated, or can easily be managed, and are a distraction from more important issues and opportunities.

In this London Futurists webinar, recorded on the 27th of May 2023, we assessed the skeptical views. To guide us, we were joined by the two authors of a recently published article, “AI Risk Skepticism: A Comprehensive Survey”, namely Vemir Ambartsoumean and Roman Yampolskiy. We were also joined by Mariana Todorova, a member of the Millennium Project’s AGI scenarios study team.

The meeting was introduced and moderated by David Wood, Chair of London Futurists.

For more details about the event and the panellists, see https://www.meetup.com/london-futurists/events/293488808/

The paper “AI Risk Skepticism: A Comprehensive Survey” can be found at https://arxiv.org/abs/2303.03885.

How generative AI can revolutionize customization and user empowerment


Last year, generative artificial intelligence (AI) took the world by storm as advances dominated news and social media. Investors swarmed the space as many recognized its potential across industries. According to IDC, global AI spending is already up 26.9% over 2022, and it is forecast to exceed $300 billion in 2026.

It has also caused a shift in how people view AI. Before, people thought of artificial intelligence as an academic, high-tech pursuit, and the most talked-about example was autonomous vehicles. But even with all the buzz, they had yet to become a widely available, widely applied form of consumer-grade AI.

Microsoft president says “must always ensure AI remains under human control”

As companies race to develop more products powered by artificial intelligence, Microsoft president Brad Smith has issued a stark warning about deep fakes. Deep fakes use a form of AI to generate completely new video or audio, with the end goal of portraying something that did not actually occur in reality. But as AI quickly gets better at mimicking reality, big questions remain over how to regulate it. In short, Mr Smith said, “we must always ensure that AI remains under human control”.


OpenAI Competitor Says Its Chatbot Has a Rudimentary Conscience

With AI chatbots’ propensity for making things up and spewing bigoted garbage, one firm founded by ex-OpenAI researchers has a different approach — teaching AI to have a conscience.

As Wired reports, the OpenAI competitor Anthropic’s intriguing chatbot Claude is built with what its makers call a “constitution,” or set of rules that draws from the Universal Declaration of Human Rights and elsewhere to ensure that the bot is not only powerful, but ethical as well.

Jared Kaplan, a former OpenAI research consultant who went on to found Anthropic with a group of his former coworkers, told Wired that Claude is, in essence, learning right from wrong because its training protocols are “basically reinforcing the behaviors that are more in accord with the constitution, and discourages behaviors that are problematic.”
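Anthropic has publicly described the constitutional approach as a critique-and-revise loop: the model drafts an answer, critiques the draft against written principles, rewrites it, and the revised answers then feed later fine-tuning. The sketch below only illustrates that control flow; `generate` is a hypothetical stand-in for a model call, and the principle text is paraphrased rather than quoted from Claude’s actual constitution.

```python
from typing import Callable, List

# Paraphrased, illustrative principles -- not Anthropic's actual rule text.
CONSTITUTION: List[str] = [
    "Choose the response that most respects human rights and dignity.",
    "Choose the response that is least likely to be harmful or deceptive.",
]

def constitutional_revision(prompt: str,
                            generate: Callable[[str], str],
                            principles: List[str] = CONSTITUTION) -> str:
    """Draft an answer, then critique and revise it against each principle.
    Revised answers like this can later be used as training data."""
    draft = generate(prompt)
    for principle in principles:
        critique = generate(
            "Critique the following response according to this principle:\n"
            f"Principle: {principle}\nResponse: {draft}"
        )
        draft = generate(
            "Rewrite the response to address the critique.\n"
            f"Critique: {critique}\nResponse: {draft}"
        )
    return draft

if __name__ == "__main__":
    # Trivial stub model so the sketch runs end to end without any external API.
    echo_model = lambda text: text.splitlines()[-1]
    print(constitutional_revision("How do I pick a lock?", echo_model))
```

In production the loop would wrap a real language model, but the key idea survives even in this toy form: the “conscience” lives in plain-text principles that steer how the model revises its own outputs, not in hand-written filters.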

Neuralink says it has the FDA’s OK to start clinical trials

In December 2022, founder Elon Musk gave an update on his other, other company, the brain implant startup Neuralink. As early as 2020, the company had been saying it was close to starting clinical trials of the implants, but the December update suggested those were still six months away. This time, it seems that the company was correct, as it now claims that the Food and Drug Administration (FDA) has given its approval for the start of human testing.

Neuralink is not ready to start recruiting test subjects, and there are no details about what the trials will entail. Searching the ClinicalTrials.gov database for “Neuralink” also turns up nothing. Typically, the initial trials are small and focused entirely on safety rather than effectiveness. Given that Neuralink is developing both brain implants and a surgical robot to do the implanting, there will be a lot that needs testing.

It’s likely that these will focus on the implants first, given that other implants have already been tested in humans, whereas an equivalent surgical robot has not.

Researchers develop interactive ‘Stargazer’ camera robot that can help film tutorial videos

A group of computer scientists from the University of Toronto wants to make it easier to film how-to videos.

The team of researchers has developed Stargazer, an interactive robot that helps university instructors and other content creators create engaging tutorial videos demonstrating physical skills.

For those without access to a cameraperson, Stargazer can capture dynamic instructional videos and address the constraints of working with static cameras.