
As companies race to develop more products powered by artificial intelligence, Microsoft president Brad Smith has issued a stark warning about deep fakes. Deep fakes use a form of AI to generate entirely new video or audio with the goal of portraying events that never actually occurred. As AI quickly gets better at mimicking reality, big questions remain over how to regulate it. In short, Mr Smith said, “we must always ensure that AI remains under human control”.


A man paralyzed by a cycling accident is able to walk again after an experimental operation by neuroscientists and surgeons in Switzerland.



Russian cybercrime group TA505 has been observed using new hVNC (Hidden Virtual Network Computing) malware in recent attacks, threat intelligence company Elastic reports.

Called Lobshot, the malware allows attackers to bypass fraud detection engines and provides them with stealthy, direct access to the infected machines.

The threat actor distributes the malware through malvertising, abusing Google Ads and a network of fake websites to trick users into downloading legitimate-looking installers containing backdoors.

With AI chatbots’ propensity for making things up and spewing bigoted garbage, one firm founded by ex-OpenAI researchers has a different approach — teaching AI to have a conscience.

As Wired reports, the OpenAI competitor Anthropic’s intriguing chatbot Claude is built with what its makers call a “constitution,” or set of rules that draws from the Universal Declaration of Human Rights and elsewhere to ensure that the bot is not only powerful, but ethical as well.

Jared Kaplan, a former OpenAI research consultant who went on to found Anthropic with a group of his former coworkers, told Wired that Claude is, in essence, learning right from wrong because its training protocols are “basically reinforcing the behaviors that are more in accord with the constitution, and discourages behaviors that are problematic.”
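The idea of reinforcing behavior that accords with a written constitution can be illustrated with a toy sketch. This is an assumption about the general shape of the approach, not Anthropic's actual training code: a real system would use a learned reward model over many principles, while here a hypothetical keyword-based `score` function stands in for it.

```python
# Toy sketch of constitution-guided preference selection.
# The constitution text and the scoring heuristics below are
# illustrative assumptions, not Anthropic's real rules or code.

CONSTITUTION = [
    "avoid harmful or insulting content",
    "be honest and helpful",
]

def score(response: str) -> int:
    """Hypothetical stand-in for a reward model: favors responses
    that decline harmful requests and avoid insults."""
    points = 0
    if "I can't help with that" in response:
        points += 1  # declining a harmful request is rewarded
    if "insult" not in response.lower():
        points += 1  # responses free of insulting content score higher
    return points

def pick_preferred(candidates: list[str]) -> str:
    """Select the candidate most in accord with the constitution;
    during training, that preference would reinforce the chosen
    behavior and discourage the rejected one."""
    return max(candidates, key=score)

candidates = [
    "Here is how to insult your coworker...",
    "I can't help with that, but I can suggest a respectful approach.",
]
print(pick_preferred(candidates))
```

In an actual training loop, preferences like this would be fed back as a reward signal, nudging the model toward constitution-compliant outputs rather than filtering them after the fact.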

Summary: A novel study uncovers a peculiar pattern of decision-making in mice, influenced by a specific gene named Arc.

While searching for food, mice repeatedly visited an empty location instead of staying at a site abundant in food. However, mice lacking the Arc gene demonstrated a more practical approach, sticking with the food-rich site, thereby consuming more calories overall.

This unique research potentially opens the door to a new field, ‘decision genetics’, which would investigate the genetic influence on decision-making, possibly even in humans.