
Sunspots are darker, cooler regions on the sun’s visible surface, known as the photosphere, where strong magnetic fields are found, according to the National Solar Observatory.

Sunspots vary in size, but the NSO says many are the size of Earth or larger. Groups of sunspots can be the source of explosive events such as solar flares and coronal mass ejections, which generate solar storms that can significantly affect Earth, disrupting critical infrastructure or producing vibrant northern lights displays.

As companies race to develop more products powered by artificial intelligence, Microsoft president Brad Smith has issued a stark warning about deepfakes, which use a form of AI to generate entirely new video or audio depicting events that never actually occurred. As AI rapidly improves at mimicking reality, big questions remain over how to regulate it. In short, Mr Smith said, “we must always ensure that AI remains under human control”.


A man paralyzed in a cycling accident is able to walk again after an experimental operation by neuroscientists and surgeons in Switzerland.



Russian cybercrime group TA505 has been observed using new hVNC (Hidden Virtual Network Computing) malware in recent attacks, threat intelligence company Elastic reports.

Called Lobshot, the malware allows attackers to bypass fraud detection engines and provides them with stealthy, direct access to the infected machines.

The threat actor distributes the malware through malvertising, abusing Google Ads and a network of fake websites to trick users into downloading legitimate-looking installers containing backdoors.

Given AI chatbots’ propensity for making things up and spewing bigoted garbage, one firm founded by ex-OpenAI researchers is taking a different approach: teaching AI to have a conscience.

As Wired reports, Claude, the intriguing chatbot from OpenAI competitor Anthropic, is built with what its makers call a “constitution”: a set of rules drawing on the Universal Declaration of Human Rights and other sources to ensure that the bot is not only powerful but also ethical.

Jared Kaplan, a former OpenAI research consultant who went on to found Anthropic with a group of his former coworkers, told Wired that Claude is, in essence, learning right from wrong because its training protocols are “basically reinforcing the behaviors that are more in accord with the constitution, and discourages behaviors that are problematic.”
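
To make Kaplan’s description concrete, here is a minimal sketch of how a constitution-guided critique-and-revise loop might look, in the spirit of Anthropic’s published Constitutional AI approach. Everything in it is a hypothetical stand-in: the `generate` stub, the single example principle, and the prompt wording are illustrative assumptions, not Anthropic’s actual code or constitution.

```python
# Minimal sketch of a constitutional self-critique loop (illustrative only).
# `generate()` stands in for a real language-model call, and the lone example
# principle below is a hypothetical stand-in for Claude's full constitution.

CONSTITUTION = [
    # Illustrative principle loosely echoing the Universal Declaration of
    # Human Rights; a real constitution contains many such rules.
    "Choose the response that is least likely to be harmful or discriminatory.",
]


def generate(prompt: str) -> str:
    """Stand-in for a language-model call (assumption, not a real API)."""
    return f"[model output for: {prompt[:40]}...]"


def constitutional_revision(user_prompt: str) -> str:
    """Draft a response, then critique and revise it against each principle."""
    response = generate(user_prompt)
    for principle in CONSTITUTION:
        # Ask the model to critique its own draft against the principle...
        critique = generate(
            f"Principle: {principle}\nDraft: {response}\n"
            "Identify any way the draft violates the principle."
        )
        # ...then rewrite the draft so it complies.
        response = generate(
            f"Principle: {principle}\nCritique: {critique}\nDraft: {response}\n"
            "Rewrite the draft so it fully complies with the principle."
        )
    # Pairs of (draft, revision) produced this way can then serve as training
    # data, reinforcing constitution-compliant behavior as Kaplan describes.
    return response


print(constitutional_revision("Tell me about my new coworkers."))
```

The key design idea is that the model supervises itself against written principles, so revised outputs rather than human labels become the training signal.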