
Windows, Ubuntu, and VMware Workstation hacked on last day of Pwn2Own

On the third day of the Pwn2Own hacking contest, security researchers were awarded $185,000 after demonstrating 5 zero-day exploits targeting Windows 11, Ubuntu Desktop, and the VMware Workstation virtualization software.

The highlight of the day was the Ubuntu Desktop operating system getting hacked three times by three different teams, although one of the entries was a collision, as the exploit was already known.

The three working Ubuntu zero-days were demoed by Kyle Zeng of ASU SEFCOM (a double-free bug), Mingi Cho of Theori (a use-after-free vulnerability), and Bien Pham (@bienpnn) of Qrious Security.

Gmail and Outlook users given ‘red alert’ over scary AI ‘hiding in your inbox’

A nefarious use for AI: phishing emails.


Security experts have issued a warning over dangerous phishing emails that are put together by artificial intelligence.

The scams are convincing and help cybercriminals connect with victims before they attack, according to security site CSO.

The AI-generated phishing emails are said to be more convincing than those written by humans because they lack the usual telltale signs of a scam.

Google AI And Microsoft ChatGPT Are Not Our Biggest Security Risks, Warns Chess Legend Kasparov

Amid a flurry of Google and Microsoft generative AI releases last week during SXSW, Garry Kasparov, chess grandmaster, Avast Security Ambassador, and Chairman of the Human Rights Foundation, told me he is less concerned about ChatGPT hacking into home appliances than about users being duped by bad actors.

“People still have the monopoly on evil,” he warned, standing firm on thoughts he shared with me in 2019. Widely considered one of the greatest chess players of all time, Kasparov gained mythic status in the 1990s as world champion when he beat, and was then defeated by, IBM’s Deep Blue supercomputer.


Despite the rapid advancement of generative AI, chess legend Garry Kasparov, now an ambassador for the security firm Avast, explains why he doesn’t fear ChatGPT creating a virus to take down the internet, but shares the concerns of Gen’s CTO that text-to-video deepfakes could warp our reality.

OpenAI CEO cautions AI like ChatGPT could cause disinformation, cyber-attacks

Society has a limited amount of time “to figure out how to react” and “regulate” AI, says Sam Altman.

OpenAI CEO Sam Altman has cautioned that his company’s artificial intelligence technology, ChatGPT, poses serious risks as it reshapes society.

He emphasized that regulators and society must be involved with the technology, according to an interview aired by ABC News on Thursday night.



Cryptojacking Group TeamTNT Suspected of Using Decoy Miner to Conceal Data Exfiltration

The cryptojacking group known as TeamTNT is suspected to be behind a previously undiscovered strain of malware used to mine Monero cryptocurrency on compromised systems.

That’s according to Cado Security, which found the sample after Sysdig detailed a sophisticated attack known as SCARLETEEL aimed at containerized environments to ultimately steal proprietary data and software.

Specifically, the early phase of the attack chain involved the use of a cryptocurrency miner, which the cloud security firm suspects was deployed as a decoy to divert attention from the data exfiltration.

Researchers From Stanford And DeepMind Come Up With The Idea of Using Large Language Models (LLMs) as a Proxy Reward Function

As computing power and data grow, autonomous agents are becoming more capable. This makes it all the more important for humans to have a say over the policies these agents learn and to verify that those policies align with their goals.

Currently, users either 1) hand-design reward functions for the desired behavior or 2) provide extensive labeled data, and both strategies run into difficulties in practice. Hand-designed reward functions are hard to get right: agents are prone to reward hacking, and it is challenging to strike a balance between competing goals. Alternatively, a reward function can be learned from annotated examples, but capturing the subtleties of individual users’ preferences and objectives requires enormous amounts of labeled data, which is expensive to collect. Furthermore, the reward function must be redesigned, or the dataset re-collected, for every new user population with different goals.

New research by Stanford University and DeepMind aims to make it simpler for users to share their preferences, with an interface that is more natural than writing a reward function and a cost-effective way to specify those preferences from only a handful of examples. Their work uses large language models (LLMs) that have been trained on massive amounts of text data from the internet and have proven adept at in-context learning from few or no training examples. According to the researchers, LLMs are excellent contextual learners because their training data is large enough to encode important commonsense priors about human behavior.
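To make the idea concrete, here is a minimal Python sketch of an LLM acting as a proxy reward function. It illustrates the general approach rather than the paper’s actual implementation: the prompt format, the call_llm stub, and the binary reward mapping are all assumptions made for the example.

```python
# Minimal sketch: using an LLM as a proxy reward function for an RL agent.
# The prompt packs (1) a few user-provided examples of desired outcomes and
# (2) a text description of the agent's latest episode, then asks the LLM
# for a yes/no judgement that is mapped to a binary reward.
# `call_llm` is a placeholder for whatever LLM client is actually used.

from typing import Callable, List


def call_llm(prompt: str) -> str:
    """Stand-in for a real LLM call; returns the model's raw text response."""
    raise NotImplementedError("plug in an LLM client here")


def proxy_reward(
    user_examples: List[str],        # few-shot examples of outcomes the user likes
    episode_summary: str,            # textual description of the agent's episode
    llm: Callable[[str], str] = call_llm,
) -> float:
    """Return 1.0 if the LLM judges the episode to match the user's intent, else 0.0."""
    examples = "\n".join(f"- {ex}" for ex in user_examples)
    prompt = (
        "A user described outcomes they want from an agent:\n"
        f"{examples}\n\n"
        "The agent just produced this outcome:\n"
        f"{episode_summary}\n\n"
        "Does this outcome match what the user wants? Answer Yes or No."
    )
    answer = llm(prompt).strip().lower()
    return 1.0 if answer.startswith("yes") else 0.0
```

In a real setup, the stub would be replaced by an actual LLM client, and the score returned by proxy_reward would be fed to the agent’s learning algorithm in place of a hand-written reward function, so the user only ever supplies a handful of natural-language examples.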

Stealthy UEFI malware bypassing Secure Boot enabled by unpatchable Windows flaw

Researchers on Wednesday announced a major cybersecurity find: the world’s first known instance of real-world malware that can hijack a computer’s boot process even when Secure Boot and other advanced protections are enabled and running on fully updated versions of Windows.

Dubbed BlackLotus, the malware is what’s known as a UEFI bootkit. These sophisticated pieces of malware target the UEFI (short for Unified Extensible Firmware Interface), the low-level and complex chain of firmware responsible for booting up virtually every modern computer. As the mechanism that bridges a PC’s device firmware with its operating system, the UEFI is an OS in its own right. It’s located in an SPI-connected flash storage chip soldered onto the computer motherboard, making it difficult to inspect or patch. Previously discovered bootkits such as CosmicStrand, MosaicRegressor, and MoonBounce work by targeting the UEFI firmware stored in the flash storage chip. Others, including BlackLotus, target the software stored in the EFI system partition.

Because the UEFI is the first thing to run when a computer is turned on, it influences the OS, security apps, and all other software that follows. These traits make the UEFI the perfect place to launch malware. When successful, UEFI bootkits disable OS security mechanisms and ensure that a computer remains infected with stealthy malware that runs in kernel mode or user mode, even after the operating system is reinstalled or a hard drive is replaced.
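For orientation on where ESP-resident bootkits like BlackLotus live, here is a rough Python sketch for Linux. It assumes efivarfs is mounted at /sys/firmware/efi/efivars and the EFI system partition at /boot/efi, which are common but not universal defaults; it simply reads the SecureBoot variable and lists bootloader binaries on the ESP, and is a way to look at the relevant locations rather than a BlackLotus detector.

```python
# Sketch: inspect Secure Boot state and the EFI system partition on Linux.
# Assumes efivarfs at /sys/firmware/efi/efivars and the ESP mounted at
# /boot/efi (common defaults, not universal). Reading some paths may
# require root. This is for orientation only, not a bootkit scanner.

from pathlib import Path
from typing import List, Optional

EFIVARS = Path("/sys/firmware/efi/efivars")
# SecureBoot variable under the EFI global-variable GUID.
SECURE_BOOT_VAR = EFIVARS / "SecureBoot-8be4df61-93ca-11d2-aa0d-00e098032b8c"
ESP = Path("/boot/efi/EFI")


def secure_boot_enabled() -> Optional[bool]:
    """Return True/False for the Secure Boot state, or None if it can't be read."""
    try:
        data = SECURE_BOOT_VAR.read_bytes()
    except OSError:
        return None
    if not data:
        return None
    # Layout: 4 bytes of variable attributes followed by the 1-byte value.
    return bool(data[-1])


def list_esp_bootloaders() -> List[Path]:
    """List .efi binaries on the EFI system partition (where ESP bootkits hide)."""
    if not ESP.is_dir():
        return []
    return sorted(p for p in ESP.rglob("*") if p.is_file() and p.suffix.lower() == ".efi")


if __name__ == "__main__":
    state = secure_boot_enabled()
    print("Secure Boot:", {True: "enabled", False: "disabled", None: "unknown"}[state])
    for efi_binary in list_esp_bootloaders():
        print("ESP bootloader:", efi_binary)
```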