
According to Microsoft, 1,287 password attacks occur every second around the world.

Microsoft is now focusing on cybersecurity as part of its ongoing efforts to incorporate generative artificial intelligence into the majority of its products. The company previously announced an AI-powered assistant for Office apps.

To enhance cybersecurity, Microsoft Corp. has announced that it is bringing the next generation of AI to its security products.



Hackers modified an enterprise communication company’s installation software in an attack that could steal credentials and other information from companies around the world, according to an analysis published Wednesday.

Researchers with cybersecurity firm SentinelOne’s SentinelLabs team traced illicit activity flagged by its detection systems back to the installation software from a company called 3CX, which according to its website provides video conferencing and online communication products to companies such as Toyota, McDonald’s, Pepsi and Chevron. In total, the company says it serves some 12 million customers globally.

This sort of large-scale attack that takes advantage of a company’s supply chain — similar to how attackers leveraged a flaw within a SolarWinds product update to install backdoors inside its customers’ networks — can be difficult to defend against and could lead to devastating consequences for victims. It’s also the kind of operation that is typically associated with a nation-state hacking group.

Cybersecurity researchers have discovered a fundamental security flaw in the design of the IEEE 802.11 WiFi protocol standard, allowing attackers to trick access points into leaking network frames in plaintext form.

WiFi frames are data containers consisting of a header, data payload, and trailer, which include information such as the source and destination MAC address, control, and management data.

These frames are ordered in queues and transmitted in a controlled manner to avoid collisions and to maximize data exchange performance by monitoring the busy/idle states of the receiving points.
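As a rough illustration of the header/payload/trailer layout described above, a frame can be modeled as packed bytes. This is not the actual 802.11 wire format — the field sizes and the checksum here are simplified for demonstration:

```python
import struct

def build_frame(src_mac: bytes, dst_mac: bytes, payload: bytes) -> bytes:
    """Pack a simplified frame: header (control field, dst MAC, src MAC),
    then the data payload, then a trailer. Field layout is illustrative only."""
    frame_control = 0x0801  # illustrative 2-byte control/management field
    header = struct.pack("!H6s6s", frame_control, dst_mac, src_mac)
    trailer = struct.pack("!I", sum(payload) & 0xFFFFFFFF)  # toy checksum, not a real FCS
    return header + payload + trailer

frame = build_frame(b"\xaa\xbb\xcc\xdd\xee\xff", b"\x11\x22\x33\x44\x55\x66", b"hello")
# header is 14 bytes (2 + 6 + 6), payload 5, trailer 4
```

The point of the sketch is only the structure: metadata such as source and destination MAC addresses lives in the header, while the user data sits in the payload between header and trailer.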

One more. I hope it’s not posted yet. Even AI isn’t safe.


ChatGPT creator OpenAI has confirmed a data breach caused by a bug in an open source library, just as a cybersecurity firm noticed that a recently introduced component is affected by an actively exploited vulnerability.

OpenAI said on Friday that it had taken the chatbot offline earlier in the week while it worked with the maintainers of the Redis data platform to patch a flaw that resulted in the exposure of user information.

The issue was related to ChatGPT’s use of Redis-py, an open source Redis client library, and it was introduced by a change made by OpenAI on March 20.
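The exact redis-py flaw is not detailed here, but the general failure mode in this class of bug — a pooled client connection whose request/response pairing desynchronizes, so one user receives a reply meant for another — can be sketched with a toy model (all names below are hypothetical, not redis-py APIs):

```python
from collections import deque

class SharedConnection:
    """Toy model of a pooled client connection. Replies arrive in the order
    commands were sent; if a caller abandons a request before reading its
    reply, the next caller on the same connection reads the stale reply."""

    def __init__(self) -> None:
        self._responses: deque[str] = deque()

    def send(self, command: str) -> None:
        # Simulate the server queuing a reply for each command sent.
        self._responses.append(f"reply-to:{command}")

    def read_reply(self) -> str:
        return self._responses.popleft()

conn = SharedConnection()
conn.send("GET user_a_history")   # user A's request goes out...
# ...but user A's task is cancelled before it reads the reply.
conn.send("GET user_b_history")   # user B reuses the same connection
stale = conn.read_reply()         # user B gets user A's reply
```

Under this model, `stale` is the reply to user A's command — which is why a client-library bookkeeping bug can surface as other users' data appearing in your session.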

The Near-Ultrasound Invisible Trojan, or NUIT, was developed by a team of researchers from the University of Texas at San Antonio and the University of Colorado Colorado Springs as a technique for covertly delivering malicious commands to voice assistants on smartphones and smart speakers.

If you watch videos on YouTube on your smart TV, then that television must have a speaker, right? According to Guinevere Chen, associate professor and co-author of the NUIT paper, “the sound of NUIT malicious commands will [be] inaudible, and it may attack your mobile phone as well as your Google Assistant or Alexa devices. That may also happen in Zoom during meetings. If someone unmutes themselves during the meeting, they could embed the attack signal and hack your phone, which was placed next to your computer.”

The attack works by playing sounds close to, but not exactly at, ultrasonic frequencies, so they can still be reproduced by off-the-shelf hardware — either the speaker built into the target device or one nearby. If the first malicious command silences the device’s responses, subsequent actions, such as opening a door or disabling an alarm system, can then be triggered without warning.
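The researchers’ actual signal processing is more involved, but the reason “close to but not exactly ultrasonic” matters can be shown with a one-line synthesis: a 19 kHz tone sits above most adults’ hearing yet below the 22.05 kHz Nyquist limit of 44.1 kHz consumer audio, so ordinary speakers can reproduce it. A minimal sketch (frequency and rate are illustrative assumptions):

```python
import math

SAMPLE_RATE = 44_100  # standard consumer-audio sampling rate
FREQ_HZ = 19_000      # near-ultrasound: hard to hear, but below Nyquist (22.05 kHz)

def tone(duration_s: float, freq_hz: float = FREQ_HZ) -> list[float]:
    """Synthesize a sine tone that off-the-shelf audio hardware can play."""
    n = int(SAMPLE_RATE * duration_s)
    return [math.sin(2 * math.pi * freq_hz * t / SAMPLE_RATE)
            for t in range(n)]

samples = tone(0.1)  # 0.1 s of near-ultrasound audio
```

A truly ultrasonic frequency (say 25 kHz) would exceed the Nyquist limit of commodity 44.1 kHz playback, which is why the attack deliberately stays just below it.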

On the third day of the Pwn2Own hacking contest, security researchers were awarded $185,000 after demonstrating five zero-day exploits targeting Windows 11, Ubuntu Desktop, and the VMware Workstation virtualization software.

The highlight of the day was the Ubuntu Desktop operating system getting hacked three times by three different teams, although one entry was a collision, with the exploit having been previously known.

The three working Ubuntu zero-days were demonstrated by Kyle Zeng of ASU SEFCOM (a double-free bug), Mingi Cho of Theori (a use-after-free vulnerability), and Bien Pham (@bienpnn) of Qrious Security.

A nefarious use for AI. Phishing emails.


SECURITY experts have issued a warning over dangerous phishing emails that are put together by artificial intelligence.

The scams are convincing and help cybercriminals connect with victims before they attack, according to security site CSO.

The AI phishing emails are said to be more convincing than the human versions because they don’t contain some usual telltale scam signs.
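The “telltale scam signs” mentioned above are the crude tics — generic greetings, urgency phrases, spelling slips — that simple filters and wary readers key on. A toy heuristic makes the point: a clumsy human-written scam trips several checks, while a fluent AI-written email trips none. The sign list below is hypothetical, and real detectors are far more sophisticated:

```python
# Hypothetical examples of classic telltale phishing signs; illustrative only.
TELLTALE_SIGNS = (
    "dear customer",          # generic greeting
    "verify you account",     # grammar slip
    "click here immediatly",  # urgency plus misspelling
)

def count_telltale_signs(email_body: str) -> int:
    """Count how many classic phishing tics appear in an email body."""
    body = email_body.lower()
    return sum(sign in body for sign in TELLTALE_SIGNS)

clumsy = "Dear customer, click here immediatly to verify you account."
polished = "Hi Dana, following up on the invoice we discussed on Tuesday."
# The clumsy scam trips every check; the fluent draft trips none,
# which is exactly what makes AI-polished phishing harder to spot.
```

This is why the articles’ warning holds: once generative AI writes the lure, surface-level tells disappear and detection has to rely on context, sender verification, and intent rather than grammar.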

Amid a flurry of Google and Microsoft generative AI releases last week during SXSW, Garry Kasparov, who is a chess grandmaster, Avast Security Ambassador and Chairman of the Human Rights Foundation, told me he is less concerned about ChatGPT hacking into home appliances than he is about users being duped by bad actors.

“People still have the monopoly on evil,” he warned, standing firm on thoughts he shared with me in 2019. Widely considered one of the greatest chess players of all time, Kasparov gained mythic status in the 1990s as world champion when he beat, and was then defeated by, IBM’s Deep Blue supercomputer.


Despite the rapid advancement of generative AI, chess legend Garry Kasparov, now an ambassador for the security firm Avast, explains why he doesn’t fear ChatGPT creating a virus to take down the Internet, but shares the concerns of Gen’s CTO that text-to-video deepfakes could warp our reality.