Toggle light / dark theme

As part of its announcement at the Aspen Cyber Summit in New York City today, Google also said that in 2024 it will give 100,000 of the new Titan keys to high-risk individuals around the world. The effort is part of Google’s Advanced Protection Program, which offers vulnerable users expanded account monitoring and threat protection. The company has given away Titan keys through the program in the past, and today it cited the rise of phishing attacks and upcoming global elections as two examples of the need to continue expanding the use of secure authentication methods like passkeys.

Hardware authentication tokens have unique protective benefits because they are siloed, stand-alone devices. But they still need to be rigorously secured to ensure they don’t introduce a different point of weakness. And as with any product, they can have vulnerabilities. In 2019, for example, Google recalled and replaced its Titan BLE-branded security key because of a flaw in its Bluetooth implementation.

When it comes to the new Titan generation, Google tells WIRED that, as with all of its products, it conducted an extensive internal security review on the devices and it also contracted with two external auditors, NCC Group and Ninja Labs, to conduct independent assessments of the new key.

The addition of an additional step in a long-established workflow can help reduce substantial costs show cybersecurity researchers.


Sakkmesterke/iStock.

The increasing use of cloud storage has increased the risks to data security, and cybersecurity researchers have been looking at distributed cloud storage as a plausible solution to this problem.

Only 2% of Alzheimer’s is 100% genetic. The rest is up to your daily habits.

Up Next ► 4 ways to hack your memory https://youtu.be/SCsztDMGP7o.

People want a perfect memory. They wish that they can remember everything that they want to remember. But it doesn’t work like that.

Most people over the age of 50 think that forgetting someone’s name or forgetting why they went into the kitchen is a sign of Alzheimer’s. It isn’t. Most of our forgetfulness is perfectly normal.

If you are worried about developing Alzheimer’s or another form of dementia, some simple lifestyle modifications can help prevent it: getting enough sleep, exercising, eating a balanced diet, and managing stress.

Read the video transcript ► https://bigthink.com/videos/cognitive-decline/

The North Korean-backed BlueNorOff threat group targets Apple customers with new macOS malware tracked as ObjCShellz that can open remote shells on compromised devices.

BlueNorOff is a financially motivated hacking group known for attacking cryptocurrency exchanges and financial organizations such as venture capital firms and banks worldwide.

The malicious payload observed by Jamf malware analysts (labeled ProcessRequest) communicates with the swissborg[.]blog, an attacker-controlled domain registered on May 31 and hosted at 104.168.214[.]151 (an IP address part of BlueNorOff infrastructure).

OpenAI has confirmed that a distributed denial-of-service (DDoS) attack is behind “periodic outages” affecting ChatGPT and its developer tools.

ChatGPT, OpenAI’s AI-powered chatbot, has been experiencing sporadic outages for the past 24 hours. Users who attempted to access the service have been greeted with a message stating that “ChatGPT is at capacity right now,” and others, including TechCrunch, have been unable to log into the service.

OpenAI CEO Sam Altman initially blamed the issue on interest in the platform’s new features, unveiled at the company’s first developer conference on Monday, “far outpacing our expectations.” OpenAI said the issue was fixed at approximately 1 p.m. PST on November 8.

The development arrives days after Elastic Security Labs disclosed the Lazarus Group’s use of a new macOS malware called KANDYKORN to target blockchain engineers.

Also linked to the threat actor is a macOS malware referred to as RustBucket, an AppleScript-based backdoor that’s designed to retrieve a second-stage payload from an attacker-controlled server.

In these attacks, prospective targets are lured under the pretext of offering them investment advice or a job, only to kick-start the infection chain by means of a decoy document.

Another good use for AI. Fighting disinformation.


About 60% of adults in the US who get their news through social media have, largely unknowingly, shared false information, according to a poll by the Pew Research Center. The ease at which disinformation is spread and the severity of consequences it brings — from election hacking to character assassination — make it an issue of grave concern for us all.

One of the best ways to combat the spread of fake news on the internet is to understand where the false information was started and how it was disseminated. And that’s exactly what Camille Francois, the chief innovation officer at Graphika, is doing. She’s dedicated to bringing to light disinformation campaigns before they take hold.

Francois and her team are employing machine learning to map out online communities and better understand how information flows through networks. It’s a bold and necessary crusade as troll farms, deep fakes, and false information bombard the typical internet user every single day.

Francois says, this work is two parts technology, one part sociology. The techniques are always evolving, and we have to stay one step ahead.” We sit down with Francois for an in-depth discussion on how the tech works and what it means for the dissemination of information across the internet.

‘Prompt injection’ attacks haven’t caused giant problems yet. But it’s a matter of time, researchers say.

Imagine a chatbot is applying for a job as your personal assistant. The pros: This chatbot is powered by a cutting-edge large language model. It can write your emails, search your files, summarize websites and converse with you.

The con: It will take orders from absolutely anyone.

AI chatbots are good at many things, but they struggle to tell the difference between legitimate commands from their users and manipulative commands from outsiders. It’s an AI Achilles’ heel, cybersecurity researchers say, and it’s a matter of time before attackers take advantage of it.


“Prompt injection” is a major risk to large language models and the chatbots they power. Here’s how the attack works, examples and potential fallout.