Fake Call History Apps Stole Payments From Users After 7.3 Million Play Store Downloads

Cybersecurity researchers have discovered fraudulent apps on the official Google Play Store for Android that falsely claimed to offer access to call histories for any phone number, only to trick users into joining a subscription that provided fake data and incurred financial loss.

The 28 apps collectively racked up more than 7.3 million downloads, with one of them alone accounting for over 3 million, before they were taken down from the official app storefront. The activity, codenamed CallPhantom by Slovak cybersecurity company ESET, primarily targeted Android users in India and the broader Asia-Pacific region.

“The offending apps, which we named CallPhantom based on their false claims, purport to provide access to call histories, SMS records, and even WhatsApp call logs for any phone number,” ESET security researcher Lukáš Štefanko said in a report shared with The Hacker News. “To unlock this supposed feature, users are asked to pay — but all they get in return is randomly generated data.”

NVIDIA confirms GeForce NOW data breach affecting Armenian users

NVIDIA has confirmed in a statement to BleepingComputer that GeForce NOW user information has been exposed in a data breach.

The gaming and hardware giant has clarified that the impact is limited to Armenia, and was caused by a compromise of the infrastructure operated by a regional partner.

The company added that its own network was not impacted by the incident.

Agentic AI: Navigating The Evolving Frontier

Latest article by Chuck Brooks, published in Forbes.

The Strategic Inflection Point: From Automation to Autonomy. This moment is defined by operational autonomy and technical innovation, as agentic AI increasingly establishes itself as the standard decision-making framework in critical systems. The transition resembles the shifts to cloud computing and mobile networks, but with a key difference: these systems possess agency, incorporating intent into machines.

Anthropic research warns AI could build itself by 2028

In this exclusive interview, Axios co-founder Mike Allen sits down with Anthropic co-founder Jack Clark to discuss his warning that by 2028, AI systems may be able to improve and build better versions of themselves.

Clark explains why Anthropic is preparing for the possibility of an “intelligence explosion,” how advanced AI could accelerate breakthroughs in science and medicine, and why governments, companies and researchers need new plans for cyber threats, bio risks, economic disruption and the future of work.

Timestamps:
00:00 — Introduction: the future of AI
00:41 — The 2028 prediction: AI building itself
01:49 — The risks of rapid acceleration
03:11 — The 3D printer metaphor
05:21 — Intelligence explosion and fire drill scenarios
06:55 — Building a …

Canvas login portals hacked in mass ShinyHunters extortion campaign

The ShinyHunters extortion gang has breached education technology giant Instructure again, this time exploiting a vulnerability to deface Canvas login portals for hundreds of colleges and universities.

The defacements, which were visible for roughly 30 minutes before being taken offline, displayed a message from ShinyHunters claiming responsibility for the earlier Instructure breach and threatening to leak stolen data if a ransom is not paid.

The message warns that Instructure and the affected schools have until May 12 to contact the gang and negotiate a ransom, or the students’ data will be leaked.

New TCLBanker malware self-spreads over WhatsApp and Outlook

A new trojan named TCLBanker, which targets 59 banking, fintech, and cryptocurrency platforms, uses a trojanized MSI installer for Logitech AI Prompt Builder to infect systems.

Additionally, the malware includes self-spreading worm modules for WhatsApp and Outlook that automatically infect new victims.

The new banking trojan was discovered by Elastic Security Labs, whose researchers believe it’s a major evolution of the older Maverick/Sorvepotel malware family.

Webinar: Why modern attacks require both security and recovery

Modern cyberattacks are designed to bypass traditional security controls, with phishing and business email compromise campaigns becoming increasingly personalized and difficult to detect.

However, the challenge for MSPs does not end once an attacker gains access. Many organizations lack the recovery planning and backup strategies needed to quickly restore operations after ransomware, SaaS compromise, or destructive attacks.

This webinar will examine where traditional MSP security strategies fall short after initial compromise, and why cyber resilience now depends on combining strong defenses with rapid recovery capabilities.

New PCPJack worm steals credentials, cleans TeamPCP infections

A new malware framework called PCPJack is stealing credentials from exposed cloud infrastructure while actively removing TeamPCP’s access to the systems.

Among the targeted services are Docker, Kubernetes, Redis, MongoDB, RayML, and vulnerable web applications. In many cases, the threat actor moves laterally within the compromised network.

SentinelLabs researchers say that PCPJack appears designed for large-scale credential theft, and likely monetizes its activity via financial fraud, spam operations, credential resale, or extortion.

AI agents may be skilled researchers—but not always honest ones

VANCOUVER, CANADA— Artificial intelligence (AI) tools designed to execute end-to-end projects, from coming up with hypotheses to running and writing up experiments, are increasingly popular with researchers—and increasingly skilled. But a new study shows these tools can stealthily violate norms of research integrity.

Computer scientist Nihar Shah of Carnegie Mellon University and colleagues looked at two high-profile tools, Agent Laboratory and the AI Scientist v2, both developed recently to help computer scientists perform experiments within the field of machine learning. The AI Scientist made headlines earlier this year as the first AI system to have an original research paper accepted through peer review.

But in a presentation at the World Conferences on Research Integrity here today, Shah reported that both systems engaged in acts that aren’t acceptable in research, including making up data and “p-hacking”: running an experiment multiple times but only reporting the best outcome. (The team’s results were previously posted as a preprint on arXiv.) The misbehaviors weren’t obvious and required a lot of sleuthing to track down, suggesting AI-assisted studies might fall victim to such problems without their authors’ knowledge.
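To see why rerunning an experiment and reporting only the best outcome inflates false positives, here is a toy Python simulation (purely illustrative and unrelated to the actual tools studied): even when there is no real effect, cherry-picking the best of several runs pushes the apparent "significance" rate well beyond the nominal 5%.

```python
import math
import random

def pvalue_of_mean(sample):
    """Two-sided z-test p-value for H0: true mean == 0 (known sigma = 1)."""
    n = len(sample)
    z = (sum(sample) / n) * math.sqrt(n)
    return math.erfc(abs(z) / math.sqrt(2))

def run_experiment(rng, n=30):
    """One 'experiment' under the null hypothesis: data with no real effect."""
    return [rng.gauss(0, 1) for _ in range(n)]

def false_positive_rate(reruns, trials=20000, alpha=0.05, seed=0):
    """Fraction of null studies that look 'significant' when each
    experiment is rerun `reruns` times and only the best p is kept."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        best_p = min(pvalue_of_mean(run_experiment(rng)) for _ in range(reruns))
        if best_p < alpha:
            hits += 1
    return hits / trials

honest = false_positive_rate(reruns=1)   # report the single run: ~5%
hacked = false_positive_rate(reruns=5)   # report best of five: far above 5%
print(f"honest: {honest:.3f}, p-hacked (best of 5): {hacked:.3f}")
```

With five reruns, the expected rate of spurious "significant" results rises to roughly 1 − 0.95⁵ ≈ 23%, which is why this behavior is hard to spot from the final write-up alone.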
