A malware campaign via SourceForge and fake AI sites deploys miner, clipper, and RAT malware, impacting 4,604 users in Russia.

A standard digital camera used in a car for stuff like emergency braking has a perceptual latency of a hair above 20 milliseconds. That’s just the time needed for a camera to transform the photons hitting its sensor into electrical charges using either CMOS or CCD technology. It doesn’t count the further milliseconds needed to send that information to an onboard computer or process it there.
A team of MIT researchers figured that if you had a chip that could process photons directly, you could skip the entire digitization step and perform calculations with the photons themselves, which has the potential to be mind-bogglingly faster.
“We’re focused on a very specific metric here, which is latency. We aim for applications where what matters the most is how fast you can produce a solution. That’s why we are interested in systems where we’re able to do all the computations optically,” says Saumil Bandyopadhyay, an MIT researcher. The team implemented a complete deep neural network on a photonic chip, achieving a latency of 410 picoseconds. To put that in perspective, a single tick of a 4 GHz CPU clock lasts just 250 picoseconds, so Bandyopadhyay’s chip ran the entire neural net it had onboard in well under two ticks of a standard CPU’s clock.
Instead of sensing photons and processing the results, why not process the photons?
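The latency figures above invite a quick back-of-the-envelope check. A minimal sketch, assuming only the numbers reported in the article (a ~20 ms camera sensor latency, a 410 ps photonic forward pass, and a 4 GHz CPU clock):

```python
# Back-of-the-envelope latency comparison using the article's figures.
CAMERA_LATENCY_S = 20e-3    # ~20 ms for the sensor to turn photons into charges
PHOTONIC_PASS_S = 410e-12   # 410 ps for the photonic chip's full forward pass
CPU_TICK_S = 1 / 4e9        # one tick of a 4 GHz clock = 250 ps

# How many full photonic forward passes fit inside the camera's sensor latency?
passes_per_camera_frame = CAMERA_LATENCY_S / PHOTONIC_PASS_S

# How many CPU clock ticks elapse during one photonic forward pass?
ticks_per_pass = PHOTONIC_PASS_S / CPU_TICK_S

print(f"{passes_per_camera_frame:,.0f} photonic passes per 20 ms")  # ~48.8 million
print(f"{ticks_per_pass:.2f} CPU ticks per photonic pass")          # ~1.64 ticks
```

The striking ratio is against the camera, not the CPU: tens of millions of full network evaluations fit into the time a conventional sensor spends merely digitizing one frame.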
Westwood Robotics unveils THEMIS V2, a highly agile humanoid robot with enhanced arms and hands for advanced movement, control, and precision.
On August 21, 2023, my 70th birthday, I, Rev. Ivan Stang, used RunwayML and Wombo Dream on my phone to make this A.I. video for the classic song.
That’s because the candidate, whom the firm has since dubbed “Ivan X,” was a scammer using deepfake software and other generative AI tools in a bid to get hired by the tech company, said Pindrop CEO and co-founder Vijay Balasubramaniyan.
“Gen AI has blurred the line between what it is to be human and what it means to be machine,” Balasubramaniyan said. “What we’re seeing is that individuals are using these fake identities and fake faces and fake voices to secure employment, even sometimes going so far as doing a face swap with another individual who shows up for the job.”
Companies have long fought off attacks from hackers hoping to exploit vulnerabilities in their software, employees or vendors. Now, another threat has emerged: Job candidates who aren’t who they say they are, wielding AI tools to fabricate photo IDs, generate employment histories and provide answers during interviews.
In this second episode of the (A)bsolutely (I)ncredible Podcast, I sit down with Dennis Wilson, Founder of DBC Technologies.
Dennis is deeply involved in my friend Jim Roddy’s Retail Solution Providers Association (RSPA) and is a regular speaker at RSPA events.
Dennis shares the benefits of AI with these providers. He is a passionate marketer who has created a platform that utilizes the best AI capabilities the industry marketplace has to offer.
DBC stands for Doing Business Creatively — utilizing 25+ years of CRM, software, marketing, and sales automation experience.
DBC has been deeply involved in integrating AI into their software and clients’ businesses since the launch of OpenAI’s ChatGPT.
If you’re interested in AI Voice solutions for your business, let me know and I’ll be glad to connect you with Dennis and his team of experts.
Researchers at Cornell Tech have released a dataset extracted from more than 300,000 public Reddit communities, and a report detailing how Reddit communities are changing their policies to address a surge in AI-generated content.
The team collected metadata and community rules from the online communities, known as subreddits, during two periods in July 2023 and November 2024. The researchers will present a paper with their findings at the Association for Computing Machinery’s CHI conference on Human Factors in Computing Systems, being held April 26 to May 1 in Yokohama, Japan.
One of the researchers’ most striking discoveries is the rapid increase in subreddits with rules governing AI use. According to the research, the number of subreddits with AI rules more than doubled in 16 months, from July 2023 to November 2024.
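The core measurement here — whether a subreddit’s written rules govern AI use — can be approximated with a simple keyword scan over rule texts. A hypothetical sketch (the keyword list and `has_ai_rule` function are illustrative, not the Cornell team’s actual coding scheme, which is more involved):

```python
import re

# Illustrative keywords only; the paper's rule-classification method
# is more sophisticated than a keyword match.
AI_KEYWORDS = [
    r"\bAI\b", r"\bA\.I\.\b", r"artificial intelligence",
    r"\bGPT\b", r"ChatGPT", r"machine[- ]generated", r"\bLLM\b",
]
AI_PATTERN = re.compile("|".join(AI_KEYWORDS), re.IGNORECASE)

def has_ai_rule(rules: list[str]) -> bool:
    """Return True if any rule text in a subreddit mentions AI."""
    return any(AI_PATTERN.search(rule) for rule in rules)

# Usage: rule texts as the Reddit API exposes them for one subreddit.
rules = [
    "Be civil and stay on topic.",
    "No AI-generated art or ChatGPT answers.",
]
print(has_ai_rule(rules))  # True
```

Counting subreddits where this returns True at the two snapshot dates (July 2023 and November 2024) is the shape of the comparison behind the “more than doubled” finding.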
UTSA researchers recently completed one of the most comprehensive studies to date on the risks of using AI models to develop software. In a new paper, they demonstrate how a specific type of error could pose a serious threat to programmers who use AI to help write code.
Joe Spracklen, a UTSA doctoral student in computer science, led the study on how large language models (LLMs) frequently generate insecure code.
His team’s paper, published on the arXiv preprint server, has also been accepted for publication at the USENIX Security Symposium 2025, a cybersecurity and privacy conference.
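To make the risk concrete, here is one classic insecure pattern that code-generating LLMs are known to emit — SQL built by string interpolation — alongside the safe parameterized form. This is an illustrative sketch, not necessarily the specific error class the UTSA paper analyzes:

```python
import sqlite3

def find_user_unsafe(conn, username):
    # VULNERABLE: attacker-controlled input is spliced into the query,
    # so a username like "x' OR '1'='1" returns every row.
    return conn.execute(
        f"SELECT id FROM users WHERE name = '{username}'"
    ).fetchall()

def find_user_safe(conn, username):
    # SAFE: parameterized query; the driver handles escaping.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (username,)
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [(1, "alice"), (2, "bob")])

payload = "x' OR '1'='1"
print(len(find_user_unsafe(conn, payload)))  # 2 -- injection dumps all rows
print(len(find_user_safe(conn, payload)))    # 0 -- no user with that name
```

The danger the study points to is that such patterns look plausible in a code review, which is exactly why AI-suggested code needs the same security scrutiny as human-written code.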
What makes people think an AI system is creative? New research shows that it depends on how much they see of the creative act. The findings have implications for how we research and design creative AI systems, and they also raise fundamental questions about how we perceive creativity in other people.
The work is published in the journal ACM Transactions on Human-Robot Interaction.
“AI is playing an increasingly large role in creative practice. Whether that means we should call it creative or not is a different question,” says Niki Pennanen, the study’s lead author. Pennanen is researching AI systems at Aalto University and has a background in psychology. Together with other researchers at Aalto and the University of Helsinki, he did experiments to find out whether people think a robot is more creative if they see more of the creative act.
RIVR and Evri debut wheeled-legged delivery robots in the UK, aiming to transform last-mile logistics with smart, scalable automation.