
New CrystalRAT malware adds RAT, stealer and prankware features

A new malware-as-a-service called CrystalRAT is being promoted on Telegram, offering remote access, data theft, keylogging, and clipboard hijacking capabilities.

The malware emerged in January with a tiered subscription model. Apart from the Telegram channel, the MaaS was also promoted on YouTube via a dedicated marketing channel that showcased its capabilities.

Kaspersky researchers say in a report today that the malware features strong similarities to WebRAT (Salat Stealer), including the same panel design, Go-based code, and a similar bot-based sales system.

Advancing the Manufacture of Patient Accessible Cell and Gene Therapies at Place-of-Care

A medical school, a non-profit organization, and a biotech company have formed a partnership to develop and manufacture an accessible and commercially viable hematopoietic stem cell (HSC) manufacturing platform for diseases like sickle cell disease (SCD). The alliance combines Trenchant BioSystems’ technology for automating patient-specific cell and gene therapy (CGT) processes, the University of Massachusetts Chan Medical School’s expertise in blood stem cell processes, and Caring Cross’s expertise in increasing patient access.

The collaboration will focus on developing a gene-modified stem cell manufacturing process built on Trenchant’s AutoCell automated CGT manufacturing platform, which is designed to be scalable and to operate at the place of care in an ISO Class 7 environment, increasing efficiency and decreasing costs.

A key reason Trenchant BioSystems’ automated CGT manufacturing platform was selected is its use of a microbubble separation approach as an alternative to immunomagnetic bead-based separation for stem cell gene therapies, point out officials at Caring Cross and Chan Medical School. In addition, AutoCell has a small footprint and significantly fewer facility requirements, important factors for lowering the cost of these therapies, adds Jon Ellis, CEO of Trenchant BioSystems.

AI search robot uses 3D maps and internet knowledge to find lost items

A robot that can locate lost items on command, the latest development at the Technical University of Munich (TUM), combines knowledge from the internet with a spatial map of its surroundings to efficiently find the objects being sought. The new robot from Prof. Angela Schoellig’s TUM Learning Systems and Robotics Lab looks like a broomstick on wheels with a camera mounted at the top. It is one of the first robots that not only integrates image understanding but also applies it to a clearly defined task.

To find a pair of glasses misplaced in the kitchen, for example, the robot has to look around and build a three-dimensional image of the room. The camera provides two-dimensional images, but each pixel also carries depth information. From these depth-annotated pixels the robot builds a spatial map of the environment that is accurate to the centimeter and is constantly updated. A laptop also provides the robot with information about which objects are visible in the image and what significance they have for humans.
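The article does not describe the TUM system's internals, but the step it mentions, turning a pixel plus a depth reading into a 3D point, is standard geometry. The sketch below shows the usual pinhole-camera back-projection with hypothetical intrinsics (`fx`, `fy`, `cx`, `cy` are assumed example values, not the robot's actual calibration):

```python
def backproject(u, v, depth, fx, fy, cx, cy):
    """Map an image pixel (u, v) with a measured depth (meters) to a 3D point
    in the camera frame, using the pinhole model:
        x = (u - cx) * z / fx,  y = (v - cy) * z / fy,  z = depth."""
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return (x, y, z)

# Hypothetical example: 640x480 frame, principal point at the image center,
# focal length 525 px, a pixel 2 meters from the camera.
point = backproject(u=400, v=300, depth=2.0, fx=525.0, fy=525.0, cx=320.0, cy=240.0)
```

Accumulating such points over many camera poses is what yields the kind of centimeter-accurate, continuously updated map the article describes.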

“We have taught the robot to understand its surroundings,” says Prof. Schoellig. The head of the Robotics Lab at the TUM Chair of Safety, Performance and Reliability for Learning Systems aims to develop robots that can navigate any environment independently. Humanoid robots working in factories or robots in care settings in private homes require this newly developed basic understanding, which, as Schoellig explains, “is important for all robots that move in spaces that are constantly changing.” A paper introducing the technology is published in the journal IEEE Robotics and Automation Letters.

Rethinking brain-like artificial intelligence: New study reveals hidden mismatches

A new study by York University researchers has found a potentially striking flaw in artificial intelligence (AI) models. Over the last decade, artificial neural networks (ANNs), a type of AI model built to solve vision tasks for computers, have surprisingly emerged as the current best models of how our own brain’s visual system works. But does current AI really work like a primate brain?

“Artificial intelligence systems are often described as ‘brain-like’ because they can predict activity in parts of the brain that help us recognize objects,” says York University Assistant Professor Kohitij Kar, senior author of a new study. “Until now, scientists mostly tested this in one direction. They asked whether AI models can predict brain activity.”

In this study, the researchers flipped the question—if AI truly mirrors the brain, shouldn’t brain activity also be able to predict what’s happening inside the AI model?—and developed a reverse predictivity test to find the answer. The findings are published in the journal Nature Machine Intelligence.
