
Details have emerged about a new critical security flaw impacting PHP that could be exploited to achieve remote code execution under certain circumstances.

The vulnerability, tracked as CVE-2024-4577, has been described as a CGI argument injection vulnerability affecting all versions of PHP installed on the Windows operating system.

According to a DEVCORE security researcher, the shortcoming makes it possible to bypass protections put in place for another security flaw, CVE-2012-1823.
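For administrators triaging exposure, a minimal version-check sketch is shown below. It assumes the patched releases are PHP 8.1.29, 8.2.20, and 8.3.8 (per the PHP project's June 2024 release announcements) and only inspects the reported version string; it does not confirm whether PHP is actually running in the affected CGI configuration on Windows, so treat it as a first-pass check rather than a definitive assessment.

```python
# check_php_cve_2024_4577.py
# First-pass triage sketch: flags a PHP install whose version predates the
# releases shipped for CVE-2024-4577 (8.1.29 / 8.2.20 / 8.3.8). Verify against
# the official advisory before relying on this; it does not check CGI mode.

import re
import subprocess

PATCHED = {(8, 1): (8, 1, 29), (8, 2): (8, 2, 20), (8, 3): (8, 3, 8)}

def installed_php_version() -> tuple[int, int, int]:
    """Parse the version triple out of `php -v` output."""
    out = subprocess.run(["php", "-v"], capture_output=True, text=True, check=True).stdout
    match = re.search(r"PHP (\d+)\.(\d+)\.(\d+)", out)
    if not match:
        raise RuntimeError("Could not parse PHP version from `php -v` output")
    return tuple(int(p) for p in match.groups())

def is_vulnerable(version: tuple[int, int, int]) -> bool:
    fixed = PATCHED.get(version[:2])
    # Branches without a fix entry (e.g. end-of-life 8.0/7.x) never received a patch.
    return fixed is None or version < fixed

if __name__ == "__main__":
    v = installed_php_version()
    status = "likely vulnerable" if is_vulnerable(v) else "at or above the patched release"
    print(f"PHP {'.'.join(map(str, v))}: {status} for CVE-2024-4577")
```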

A group of Israeli researchers explored the security of the Visual Studio Code marketplace and managed to “infect” over 100 organizations by trojanizing a copy of the popular ‘Dracula Official’ theme to include risky code. Further research into the VSCode Marketplace found thousands of suspicious extensions with millions of installs.

Visual Studio Code (VSCode) is a source code editor published by Microsoft and used by many professional software developers worldwide.

Microsoft also operates an extensions market for the IDE, called the Visual Studio Code Marketplace, which offers add-ons that extend the application’s functionality and provide more customization options.
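As a rough illustration of how an organization might start auditing its exposure to risky add-ons, the sketch below inventories locally installed extensions using VS Code's documented `code --list-extensions --show-versions` command. The output parsing assumes one `publisher.extension@version` entry per line; the script only produces an inventory and does not assess whether any extension is malicious.

```python
# list_vscode_extensions.py
# Inventory sketch: dumps locally installed VS Code extensions so they can be
# reviewed against their Marketplace listings. Adjust the binary name if your
# install exposes `code-insiders` or `codium` instead of `code`.

import subprocess

def installed_extensions(binary: str = "code") -> list[tuple[str, str]]:
    out = subprocess.run(
        [binary, "--list-extensions", "--show-versions"],
        capture_output=True, text=True, check=True,
    ).stdout
    pairs = []
    for line in out.splitlines():
        # Expected format (assumption): publisher.extension@version
        if "@" in line:
            ext_id, version = line.rsplit("@", 1)
            pairs.append((ext_id, version))
    return pairs

if __name__ == "__main__":
    for ext_id, version in installed_extensions():
        print(f"{ext_id:60} {version}")
```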

The disclosure notice also noted several security changes made to the Spaces platform in response to the leak, including the removal of org tokens to improve traceability and auditing capabilities, and the implementation of a key management service (KMS) for Spaces secrets.

Hugging Face said it plans to deprecate traditional read and write tokens “in the near future,” replacing them with fine-grained access tokens, which are currently the default.

Spaces users are advised to switch their Hugging Face tokens to fine-grained access tokens if they are not already using them, and to refresh any key or token that may have been exposed.
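A minimal rotation sketch is shown below, assuming the `huggingface_hub` Python client. The environment variable name is a placeholder; the `login` and `whoami` calls are part of the published library and simply cache the new token locally and confirm which account it resolves to before the old token is revoked.

```python
# rotate_hf_token.py
# Sketch of switching a local environment over to a freshly issued fine-grained
# Hugging Face token. The token is read from a hypothetical environment variable
# rather than hard-coded.

import os
from huggingface_hub import HfApi, login

# Assume the new fine-grained token was created in the Hugging Face UI and
# exported as an environment variable (name chosen here for illustration).
new_token = os.environ["HF_NEW_FINE_GRAINED_TOKEN"]

# Cache the token locally, replacing whatever classic read/write token was stored.
login(token=new_token)

# Confirm the token resolves to the expected account before revoking the old one.
api = HfApi(token=new_token)
print(api.whoami())
```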

Applied in this way, it is not just generative AI but transformational AI. It goes beyond accelerating productivity; it accelerates innovation by sparking new business strategies and revamping existing operations, paving the way for a new era of the autonomous enterprise.

Keep in mind that not all large language models (LLMs) can be tailored for genuine business innovation. Most models are generalists, trained on public information from the internet, and are not experts in your particular way of doing business. However, techniques like Retrieval-Augmented Generation (RAG) allow a general LLM to be augmented with industry- and company-specific data, enabling it to adapt to an organization’s requirements without extensive and expensive retraining.
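The sketch below illustrates the RAG pattern the paragraph describes: retrieve the documents most relevant to a query and prepend them to the prompt before generation. The `embed` and `generate` callables are hypothetical stand-ins for whatever embedding model and LLM client an organization actually uses; no specific vendor API is implied.

```python
# rag_sketch.py
# Minimal retrieval-augmented generation (RAG) sketch: score company documents
# against the user's question, keep the closest matches, and feed them to the
# LLM as context. `embed()` and `generate()` are hypothetical stand-ins.

from typing import Callable, Sequence
import math

def cosine(a: Sequence[float], b: Sequence[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def retrieve(query: str,
             docs: list[str],
             embed: Callable[[str], Sequence[float]],
             k: int = 3) -> list[str]:
    """Return the k documents whose embeddings are closest to the query."""
    q_vec = embed(query)
    return sorted(docs, key=lambda d: cosine(embed(d), q_vec), reverse=True)[:k]

def answer(query: str,
           docs: list[str],
           embed: Callable[[str], Sequence[float]],
           generate: Callable[[str], str]) -> str:
    """Augment the prompt with retrieved company context, then generate."""
    context = "\n\n".join(retrieve(query, docs, embed))
    prompt = (
        "Answer using only the company context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )
    return generate(prompt)
```

In practice the document embeddings would be precomputed and held in a vector store rather than recomputed per query; the point here is only the retrieve-then-augment flow.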

We are still in the nascent stages of advanced AI adoption. Most companies are grappling with the basics—such as implementation, security and governance. However, forward-thinking organizations are already looking ahead. By reimagining the application of generative AI, they are laying the groundwork for businesses to reinvent themselves, ushering in an era where innovation knows no bounds.

Google has accidentally collected children’s voice data, leaked the trips and home addresses of carpool users, and made YouTube recommendations based on users’ deleted watch history, among thousands of other employee-reported privacy incidents, according to a copy, obtained by 404 Media, of an internal Google database that tracks six years’ worth of potential privacy and security issues.

Individually, the incidents, most of which have not previously been publicly reported, may each affect only a relatively small number of people, or were fixed quickly. Taken as a whole, though, the internal database shows how one of the most powerful and important companies in the world manages, and often mismanages, a staggering amount of personal, sensitive data about people’s lives.

The data obtained by 404 Media includes privacy and security issues that Google’s own employees reported internally. These include issues with Google’s own products or data collection practices; vulnerabilities in third party vendors that Google uses; or mistakes made by Google staff, contractors, or other people that have impacted Google systems or data. The incidents include everything from a single errant email containing some PII, through to substantial leaks of data, right up to impending raids on Google offices. When reporting an incident, employees give the incident a priority rating, P0 being the highest, P1 being a step below that. The database contains thousands of reports over the course of six years, from 2013 to 2018.

Summary: ChatGPT Edu, powered by GPT-4o, is designed for universities to responsibly integrate AI into academic and campus operations. This advanced AI tool supports text and vision reasoning, data analysis, and offers enterprise-level security.

Successful applications at institutions like Columbia University and Wharton School highlight its potential. ChatGPT Edu aims to make AI accessible and beneficial across educational settings.

This innovation has the potential to significantly improve AR/VR displays by enabling the projection of more complex and realistic scenes. It also holds promise for applications in image encryption, where the information is encoded into multiple holographic channels for enhanced security.

The research is a significant step forward in developing high-performance metaholograms with a vastly increased information capacity. This study paves the way for exciting new possibilities in various fields, from advanced displays to information encryption.

Want to go on an unforgettable trip? Abstract submission is closing soon! Exciting news from SnT, the Interdisciplinary Centre for Security, Reliability and Trust at the University of Luxembourg: we are thrilled to announce the 1st European Interstellar Symposium, in collaboration with esteemed partners such as the Interstellar Research Group, the Initiative & Institute for Interstellar Studies, the Breakthrough Prize Foundation, and the Luxembourg Space Agency.

This interdisciplinary symposium will delve into the profound questions surrounding interstellar travel, exploring topics such as human and robotic exploration, propulsion, exoplanet research, life support systems, and ethics. Join us to discuss how these insights will impact near-term applications on Earth and in space, covering technologies like optical communications, ultra-lightweight materials, and artificial intelligence.

Don’t miss this opportunity to connect with a community of experts and enthusiasts, all united in a common goal. Check out the “Call for Papers” link in the comment section to secure your spot!

Image credit: Maciej Rębisz, Science Now Studio
#interstellar #conference #Luxembourg #exoplanet

How can rapidly emerging #AI develop into a trustworthy, equitable force? Proactive policies and smart governance, says Salesforce.

These initial steps ignited AI policy conversations amid the acceleration of innovation and technological change. Just as personal computing democratized internet access and coding accessibility, fueling more technology creation, AI is the latest catalyst poised to unlock future innovations at an unprecedented pace. But with such powerful capabilities comes great responsibility: We must prioritize policies that allow us to harness its power while protecting against harm. To do so effectively, we must acknowledge and address the differences between enterprise and consumer AI.

Enterprise versus consumer AI

Salesforce has been actively researching and developing AI since 2014; we introduced our first AI functionality into our products in 2016 and established our Office of Ethical and Humane Use of Technology in 2018. Trust is our top value. That’s why our AI offerings are founded on trust, security and ethics. Like many technologies, there’s more than one use for AI. Many people are already familiar with large language models (LLMs) via consumer-facing apps like ChatGPT. Salesforce is leading the development of AI tools for businesses, and our approach differentiates between consumer-grade LLMs and what we classify as enterprise AI.

Threat actors are targeting Check Point Remote Access VPN devices in an ongoing campaign to breach enterprise networks, the company warned in a Monday advisory.

Remote Access is integrated into all Check Point network firewalls. It can be configured as a client-to-site VPN for access to corporate networks via VPN clients or set up as an SSL VPN Portal for web-based access.

Check Point says the attackers are targeting security gateways with old local accounts that rely on insecure password-only authentication; the company recommends combining these accounts with certificate authentication to prevent breaches.
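As a purely hypothetical illustration of the kind of housekeeping the advisory implies, the sketch below flags password-only entries in a CSV export of local accounts. The `name` and `auth_method` column names are invented for illustration; Check Point's own tooling and advisory remain the authoritative way to identify and remediate affected accounts.

```python
# flag_password_only_accounts.py
# Hypothetical audit sketch: given a CSV export of local gateway accounts with
# `name` and `auth_method` columns (column names invented for illustration),
# flag accounts still relying on password-only authentication so they can be
# moved to certificate-based authentication or disabled.

import csv
import sys

def password_only_accounts(csv_path: str) -> list[str]:
    flagged = []
    with open(csv_path, newline="") as fh:
        for row in csv.DictReader(fh):
            if row.get("auth_method", "").strip().lower() == "password":
                flagged.append(row["name"])
    return flagged

if __name__ == "__main__":
    for account in password_only_accounts(sys.argv[1]):
        print(f"password-only local account: {account}")
```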