Toggle light / dark theme

X plans to collect users’ biometric data, along with education and job history

X, formerly known as Twitter, will begin collecting users’ biometric data, according to its new privacy policy that was first spotted by Bloomberg. The policy also says the company wants to collect users’ job and education history. The policy page indicates that the change will go into effect on September 29.

“Based on your consent, we may collect and use your biometric information for safety, security, and identification purposes,” the updated policy reads. Although X hasn’t specified what it means by biometric information, it is usually used to describe a person’s physical characteristics, such as their face or fingerprints. X also hasn’t provided any details about how it plans to collect it.

The company told Bloomberg that the biometrics are for premium users and will give them the option to submit their government ID and an image in order to add a verification layer. Biometric data may be extracted from both the ID and image for matching purposes, Bloomberg reports.

From Google To Nvidia, Tech Giants Have Hired Red Team Hackers To Break Their AI Models

Other red-teamers prompted GPT-4’s pre-launch version to aid in a range of illegal and nocuous activities, like writing a Facebook post to convince someone to join Al-Qaeda, helping find unlicensed guns for sale and generating a procedure to create dangerous chemical substances at home, according to GPT-4’s system card, which lists the risks and safety measures OpenAI used to reduce or eliminate them.

To protect AI systems from being exploited, red-team hackers think like an adversary to game them and uncover blind spots and risks baked into the technology so that they can be fixed. As tech titans race to build and unleash generative AI tools, their in-house AI red teams are playing an increasingly pivotal role in ensuring the models are safe for the masses. Google, for instance, established a separate AI red team earlier this year, and in August the developers of a number of popular models like OpenAI’s GPT3.5, Meta’s Llama 2 and Google’s LaMDA participated in a White House-supported event aiming to give outside hackers the chance to jailbreak their systems.

But AI red teamers are often walking a tightrope, balancing safety and security of AI models while also keeping them relevant and usable. Forbes spoke to the leaders of AI red teams at Microsoft, Google, Nvidia and Meta about how breaking AI models has come into vogue and the challenges of fixing them.

Supporting the Open Source AI Community

We believe artificial intelligence has the power to save the world —and that a thriving open source ecosystem is essential to building this future.

Thankfully, the open source ecosystem is starting to develop, and we are now seeing open source models that rival closed-source alternatives. Hundreds of small teams and individuals are also working to make these models more useful, accessible, and performant.

These projects push the state of the art in open source AI and help provide a more robust and comprehensive understanding of the technology. They include: instruction-tuning base LLMs; removing censorship from LLM outputs; optimizing models for low-powered machines; building novel tooling for model inference; researching LLM security issues; and many others.

Hackers Can Silently Grab Your IP Through Skype. Microsoft Is In No Rush to Fix It

Hackers are able to grab a target’s IP address, potentially revealing their general physical location, by simply sending a link over the Skype mobile app. The target does not need to click the link or otherwise interact with the hacker beyond opening the message, according to a security researcher who demonstrated the issue and successfully discovered my IP address by using it.

Yossi, the independent security researcher who uncovered the vulnerability, reported the issue to Microsoft earlier this month, according to Yossi and a cache of emails and bug reports he shared with 404 Media. In those emails Microsoft said the issue does not require immediate servicing, and gave no indication that it plans to fix the security hole. Only after 404 Media contacted Microsoft for comment did the company say it would patch the issue in an upcoming update.

The attack could pose a serious risk to activists, political dissidents, journalists, those targeted by cybercriminals, and many more people. At minimum, an IP address can show what area of a city someone is in. An IP address can be even more revealing in a less densely populated area, because there are fewer people who could be associated with it.

This C++ code gets you administrator rights on vulnerable Windows 10 machine

CVE-2023–36874 is not just any vulnerability; rather, it is a zero-day that is being actively exploited. This indicates that the vulnerability was being exploited in the wild even before any remedy was provided, and in some cases, even before it was publicly acknowledged. Because they provide a window of opportunity before updates are sent out, vulnerabilities of this kind are often among the top targets for cybercriminals.

However, taking advantage of this vulnerability is not as simple as one may first believe it to be. According to the advisory notes published by Microsoft, “An attacker must have local access to the targeted machine and must be able to create folders and performance traces on the machine, with restricted privileges that normal users have by default.”

This significantly reduces the danger vector, but it does not remove it entirely. Because Windows is so prevalent throughout the world, even a very minor security flaw may put millions of machines at danger.

Jupiter X Core WordPress plugin vulnerabilities affect 172,000 websites

Accounts may be hijacked and data can be uploaded without authentication if a certain version of Jupiter X Core, a premium plugin for setting up WordPress and WooCommerce websites, is used. These vulnerabilities impact various versions of the plugin.

Jupiter X Core is a visual editor that is both simple and powerful, and it is a component of the Jupiter X theme. The Jupiter X theme is used in more than 172,000 websites.

The second flaw, identified as CVE-2023–38389, makes it possible for unauthenticated attackers to gain control of any WordPress user account so long as they are in possession of the user’s email address. The vulnerability has been given a critical severity level of 9.8 and affects all versions of Jupiter X Core beginning with 3.3.8 and below.

How to minimize data risk for generative AI and LLMs in the enterprise

Head over to our on-demand library to view sessions from VB Transform 2023. Register Here

Enterprises have quickly recognized the power of generative AI to uncover new ideas and increase both developer and non-developer productivity. But pushing sensitive and proprietary data into publicly hosted large language models (LLMs) creates significant risks in security, privacy and governance. Businesses need to address these risks before they can start to see any benefit from these powerful new technologies.

As IDC notes, enterprises have legitimate concerns that LLMs may “learn” from their prompts and disclose proprietary information to other businesses that enter similar prompts. Businesses also worry that any sensitive data they share could be stored online and exposed to hackers or accidentally made public.

How AI brings greater accuracy, speed, and scale to microsegmentation

Head over to our on-demand library to view sessions from VB Transform 2023. Register Here

Microsegmentation is table stakes for CISOs looking to gain the speed, scale and time-to-market advantages that multicloud tech stacks provide digital-first business initiatives.

Gartner predicts that through 2023, at least 99% of cloud security failures will be the user’s fault. Getting microsegmentation right in multicloud configurations can make or break any zero-trust initiative. Ninety percent of enterprises migrating to the cloud are adopting zero trust, but just 22% are confident their organization will capitalize on its many benefits and transform their business. Zscaler’s The State of Zero Trust Transformation 2023 Report says secure cloud transformation is impossible with legacy network security infrastructure such as firewalls and VPNs.

Advances in quantum emitters mark progress toward a quantum internet

The prospect of a quantum internet, connecting quantum computers and capable of highly secure data transmission, is enticing, but making it poses a formidable challenge. Transporting quantum information requires working with individual photons rather than the light sources used in conventional fiber optic networks.

To produce and manipulate , scientists are turning to quantum light emitters, also known as . These atomic-scale defects in semiconductor materials can emit single photons of fixed wavelength or color and allow photons to interact with electron spin properties in controlled ways.

A team of researchers has recently demonstrated a more effective technique for creating quantum emitters using pulsed ion beams, deepening our understanding of how are formed. The work was led by Department of Energy Lawrence Berkeley National Laboratory (Berkeley Lab) researchers Thomas Schenkel, Liang Tan, and Boubacar Kanté who is also an associate professor of electrical engineering and computer sciences at the University of California, Berkeley.