Brazil bans Meta from using personal data for AI training, citing privacy concerns and risks to children. Meta has 5 days to comply or face fines.
Back in June, YouTube quietly made a subtle but significant policy change that, surprisingly, benefits users: it allows them to request the removal of AI-generated videos that simulate their appearance or voice under YouTube’s privacy request process.
First spotted by TechCrunch, the revised policy encourages affected parties to directly request the removal of AI-generated content on privacy grounds rather than, for example, for being misleading or fake. YouTube specifies that claims must be made by the affected individual or authorized representatives. Exceptions include parents or legal guardians acting on behalf of minors, legal representatives, and close family members filing on behalf of deceased individuals.
According to the new policy, if a privacy complaint is filed, YouTube will notify the uploader about the potential violation and provide an opportunity to remove or edit the private information within their video. YouTube may, at its own discretion, grant the uploader 48 hours to use the Trim or Blur tools available in YouTube Studio and remove parts of the footage from the video. If the uploader chooses to remove the video altogether, the complaint will be closed, but if the potential privacy violation remains after those 48 hours, the YouTube Team will review the complaint.
Large language models have emerged as a transformative technology, revolutionizing AI with their ability to generate human-like text with unprecedented fluency and apparent comprehension. Trained on vast datasets of human-generated text, LLMs have unlocked innovations across industries, from content creation and language translation to data analytics and code generation. Recent developments, like OpenAI’s GPT-4o, showcase multimodal capabilities, processing text, vision, and audio inputs in a single neural network.
Despite their potential for driving productivity and enabling new forms of human-machine collaboration, LLMs are still in their nascent stage. They face limitations such as factual inaccuracies, biases inherited from training data, lack of common-sense reasoning, and data privacy concerns. Techniques like retrieval augmented generation aim to ground LLM knowledge and improve accuracy.
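To make the retrieval augmented generation idea concrete, here is a minimal, hypothetical sketch: the most relevant document is retrieved for a query and prepended to the prompt so the model answers from supplied context rather than from memory. The token-overlap scoring, the example corpus, and the `build_prompt` wording are illustrative assumptions; a real system would use dense embeddings, a vector index, and an actual LLM call.

```python
# Minimal sketch of retrieval augmented generation (RAG):
# retrieve the most relevant document for a query, then hand it to the
# model as grounding context. Scoring here is plain token overlap; a real
# system would use dense embeddings and a vector index.

def score(query: str, doc: str) -> float:
    """Crude relevance score: fraction of query tokens found in the document."""
    q_tokens = set(query.lower().split())
    d_tokens = set(doc.lower().split())
    return len(q_tokens & d_tokens) / max(len(q_tokens), 1)

def retrieve(query: str, corpus: list[str]) -> str:
    """Return the single best-matching document for the query."""
    return max(corpus, key=lambda doc: score(query, doc))

def build_prompt(query: str, context: str) -> str:
    """Prepend retrieved context so the model answers from it, not from memory."""
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

if __name__ == "__main__":
    corpus = [
        "YouTube lets people request removal of AI videos that simulate their face or voice.",
        "Brazil's data protection authority ordered Meta to stop training AI on personal data.",
    ]
    query = "What did Brazil order Meta to do?"
    context = retrieve(query, corpus)
    print(build_prompt(query, context))  # this prompt would then be sent to the LLM
```

The grounding step is the whole point: by constraining the model to the retrieved passage, factual errors from the model’s parametric memory are less likely to surface in the answer.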
To explore these issues, I spoke with Amir Feizpour, CEO and founder of AI Science, an expert-in-the-loop business workflow automation platform. We discussed the transformative impacts, applications, risks, and challenges of LLMs across different sectors, as well as the implications for startups in this space.
Microsoft’s AI-powered Recall feature sparked major privacy concerns. Now, it’s becoming an opt-in.
When Descartes said “I think therefore I am” he probably didn’t know that he was answering a security question. Biometrics, the use of behavioral or physical characteristics to identify people, has gotten a big boost in the EU. The Orwellian-sounding HUMABIO (Human Monitoring and Authentication using Biodynamic Indicators and Behavioral Analysis) is a well-funded research project that seeks to combine sensor technology with the latest in biometrics to find reliable and non-obtrusive ways to identify people quickly. One of their proposed methods: scanning your brain stem. That’s right, in addition to reading your retinas, looking at your fingerprints, and monitoring your voice, the security systems of the future may be scanning your brain.
How could they actually read your brain? What kind of patterns would they use to authenticate your identity? Yeah, they haven’t quite figured that out yet. HUMABIO is still definitely in the “pre-commercial” and “proof of concept” phase. They do have a nice ethics manual to read, and they’ve actually written some “stories” that illustrate the uses of their various works in progress, but they haven’t produced a fieldable instrument yet. In fact, this aspect of the STREP (Specific Targeted Research Project) would hardly be remarkable if we didn’t know more about the available technology from other sources.
U.S. government warns of North Korean hackers sending spoofed emails to gather intelligence.
Researchers have demonstrated that facial recognition technology can predict a person’s political orientation from neutral expressions with surprising accuracy, posing significant privacy concerns. This finding suggests our faces may reveal more personal information than previously understood.
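For a sense of how such predictions are typically built, here is a hypothetical sketch: fixed-length face descriptors (embeddings) feed a simple linear classifier. The random vectors, the 128-dimensional size, and the binary labels are stand-ins I am assuming for illustration; nothing here reproduces the study’s data or pipeline.

```python
# Hypothetical sketch: face embeddings -> linear classifier for a binary trait.
# Random vectors stand in for real embeddings, so accuracy hovers near chance.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 128))    # stand-in face embeddings, 128-D each
y = rng.integers(0, 2, size=1000)   # stand-in binary labels (e.g. orientation A/B)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")  # ~0.50 on random data
```

The privacy worry follows directly from this shape of pipeline: once a face can be reduced to a descriptor, any labeled dataset can, in principle, be used to train a predictor for a sensitive attribute.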
An interesting new attack on biometric security has been outlined by a group of researchers from China and the US. PrintListener: Uncovering the Vulnerability of Fingerprint Authentication via the Finger Friction Sound [PDF] proposes a side-channel attack on the sophisticated Automatic Fingerprint Identification System (AFIS). The attack leverages the sound characteristics of a user’s finger swiping on a touchscreen to extract fingerprint pattern features. Following tests, the researchers assert that they can successfully attack “up to 27.9% of partial fingerprints and 9.3% of complete fingerprints within five attempts at the highest security FAR [False Acceptance Rate] setting of 0.01%.” This is claimed to be the first work that leverages swiping sounds to infer fingerprint information.
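To illustrate the flavor of such a side channel, here is a hypothetical pre-processing sketch: a recording of a finger swipe is turned into a time-frequency representation from which friction-related features could then be mined. This does not reproduce the PrintListener pipeline; the file name, window sizes, and energy threshold are assumptions made for the example.

```python
# Hypothetical first step of a swipe-sound side channel: compute a spectrogram
# of a swipe recording and flag the frames where friction noise is present.
# NOT the PrintListener method; parameters and file name are illustrative.
import numpy as np
from scipy.io import wavfile
from scipy.signal import spectrogram

rate, audio = wavfile.read("swipe_recording.wav")  # hypothetical capture, e.g. from a voice call
if audio.ndim > 1:
    audio = audio.mean(axis=1)                     # mix down to mono

# Short-window spectrogram: friction-sound energy over time and frequency.
freqs, times, power = spectrogram(audio, fs=rate, nperseg=1024, noverlap=512)

# Keep frames whose total energy suggests a swipe is actually happening.
frame_energy = power.sum(axis=0)
active = frame_energy > 5 * np.median(frame_energy)
print(f"{active.sum()} of {len(times)} frames flagged as swipe activity")
```

The researchers’ point is that this kind of acoustic signal leaks through everyday channels such as messaging and conferencing apps, which is what makes the attack surface so broad.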
Biometric fingerprint security is widespread and widely trusted; the fingerprint authentication market is forecast to be worth nearly $100 billion by 2032. However, organizations and individuals have grown increasingly aware that attackers may try to steal their fingerprints, so some now take care to keep their fingerprints out of sight and are wary of photos that show their hand details.
A patent from Samsung also suggests the ring will contain a biometric sensor and two narrow screens on the ring’s outside edges to display notifications.