Google is aiming to make passwords easier to use and more secure, at least for users of the Password Manager tool built into its Chrome browser.

Today, the tech giant announced that Password Manager, which generates unique passwords and autofills them across platforms, will soon gain biometric authentication on PC. (Android and iOS have had biometric authentication for some time.) When enabled, it’ll require an additional layer of security, like fingerprint recognition or facial recognition, before Chrome autofills passwords.

Exactly which types of biometrics are available in Password Manager on desktop will depend on the hardware attached to the PC, of course (e.g. a fingerprint reader), as well as whether the PC’s operating system supports it. Beyond “soon,” Google didn’t say when to expect the feature to arrive.
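
To make that flow concrete, here is a minimal sketch of the gating logic, assuming a hypothetical os_biometric_check() placeholder for whatever prompt the operating system exposes (Windows Hello, Touch ID, or a device PIN fallback); it is an illustration, not Chrome’s implementation.

```python
# Illustrative sketch only: gate password autofill behind an extra OS-level check.
# PASSWORD_VAULT and os_biometric_check() are stand-ins, not Chrome internals.

PASSWORD_VAULT = {"example.com": "correct-horse-battery-staple"}

def os_biometric_check() -> bool:
    """Placeholder for the platform's biometric prompt (fingerprint, face, or PIN).
    Always approves here so the sketch runs end to end."""
    return True

def autofill_password(site: str, biometric_gate_enabled: bool = True) -> str | None:
    """Return the stored password only if the extra verification step passes."""
    if site not in PASSWORD_VAULT:
        return None
    if biometric_gate_enabled and not os_biometric_check():
        return None  # verification failed or was cancelled, so nothing is filled in
    return PASSWORD_VAULT[site]

print(autofill_password("example.com"))
```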

In Apple’s quarterly earnings call earlier this month, CEO Tim Cook said the company is planning to “weave” AI into its products, per The Independent. But he also urged caution about the future of the technology.

“I do think it’s very important to be deliberate and thoughtful in how you approach these things,” he said, per Inc. “And there’s a number of issues that need to be sorted as is being talked about in a number of different places, but the potential is certainly very interesting.”

Apple is also telling some employees to limit their use of ChatGPT and other external AI tools, according to an internal document seen by The Wall Street Journal. That includes Copilot, the automated coding tool from Microsoft-owned GitHub.

The intelligence community is mulling over how AI can pose a threat to national security.

The world is captivated by the rise of artificial intelligence (AI) tools like ChatGPT, and they have proved their worth in giving human-like answers to complex questions and even writing research papers. But beyond issues like ‘hallucination,’ where a model repeats or invents incorrect information scraped from the internet, nations are concerned with a more significant problem when it comes to AI.

In a recent interview with Bloomberg, a top U.S. spy official said intelligence agencies should use commercially available AI to keep up with foreign adversaries, because those adversaries will be doing the same.

OpenAI addresses doubts about data privacy and factual inaccuracies in AI responses.

OpenAI, the creator of the chatbot ChatGPT, has spoken publicly about AI safety and how it tries to keep its products safe for users. The company came under criticism following privacy breaches and has approached the problem by rapidly releasing new iterations of its models.

Last week, Italy became the first Western country to put a temporary ban on the use of ChatGPT, citing privacy concerns.


It is as weird as Saudi Arabia giving an AI citizenship.

Italy is the first Western country to prohibit the advanced chatbot ChatGPT, according to authorities. The Italian data protection authority raised privacy concerns about the model, which was developed by the US start-up OpenAI and is backed by Microsoft.

Authorities also accused OpenAI of failing to verify the age of its ChatGPT users and of failing to enforce rules barring users under the age of 13. Given their stage of development, these younger users may be exposed to “unsuitable answers” from the chatbot, according to officials.


Following the trial, the bank will offer the service to its larger base of US merchant clients.

JP Morgan, one of the world’s largest payment-processing companies, has announced plans to pilot biometrics-based payments at select US retailers.

Pilot program roll-out.


This development comes at a time when biometric authentication is gaining popularity. Biometric tools are thought to be the most secure method of transaction authentication. According to Goode Intelligence, global biometric payments are expected to reach $5.8 trillion by 2026, with up to three billion users.

Conor Russomanno, founder and CEO of OpenBCI; Eva Esteban, embedded software engineer at OpenBCI

Galea is an award-winning platform that merges next-generation biometrics with mixed reality. It is the first device to integrate a wide range of physiological signals, including EEG, EMG, EDA, PPG, and eye-tracking, into a single headset. In this session, Conor and Eva will provide a live demonstration of the device and its capabilities, showcasing its potential for a variety of applications, from gaming to training and rehabilitation. They will give an overview of the different hardware and software components of the system, highlighting how it can be used to analyze user experiences in real time. Attendees will get an opportunity to ask questions at the end.
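
Purely as an illustration of what analyzing several physiological streams together can involve (this is not the Galea or OpenBCI SDK; the channels, sampling rates, and helper below are assumptions), here is a short sketch that resamples a few signals onto a common clock so they can be inspected as one time-aligned frame:

```python
# Hypothetical sketch: time-align biosignal channels sampled at different rates.
import numpy as np

def resample_to(signal: np.ndarray, src_hz: float, dst_hz: float, seconds: float) -> np.ndarray:
    """Linearly interpolate a signal onto a common clock."""
    t_src = np.arange(signal.size) / src_hz
    t_dst = np.arange(int(seconds * dst_hz)) / dst_hz
    return np.interp(t_dst, t_src, signal)

seconds, common_hz = 2.0, 250.0
# Stand-in data; a real headset would stream these channels live.
eeg = np.random.randn(int(seconds * 250))  # e.g. EEG at 250 Hz
ppg = np.random.randn(int(seconds * 50))   # e.g. PPG at 50 Hz
eda = np.random.randn(int(seconds * 25))   # e.g. EDA at 25 Hz

frame = np.column_stack([
    resample_to(eeg, 250, common_hz, seconds),
    resample_to(ppg, 50, common_hz, seconds),
    resample_to(eda, 25, common_hz, seconds),
])
print(frame.shape)  # (500, 3): one row per tick on the shared 250 Hz clock
```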

The concept of synthetic data sounds almost too good to be true: it can mimic the distinctive properties of a dataset while dodging a number of issues that afflict real-world data. There are no data privacy concerns, since synthetic data is artificially generated and isn’t tied to real-world persons, and it can be manufactured on demand and in whatever volumes are required. In other words, synthetic data is a boon in a world eternally thirsty for data.

And the fast-moving field of generative AI is lending a hand by making synthetic data easier to generate.

The concept of synthetic data had been around for decades before the autonomous vehicle (AV) industry started using it commercially in the mid-2010s. But for all the problems it solves, creating synthetic data brings a myriad of complications of its own.
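
As a toy sketch of the basic idea (not any particular vendor’s pipeline; the columns and the independent-Gaussian assumption are mine for illustration), the snippet below fits simple per-column statistics to a tiny “real” table and then draws new rows that mimic those statistics without reproducing any actual record:

```python
# Toy synthetic-data generator: sample from per-column Gaussians fitted to real data.
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "real" data: age and annual spend for a handful of customers.
real = np.array([
    [34, 1200.0],
    [41, 2300.0],
    [29,  800.0],
    [52, 3100.0],
    [47, 2700.0],
])

mean = real.mean(axis=0)
std = real.std(axis=0)

def sample_synthetic(n_rows: int) -> np.ndarray:
    """Draw rows from independent Gaussians fitted to each column."""
    return rng.normal(loc=mean, scale=std, size=(n_rows, real.shape[1]))

synthetic = sample_synthetic(1000)
print(synthetic.mean(axis=0))  # close to the real means, yet no row is a real person
```

A toy generator like this preserves only each column’s average and spread and loses the correlations between columns, which is exactly the kind of complication that makes realistic synthetic data hard to produce and helps explain the interest in generative models for the job.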