
Financial crime, as a wider category of cybercrime, continues to be one of the most potent online threats, covering nefarious activities as diverse as fraud, money laundering and funding terrorism. Today, one of the startups building data intelligence solutions to help combat it is announcing a fundraise to continue fueling its growth.

Ripjar, a U.K. company founded by five data scientists who previously worked together in British intelligence at the Government Communications Headquarters (GCHQ, the U.K.’s equivalent of the NSA), has raised $36.8 million (£28 million) in a Series B, money that it plans to use to continue expanding the scope of its AI platform — which it calls Labyrinth — and scaling the business.

Labyrinth, as Ripjar describes it, works with both structured and unstructured data, using natural language processing and an API-based platform that lets organizations incorporate any data source they would like to analyse and monitor for activity. It automatically and in real time checks these against other data sources like sanctions lists, politically exposed persons (PEPs) lists and transaction alerts.
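Ripjar has not published Labyrinth's internals, but the screening step it describes, checking incoming records against watchlists such as sanctions and PEP lists, can be sketched with fuzzy name matching. The watchlist entries and threshold below are purely illustrative:

```python
from difflib import SequenceMatcher

# Hypothetical watchlist; a real platform ingests full sanctions and PEP lists.
SANCTIONS_LIST = {"Ivan Petrov", "Acme Shell Holdings"}

def screen_name(name: str, threshold: float = 0.85) -> list[str]:
    """Return watchlist entries whose string similarity to `name` meets the threshold."""
    hits = []
    for entry in SANCTIONS_LIST:
        score = SequenceMatcher(None, name.lower(), entry.lower()).ratio()
        if score >= threshold:
            hits.append(entry)
    return hits

print(screen_name("Ivan Petrov"))   # exact match is flagged
print(screen_name("Ivan Petrof"))   # near-miss spelling is also flagged for review
print(screen_name("Jane Smith"))    # no watchlist hit
```

Fuzzy rather than exact matching matters here because sanctioned names are often transliterated inconsistently; production systems use far more sophisticated matching than this single-ratio sketch.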

Your accent can nod to where you come from; the pace of your speech can reveal your emotional state; your voiceprint can be used to identify you.

Linguists, companies and governments are now parsing our voices for these details, using them as biometric tools to uncover more and more information about us.
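Voiceprint identification generally reduces a recording to a fixed-length feature vector and compares vectors numerically. The toy sketch below uses cosine similarity on made-up four-dimensional vectors; real systems extract high-dimensional spectral features, and the numbers and threshold here are illustrative assumptions only:

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def same_speaker(print_a: list[float], print_b: list[float], threshold: float = 0.9) -> bool:
    """Toy decision rule: treat vectors above the similarity threshold as the same voice."""
    return cosine_similarity(print_a, print_b) >= threshold

enrolled = [0.2, 0.8, 0.5, 0.1]          # stored voiceprint (illustrative numbers)
new_sample = [0.21, 0.79, 0.52, 0.08]    # close to the enrolled print
impostor = [0.9, 0.1, 0.05, 0.7]         # a different speaker

print(same_speaker(enrolled, new_sample))  # True
print(same_speaker(enrolled, impostor))    # False
```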

While a lot of this information is used to make our lives easier, it has also been used to controversial and worrying effect.

Biometrics may be the best way to protect society against the threat of deepfakes, and new solutions are also being proposed by the Content Authenticity Initiative and the AI Foundation.

Deepfakes are the most serious criminal threat posed by artificial intelligence, according to a new report funded by the Dawes Centre for Future Crime at University College London (UCL), which lists the top 20 worries for criminal facilitation over the next 15 years.

The study is published in the journal Crime Science, and ranks the 20 AI-enabled crimes based on the harm they could cause.

At a higher level, these privacy tools are developed in the open so that they can run on multiple cloud infrastructures, giving companies confidence that portability exists.

That openness is also why deep learning is not yet part of the solution: there is still not enough transparency into deep-learning layers to earn the trust that privacy demands. Rather, these systems aim to help manage information privacy for machine learning applications.

Many artificial intelligence applications are not open and can put privacy at risk. Adding good tools to address the privacy of data used by AI systems is an important early step toward building trust into the AI equation.
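The article does not name a specific technique, but one widely used approach for managing the privacy of data feeding machine learning systems is differential privacy. A minimal sketch of its simplest building block, the Laplace mechanism for releasing a noisy count, looks like this (all names and numbers are illustrative):

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Draw Laplace(0, scale) noise via inverse-CDF sampling."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_count(values: list, epsilon: float = 1.0) -> float:
    """Release a count with Laplace noise; the sensitivity of a count is 1,
    so a noise scale of 1/epsilon gives epsilon-differential privacy."""
    return len(values) + laplace_noise(1.0 / epsilon)

random.seed(42)  # deterministic for demonstration only
ages_over_40 = [44, 51, 63, 47]
print(private_count(ages_over_40, epsilon=1.0))  # roughly 4, plus noise
```

Smaller epsilon means more noise and stronger privacy; the analyst trades accuracy for a formal guarantee that no single record dominates the output.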

A security expert revealed this week that an exploit commonly used against Windows users who run Microsoft Office can sneak into macOS systems as well.

A former NSA security specialist who addressed the Black Hat security conference this week summarized his research into the new use for a very old exploit.

Patrick Wardle explained that the exploit capitalizes on the use of macros in Microsoft Office. Hackers have long used the approach to trick users into granting permission to activate the macros, which in turn surreptitiously launch malicious code.
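On the defensive side, a quick triage check for this class of attack is possible because modern Office files (.docx, .docm, .xlsm and friends) are ZIP archives, and documents that can run VBA macros embed a `vbaProject.bin` entry. This is a minimal sketch, not a full scanner, and legacy binary formats need a dedicated parser:

```python
import zipfile

def has_vba_macros(path: str) -> bool:
    """Return True if an OOXML Office file embeds a VBA project (i.e. can run macros)."""
    try:
        with zipfile.ZipFile(path) as zf:
            return any(name.endswith("vbaProject.bin") for name in zf.namelist())
    except zipfile.BadZipFile:
        return False  # legacy binary .doc/.xls formats need a different parser

# Usage: if has_vba_macros("invoice.docm") is True, the file can run macros
# and deserves extra scrutiny before any "Enable Macros" prompt is accepted.
```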

Consumers are becoming increasingly cautious about sharing their data, as data integrity and security have turned into growing concerns. In any case, with nations rolling out facial recognition, even travelers need to start thinking seriously about what kind of data they may be unwittingly handing over to countries, individuals and places.

Facial recognition is a technology capable of identifying or verifying an individual from a digital image or a video frame. It works by comparing selected facial features against faces stored in a database. The technology is used in security systems and can be compared with other biometrics such as fingerprint or iris recognition. More recently, it has been picked up as a commercial identification and marketing tool. Most people now carry a phone whose camera can recognize faces to perform tasks such as unlocking the device or making payments.
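The comparison step described above can be thought of as measuring distance between numeric feature vectors ("templates" or embeddings). The sketch below uses tiny made-up four-dimensional templates and a hypothetical threshold; real systems compute embeddings of 128 or more dimensions from the image itself:

```python
import math

def euclidean(a: list[float], b: list[float]) -> float:
    """Straight-line distance between two equal-length feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def identify(probe: list[float], database: dict, threshold: float = 0.6):
    """Return the closest enrolled identity, or None if nobody is within the threshold."""
    best_name, best_dist = None, float("inf")
    for name, template in database.items():
        d = euclidean(probe, template)
        if d < best_dist:
            best_name, best_dist = name, d
    return best_name if best_dist <= threshold else None

# Hypothetical enrolled templates (illustrative numbers only).
db = {"alice": [0.1, 0.9, 0.3, 0.5], "bob": [0.8, 0.2, 0.7, 0.1]}
print(identify([0.12, 0.88, 0.31, 0.49], db))  # close to alice's template
print(identify([0.5, 0.5, 0.5, 0.9], db))      # no close match: unknown face
```

The threshold is the policy knob: lower values reduce false matches but reject more genuine users, which is exactly the trade-off facial recognition deployments have to tune.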

The worldwide market for facial recognition cameras and software will be worth an estimated $7.8 billion, predicts MarketsandMarkets. No longer confined to sci-fi films and books, the technology is being used across vertical markets, from helping banks recognize customers to enabling governments to keep tabs on criminals. Let’s look at some of the top countries adopting facial recognition technology.

Google has begun rolling out a feature that allows you to configure how long it can save data from all of the Google services you use, like maps, search and everything you do online. Until now, you had to manually delete this data or turn it off entirely. Deleting it means Google doesn’t always have enough information about you to make recommendations on what it thinks you’ll like, or where you might want to go.

Now, you can tell Google to automatically delete personal information after three months or 18 months. Here’s how you can do that.
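Under the hood, an auto-delete feature like this is a retention policy: anything older than the chosen window is purged on a schedule. The sketch below is a generic illustration with hypothetical record structures, approximating three months as 90 days, and is not Google's implementation:

```python
from datetime import datetime, timedelta

def purge_old(records: list[dict], retention_days: int, now: datetime) -> list[dict]:
    """Keep only records whose timestamp falls inside the retention window."""
    cutoff = now - timedelta(days=retention_days)
    return [r for r in records if r["timestamp"] >= cutoff]

now = datetime(2019, 6, 1)
history = [
    {"query": "coffee near me", "timestamp": datetime(2019, 5, 20)},
    {"query": "old search",     "timestamp": datetime(2018, 1, 5)},
]

# Roughly three months (~90 days): only the recent record survives.
print(purge_old(history, 90, now))
```

With an 18-month window (roughly 548 days) both records in this example would be kept, which mirrors the trade-off the article describes: longer retention means better recommendations, shorter retention means less stored data.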


Google has a new feature that will automatically delete the data it has on how you use its apps and what you do on the web.