
Today we are going to discuss drug enforcement from an interesting technological angle.

Brian Drake is the Director of Artificial Intelligence for the Defense Intelligence Agency’s (DIA) Directorate of Science and Technology. Mr. Drake works with the DIA’s Future Capabilities and Innovation Office, and he also leads an initiative to test the effectiveness of different applications of artificial intelligence at solving various mission problems, including using AI to combat the opioid crisis through a DIA program known as SABLE SPEAR.

Prior to this role, Brian was a Senior Intelligence Analyst and Branch Chief in the DIA’s Americas and Transregional Threats Center (ATTC), and before joining ATTC he was a Management Analyst with DIA’s Chief of Staff.

For DIA’s intelligence analysis mission, he has worked worldwide targets in narcotics, emerging and disruptive technologies, and weapons of mass destruction.

Over the past few decades, technological advances have enabled the development of increasingly sophisticated, immersive and realistic video games. One of the most noteworthy among these advances is virtual reality (VR), which allows users to experience games or other simulated environments as if they were actually navigating them, via the use of electronic wearable devices.

Most existing VR systems primarily focus on the sense of vision, using headsets that allow users to see what is happening in a game or in another simulated environment right before their eyes, rather than on a screen placed in front of them. While this can lead to highly engaging visual experiences, these experiences are not always matched by other types of sensory inputs.

Researchers at Nagoya University’s School of Informatics in Japan have recently created a new VR game that integrates immersive audiovisual experiences with tactile feedback. This game, presented in a paper published in the Journal of Robotics, Networking and Artificial Life, uses a player’s biometric data to create a spherical object in the VR space that beats in alignment with his/her heart. The player can thus perceive the beating of his/her heart via this object visually, auditorily and tactually.
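The core idea — driving a virtual object's pulsation from a measured heart rate — can be sketched in a few lines. This is an illustrative stand-in, not the researchers' actual implementation; the function name `pulse_scale` and the rendering parameters `base` and `amp` are assumptions for the example.

```python
import math

def pulse_scale(t, bpm, base=1.0, amp=0.15):
    """Return the sphere's scale factor at time t (seconds) so it
    'beats' in sync with a measured heart rate (bpm).
    base and amp are illustrative rendering parameters, not values
    from the paper."""
    period = 60.0 / bpm                # seconds per beat
    phase = (t % period) / period      # position within the beat, 0..1
    # Sharp expansion that decays over the beat, loosely mimicking
    # the feel of a heartbeat; sin() returns to zero at phase 1.
    return base + amp * math.exp(-5.0 * phase) * math.sin(math.pi * phase)

# Sample the sphere's scale over roughly one beat at 72 bpm
samples = [round(pulse_scale(t / 10.0, 72), 3) for t in range(9)]
```

A rendering loop would evaluate such a function each frame and apply the result to the sphere's transform, while the same beat timing drives the audio and haptic channels.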

Financial crime, as a wider category of cybercrime, continues to be one of the most potent online threats, covering nefarious activities as diverse as fraud, money laundering and funding terrorism. Today, one of the startups that has been building data intelligence solutions to help combat it is announcing a fundraise to continue fueling its growth.

Ripjar, a U.K. company founded by five data scientists who previously worked together in British intelligence at the Government Communications Headquarters (GCHQ, the U.K.’s equivalent of the NSA), has raised $36.8 million (£28 million) in a Series B, money that it plans to use to continue expanding the scope of its AI platform — which it calls Labyrinth — and scaling the business.

Labyrinth, as Ripjar describes it, works with both structured and unstructured data, using natural language processing and an API-based platform that lets organizations incorporate any data source they would like to analyse and monitor for activity. It automatically and in real time checks these against other data sources like sanctions lists, politically exposed persons (PEPs) lists and transaction alerts.
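The screening step described above — checking incoming names against sanctions and PEP lists in real time — can be illustrated with a toy example. This is not Ripjar's API; the list contents, the `screen` function and the similarity threshold are assumptions, and production systems use official lists (e.g. OFAC, UN, EU) with far more sophisticated matching.

```python
from difflib import SequenceMatcher

# Illustrative watchlist; real screening draws on official
# sanctions and PEP lists, not a hard-coded set.
WATCHLIST = {"ivan petrov", "acme trading ltd"}

def normalize(name):
    """Lowercase and collapse whitespace so trivial formatting
    differences don't defeat the match."""
    return " ".join(name.lower().split())

def screen(name, threshold=0.85):
    """Return watchlist entries that fuzzily match `name`, with a
    similarity score — a toy stand-in for the real-time checks a
    platform like Labyrinth performs against sanctions lists."""
    candidate = normalize(name)
    hits = []
    for entry in WATCHLIST:
        score = SequenceMatcher(None, candidate, entry).ratio()
        if score >= threshold:
            hits.append((entry, round(score, 2)))
    return hits

alerts = screen("Ivan  Petrov")   # matches despite case/spacing noise
```

In practice the same check runs continuously against streams of transactions and customer records, with matches surfaced as alerts for analysts to review.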

Your accent can nod to where you come from; the pace of your speech can reveal your emotional state; your voiceprint can be used to identify you.

Linguists, companies and governments are now parsing our voices for these details, using them as biometric tools to uncover more and more information about us.

While a lot of this information is used to make our lives easier, it has also been used to controversial and worrying effect.

Biometrics may be the best way to protect society against the threat of deepfakes, and new solutions are also being proposed by the Content Authenticity Initiative and the AI Foundation.

Deepfakes are the most serious criminal threat posed by artificial intelligence, topping a list of the top 20 worries for criminal facilitation over the next 15 years, according to a new report funded by the Dawes Centre for Future Crime at University College London (UCL).

The study is published in the journal Crime Science, and ranks the 20 AI-enabled crimes based on the harm they could cause.