
https://www.hdiac.org/podcast/neuroweapons-part-1/

In part one of this two-part podcast, HDIAC analyst Mara Kiernan interviews Dr. James Giordano, a Professor in the Departments of Neurology and Biochemistry at Georgetown University Medical Center. The discussion begins with Dr. Giordano defining neuroweapons and explaining their applied technologies. He provides insight into how international weapons conventions govern the use of neuroweapons and discusses the threats they present in today’s environment. Dr. Giordano goes on to review the need for continuous monitoring, including his views on the challenges and potential solutions for effectively understanding global developments in neuroweapon technologies.


Become a member of the Homeland Defense and Security Information Analysis Center: https://www.hdiac.org/register/

Cloudflare, the leading content delivery network and cloud security platform, wants to make AI accessible to developers.


Developers can use JavaScript to write AI inference code and deploy it to Cloudflare’s edge network, but it is also possible to invoke the models through a simple REST API from any language. This makes it easy to infuse generative AI into web, desktop and mobile applications that run in diverse environments.
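For illustration, here is a minimal sketch of that REST path in TypeScript, assuming a text-generation model such as @cf/meta/llama-2-7b-chat-int8; the account ID and API token are placeholders, and the endpoint follows Cloudflare’s documented /ai/run pattern.

```typescript
// Hedged sketch: calling a Workers AI text-generation model over Cloudflare's REST API
// from any environment that can make HTTPS requests. The account ID, API token, and
// model name are placeholders; the endpoint follows the documented /ai/run pattern.
const ACCOUNT_ID = "<your-account-id>";
const API_TOKEN = "<your-api-token>";

async function runModel(prompt: string): Promise<unknown> {
  const res = await fetch(
    `https://api.cloudflare.com/client/v4/accounts/${ACCOUNT_ID}/ai/run/@cf/meta/llama-2-7b-chat-int8`,
    {
      method: "POST",
      headers: {
        Authorization: `Bearer ${API_TOKEN}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ prompt }),
    },
  );
  // Expected response shape: { result: { response: "..." }, success: true, ... }
  return res.json();
}

runModel("Explain what an edge network is in one sentence.").then(console.log);
```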

In September 2023, Workers AI was initially launched with inference capabilities in seven cities. However, Cloudflare’s ambitious goal was to support Workers AI inference in 100 cities by the end of the year, with near-ubiquitous coverage by the end of 2024.

Cloudflare is one of the first CDN and edge network providers to enhance its edge network with AI capabilities through GPU-powered Workers AI, a vector database and an AI Gateway for AI deployment management. Partnering with tech giants like Meta and Microsoft, it offers a wide model catalog and ONNX Runtime optimization.
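For the edge-deployment path, a hedged sketch of a Worker running the same model through the @cloudflare/ai binding follows; the binding name "AI" is an assumption and must match the [ai] binding declared in the project’s wrangler.toml.

```typescript
// Hedged sketch of the edge-deployment path: a Worker that runs a Workers AI model
// through the @cloudflare/ai binding. The binding name "AI" is an assumption and must
// match the [ai] binding declared in wrangler.toml.
import { Ai } from "@cloudflare/ai";

export interface Env {
  AI: any; // Workers AI binding injected by the runtime
}

export default {
  async fetch(_request: Request, env: Env): Promise<Response> {
    const ai = new Ai(env.AI);
    const answer = await ai.run("@cf/meta/llama-2-7b-chat-int8", {
      prompt: "What is a content delivery network?",
    });
    return Response.json(answer);
  },
};
```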

As connectivity continues to expand, and the number of devices on a network with it, IoT’s ambition of creating a world of connected things grows. Yet the pros come with cons, and the flip side of this growth is the security challenges that accompany it.

Security has been a perennial concern for IoT since its utilisation moved beyond basic functions like tallying the stock levels of a soda machine. However, for something of such interest to the industry, plans for standardisation remain elusive. Instead, piecemeal plans to ensure different elements of security, like zero trust for identity and access management for devices on a network, or network segmentation for containing breaches, are undertaken by different companies according to their needs.

Yet with the advancement of technology, developments such as quantum computing pose a risk to classical cryptography methods which, among other things, ensure that data privacy is preserved when data is transferred from device to device or even to the Cloud.
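To make the stakes concrete, here is an illustrative sketch (in TypeScript, using Node’s built-in crypto module, not tied to any specific IoT product) of the kind of classical key exchange that protects device-to-cloud traffic today and that a large quantum computer running Shor’s algorithm could break; the telemetry payload and key-derivation step are illustrative only.

```typescript
// Illustrative sketch: classical ECDH key agreement of the kind commonly used to protect
// device-to-cloud traffic today. Shor's algorithm on a large quantum computer could recover
// these private keys, which is why post-quantum KEMs such as ML-KEM (Kyber) are being
// standardised as replacements for the key-exchange step.
import {
  generateKeyPairSync,
  diffieHellman,
  createHash,
  createCipheriv,
  randomBytes,
} from "node:crypto";

// Device and cloud each generate an elliptic-curve key pair (P-256).
const device = generateKeyPairSync("ec", { namedCurve: "prime256v1" });
const cloud = generateKeyPairSync("ec", { namedCurve: "prime256v1" });

// Both sides derive the same shared secret from the other party's public key.
const deviceSecret = diffieHellman({
  privateKey: device.privateKey,
  publicKey: cloud.publicKey,
});
const cloudSecret = diffieHellman({
  privateKey: cloud.privateKey,
  publicKey: device.publicKey,
});

// Hash the shared secret into a symmetric key and encrypt a made-up telemetry reading.
const key = createHash("sha256").update(deviceSecret).digest();
const iv = randomBytes(12);
const cipher = createCipheriv("aes-256-gcm", key, iv);
const ciphertext = Buffer.concat([cipher.update("temp=21.4C", "utf8"), cipher.final()]);

console.log("secrets match:", deviceSecret.equals(cloudSecret), "ciphertext bytes:", ciphertext.length);
```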

This is a 1 hour general-audience introduction to Large Language Models: the core technical component behind systems like ChatGPT, Claude, and Bard. What they are, where they are headed, comparisons and analogies to present-day operating systems, and some of the security-related challenges of this new computing paradigm.
As of November 2023 (this field moves fast!).

Context: This video is based on the slides of a talk I gave recently at the AI Security Summit. The talk was not recorded but a lot of people came to me after and told me they liked it. Seeing as I had already put in one long weekend of work to make the slides, I decided to just tune them a bit, record this round 2 of the talk and upload it here on YouTube. Pardon the random background, that’s my hotel room during the Thanksgiving break.

- Slides as PDF: https://drive.google.com/file/d/1pxx_ZI7O-Nwl7ZLNk5hI3WzAsTL…share_link (42MB)
- Slides as Keynote: https://drive.google.com/file/d/1FPUpFMiCkMRKPFjhi9MAhby68MH…share_link (140MB)


Security researchers bypassed Windows Hello fingerprint authentication on Dell Inspiron, Lenovo ThinkPad, and Microsoft Surface Pro X laptops in attacks exploiting security flaws found in the embedded fingerprint sensors.

Blackwing Intelligence security researchers discovered vulnerabilities during research sponsored by Microsoft’s Offensive Research and Security Engineering (MORSE) to assess the security of the top three embedded fingerprint sensors used for Windows Hello fingerprint authentication.

Blackwing’s Jesse D’Aguanno and Timo Teräs targeted embedded fingerprint sensors made by ELAN, Synaptics, and Goodix on Microsoft Surface Pro X, Lenovo ThinkPad T14, and Dell Inspiron 15.

Good technologies disappear.


In the company’s cloud market study, almost all organizations say that security, reliability and disaster recovery are important considerations in their AI strategy. Also key is the need to manage and support AI workloads at scale. In the area of AI data rulings and regulation, many firms think that AI data governance requirements will force them to more comprehensively understand and track data sources, data age and other key data attributes.

“AI technologies will drive the need for new backup and data protection solutions,” said Debojyoti ‘Debo’ Dutta, vice president of engineering for AI at Nutanix. “[Many companies are] planning to add mission-critical, production-level data protection and Disaster Recovery (DR) solutions to support AI data governance. Security professionals are racing to use AI-based solutions to improve threat and anomaly detection, prevention and recovery while bad actors race to use AI-based tools to create new malicious applications, improve success rates and attack surfaces, and improve detection avoidance.”

While it’s fine to ‘invent’ gen-AI, putting it into motion evidently means thinking about its existence as a cloud workload in and of itself. With cloud computing still misunderstood in some quarters and the cloud-native epiphany not shared by every company, the additional strains (for want of a kinder term) that gen-AI puts on the cloud should make us think more directly about AI as a cloud workload and about how we run it.

It is estimated that 95% of the planet’s population has access to broadband internet, via cable or a mobile network. However, there are still some places and situations in which staying connected can be very difficult. Quick responses are necessary in emergency situations, such as after an earthquake or during a conflict. So too are reliable telecommunications networks that are not susceptible to outages and infrastructure damage, since networks are used to share data that is vital for people’s well-being.

A recent article, published in the journal Aerospace, proposes the use of nanosatellites to provide comprehensive and stable coverage in areas that are hard to reach using long-range communications. It is based on the bachelor’s and master’s degree final projects of Universitat Oberta de Catalunya (UOC) graduate David N. Barraca Ibort.

The paper is co-authored by Raúl Parada, a researcher at the Telecommunications Technological Center of Catalonia (CTTC/CERCA) and a course instructor with the UOC’s Faculty of Computer Science, Multimedia and Telecommunications; Carlos Monzo, a researcher and member of the same faculty; and Víctor Monzón, a researcher at the Interdisciplinary Centre for Security, Reliability and Trust at the University of Luxembourg.

As part of pioneering the security of satellite communication in space, NASA is funding a groundbreaking project at the University of Miami’s Frost Institute for Data Science and Computing (IDSC) which will enable augmenting traditional large satellites with nanosatellites or constellations of nanosatellites.

These nanosatellites are designed to accomplish diverse goals, ranging from communication and weather prediction to Earth science research and observational data gathering. Technical innovation is a hallmark of NASA, a global leader in the development of novel technologies that enable US space missions and translate to a wide variety of applications from Space and Earth science to consumer goods and to national and homeland security.

Alongside advances in satellite technology and the reduced cost of deployment and operation, nanosatellites also come with significant challenges for the protection of their communication networks. Specifically, small satellites are owned and operated by a wide variety of public and private sector organizations, expanding the attack surface for cyber exploitation. The scenario is similar to Wi-Fi network vulnerabilities. These systems give adversaries an opportunity to threaten national security, and they raise economic concerns for satellite companies, operators, and users.

On Wednesday, a collaborative whiteboard app maker called “tldraw” made waves online by releasing a prototype of a feature called “Make it Real” that lets users draw an image of software and bring it to life using AI. The feature uses OpenAI’s GPT-4V API to visually interpret a vector drawing and turn it into functioning Tailwind CSS and JavaScript web code that can replicate user interfaces or even create simple implementations of games like Breakout.

Users can experiment with a live demo of Make It Real online. However, running it requires providing an API key from OpenAI, which is a security risk. If others intercept your API key, they could use it to rack up a very large bill in your name (OpenAI charges by the amount of data moving into and out of its API). Those technically inclined can run the code locally, but it will still require OpenAI API access.
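As a rough illustration of the idea (not tldraw’s actual code or prompt), a sketch like the following sends a rendered image of the drawing to OpenAI’s chat completions API and asks GPT-4V for a single self-contained HTML file; the prompt wording and the drawingToHtml function name are assumptions made for this sketch.

```typescript
// Rough illustration of the "draw it, then make it real" idea; this is not tldraw's code.
// It sends a rendered PNG of the drawing (as a data URL) to OpenAI's chat completions API
// and asks GPT-4V for a single self-contained HTML file. The prompt wording and the
// drawingToHtml function name are assumptions made for this sketch.
async function drawingToHtml(pngDataUrl: string, apiKey: string): Promise<string> {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify({
      model: "gpt-4-vision-preview",
      max_tokens: 4096,
      messages: [
        {
          role: "user",
          content: [
            {
              type: "text",
              text: "Turn this wireframe into one self-contained HTML file using Tailwind CSS and vanilla JavaScript. Reply with the HTML only.",
            },
            { type: "image_url", image_url: { url: pngDataUrl } },
          ],
        },
      ],
    }),
  });
  const data: any = await res.json();
  return data.choices[0].message.content; // the generated HTML document
}
```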