
Not surprisingly, the Intelligence Community (IC), Department of Defense (DoD), and first responders at the Department of Homeland Security (DHS) and other agencies are also interested in wearable electronics. With its Smart Electrically Powered and Networked Textile Systems (SMART ePANTS) program, the Intelligence Advanced Research Projects Activity (IARPA) is delivering the largest single investment ever made to make Advanced Smart Textiles (AST) a reality.

According to SMART ePANTS Program Manager Dr. Dawson Cagle, developing clothing with sensor systems that can record audio, video, and geolocation data would significantly improve the capabilities of IC, DoD, and DHS staff and others working in dangerous or high-stress environments, such as crime scenes and arms control inspections. Dr. Cagle also asserted that ASTs could collect information the wearer wouldn’t otherwise notice, making them more effective on the job.

In software application development environments, the consensus is gravitating towards using AI as a helping and testing mechanism rather than handing it the job of creating software code wholesale on its own. The thinking is that if so-called citizen developers, business laypeople creating code with software robots, take that route, they will never be able to wield the customization power (and the ability to cover security risks) that hard-core software developers have.

As we now grow with AI and become more assured about where its impact should be felt, we can logically look to the whole spectrum of automation it offers. This involves the concept of so-called hypermodal AI, i.e. intelligence capable of working in different ‘modes’: some that predict, some that help determine causes and some that generate.

Today describing itself as a unified observability and security platform company (IT vendors are fond of changing their opening ‘elevator pitch’ every few years), Dynatrace has now expanded its Davis AI engine to create hypermodal AI that converges fact-based predictive AI and causal AI insights with new generative AI capabilities.
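
To make the ‘modes’ concrete, here is a minimal sketch of how a predictive step, a causal step and a generative step might be chained in a toy observability pipeline. All class and function names are hypothetical illustrations, not Dynatrace’s Davis AI.

```python
# Conceptual sketch only: a toy "hypermodal" pipeline chaining predictive,
# causal and generative steps. Names are hypothetical; this is not Davis AI.
from dataclasses import dataclass


@dataclass
class Incident:
    metric: str
    observed: float
    forecast: float
    suspected_cause: str | None = None
    summary: str | None = None


def predict_anomaly(metric: str, history: list[float], observed: float) -> Incident | None:
    """Predictive mode: flag values that deviate sharply from a naive forecast."""
    forecast = sum(history) / len(history)
    if abs(observed - forecast) > 3 * (max(history) - min(history) + 1e-9):
        return Incident(metric=metric, observed=observed, forecast=forecast)
    return None


def attribute_cause(incident: Incident, recent_changes: list[str]) -> Incident:
    """Causal mode: attribute the anomaly to the most recent known change (toy heuristic)."""
    incident.suspected_cause = recent_changes[-1] if recent_changes else "unknown"
    return incident


def generate_summary(incident: Incident) -> Incident:
    """Generative mode: stand-in for an LLM call that writes a human-readable summary."""
    incident.summary = (
        f"{incident.metric} hit {incident.observed:.1f} vs. forecast "
        f"{incident.forecast:.1f}; suspected cause: {incident.suspected_cause}."
    )
    return incident


if __name__ == "__main__":
    hit = predict_anomaly("p95_latency_ms", history=[110, 115, 112, 118], observed=480)
    if hit:
        print(generate_summary(attribute_cause(hit, ["deploy api-gateway v2.3.1"])).summary)
```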

Apple has rolled out security updates to iOS, iPadOS, macOS, tvOS, watchOS, and Safari to address several security vulnerabilities, including one zero-day bug that has been actively exploited in the wild.

Tracked as CVE-2023-38606, the shortcoming resides in the kernel and could allow a malicious app to modify sensitive kernel state. The company said it was addressed with improved state management.

“Apple is aware of a report that this issue may have been actively exploited against versions of iOS released before iOS 15.7.1,” the tech giant noted in its advisory.

Zero-day vulnerabilities in Windows Installers for the Atera remote monitoring and management software could act as a springboard to launch privilege escalation attacks.

The flaws, discovered by Mandiant on February 28, 2023, have been assigned the identifiers CVE-2023-26077 and CVE-2023-26078, with the issues remediated in versions 1.8.3.7 and 1.8.4.9 released by Atera on April 17, 2023, and June 26, 2023, respectively.

“The ability to initiate an operation from a NT AUTHORITY\SYSTEM context can present potential security risks if not properly managed,” security researcher Andrew Oliveau said. “For instance, misconfigured Custom Actions running as NT AUTHORITY\SYSTEM can be exploited by attackers to execute local privilege escalation attacks.”
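
As a defensive illustration of the issue Oliveau describes, the sketch below, a hypothetical audit helper rather than Mandiant’s tooling, reads an MSI package’s CustomAction table with Python’s standard msilib module (Windows-only, available up to Python 3.12) and flags deferred actions configured to run without impersonation, i.e. in the NT AUTHORITY\SYSTEM context.

```python
# Defensive sketch (illustrative assumptions): list CustomActions in an MSI that
# are configured to run deferred without impersonation, i.e. as LocalSystem.
import sys
import msilib

# Windows Installer type bits, per the CustomAction table documentation.
MSIDB_CUSTOM_ACTION_TYPE_IN_SCRIPT = 0x0400       # deferred execution
MSIDB_CUSTOM_ACTION_TYPE_NO_IMPERSONATE = 0x0800  # run in the SYSTEM context


def list_system_custom_actions(msi_path: str) -> None:
    db = msilib.OpenDatabase(msi_path, msilib.MSIDBOPEN_READONLY)
    view = db.OpenView("SELECT Action, Type, Source, Target FROM CustomAction")
    view.Execute(None)
    while True:
        try:
            record = view.Fetch()
        except msilib.MSIError:   # some Python versions raise at end of rows
            break
        if record is None:        # newer versions return None instead
            break
        action, ca_type = record.GetString(1), record.GetInteger(2)
        if (ca_type & MSIDB_CUSTOM_ACTION_TYPE_IN_SCRIPT
                and ca_type & MSIDB_CUSTOM_ACTION_TYPE_NO_IMPERSONATE):
            print(f"{action}: type={ca_type:#x} runs deferred as SYSTEM "
                  f"(source={record.GetString(3)!r}, target={record.GetString(4)!r})")
    view.Close()


if __name__ == "__main__":
    list_system_custom_actions(sys.argv[1])
```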

“The new research program, led by Associate Professor Adeel Razi from the Turner Institute for Brain and Mental Health, in collaboration with Melbourne start-up Cortical Labs, involves growing around 800,000 brain cells living in a dish, which are then “taught” to perform goal-directed tasks. Last year, the team’s research received global attention when the brain cells learned to perform a simple tennis-like computer game, Pong.”

Monash University-led research into growing human brain cells onto silicon chips, with new continual learning capabilities to transform machine learning, has been awarded almost $600,000 AUD in the prestigious National Intelligence and Security Discovery Research Grants Program.

Associate Professor Razi said the research program’s work using lab-grown brain cells embedded onto silicon chips “merges the fields of artificial intelligence and synthetic biology to create programmable biological computing platforms.”

With US car thefts up 25.1% since 2019, it’s clear that high-tech key fob immobilizers aren’t cutting the mustard. But this might: UMich researchers have created a charmingly low-tech anti-theft device that turns the whole car into a security keypad.

Keyless entry and ignition are a brilliant step up in convenience from the old “stick key in hole and turn” method of starting cars, but thieves and hackers with a bit of know-how and some specialist gear are finding late-model keyless cars quick and easy to break into and steal. Between this kind of thing and TikTok car theft challenges, criminals are having a field day in the post-COVID era.

A team at the University of Michigan has come up with a fun solution that doesn’t use wireless signals at all. The “Battery Sleuth,” as they’ve called it, sits between the car’s battery and its electrical system, and measures fluctuations in voltage, looking for a specific set of voltage changes that act as a secret handshake of sorts between driver and car. Only when this handshake is complete will the device let the full power of the battery through to fire up the starter motor.
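
For illustration only, here is a minimal sketch of the “secret handshake” concept: matching a driver-entered sequence of voltage dips against a stored code. The thresholds, sampling scheme and code format are assumptions; the actual Battery Sleuth is a hardware device and its protocol is not described here.

```python
# Conceptual sketch only: decode groups of voltage dips from a battery-line trace
# and compare them to a stored code. All numeric parameters are assumptions.
from dataclasses import dataclass


@dataclass
class HandshakeDetector:
    expected_code: list[int]          # e.g. dips per group: [2, 1]
    nominal_volts: float = 12.6
    dip_threshold: float = 0.4        # volts below nominal that counts as a dip
    gap_samples: int = 5              # quiet samples that end a group of dips

    def decode(self, samples: list[float]) -> list[int]:
        """Turn a voltage trace into counts of dips separated by quiet gaps."""
        groups, pulses, quiet, in_dip = [], 0, 0, False
        for v in samples:
            dipped = (self.nominal_volts - v) >= self.dip_threshold
            if dipped and not in_dip:
                pulses += 1
                quiet = 0
            elif not dipped:
                quiet += 1
                if pulses and quiet >= self.gap_samples:
                    groups.append(pulses)
                    pulses = 0
            in_dip = dipped
        if pulses:
            groups.append(pulses)
        return groups

    def authorize(self, samples: list[float]) -> bool:
        """Only a trace matching the stored code would let power through to the starter."""
        return self.decode(samples) == self.expected_code


if __name__ == "__main__":
    detector = HandshakeDetector(expected_code=[2, 1])
    trace = ([12.6] * 3 + [12.0] * 2 + [12.6] * 2 + [12.0] * 2 +  # two dips
             [12.6] * 6 +                                          # quiet gap
             [12.0] * 2 + [12.6] * 6)                              # one dip
    print("unlock starter:", detector.authorize(trace))            # True
```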

The Biden administration announced on Friday a voluntary agreement with seven leading AI companies, including Amazon, Google, and Microsoft. The move, ostensibly aimed at managing the risks posed by AI and protecting Americans’ rights and safety, has provoked a range of questions, the foremost being: What does the new voluntary AI agreement mean?

At first glance, the voluntary nature of these commitments looks promising. Regulation in the technology sector is always contentious, with companies wary of stifling growth and governments eager to avoid making mistakes. By sidestepping the direct imposition of command and control regulation, the administration can avoid the pitfalls of imposing…

That said, it’s not an entirely hollow gesture. It does emphasize important principles of safety, security, and trust in AI, and it reinforces the notion that companies should take responsibility for the potential societal impact of their technologies. Moreover, the administration’s focus on a cooperative approach, involving a broad range of stakeholders, hints at a potentially promising direction for future AI governance. However, we should also not forget the risk of government growing too cozy with industry.

Still, let’s not mistake this announcement for a seismic shift in AI regulation. We should consider this a not-very-significant step on the path to responsible AI. At the end of the day, what the government and these companies have done is put out a press release.

Secondly, remember that not all cloud services are created equal. So, take time to select a vendor that aligns with your firm’s security, scalability and regulatory compliance requirements. Implement robust security measures at your end and prepare a plan for data backups and disaster recovery.

Lastly, remember that change management is essential. Keep the lines of communication open, address concerns proactively and involve your team throughout the transition process. Navigating these challenges can set you on the path to successful digital transformation.

Are you all set to bid farewell to those paper mountains and extend a warm welcome to the digital cloud? Cloud storage equips your law firm not only to brace for the future but to lead the vanguard in this digital revolution. Bear in mind, the future isn’t a distant entity floating far beyond our reach; it’s here. So, what’s your next strategic move in this exciting game of legal digital transformation?

Microsoft will expand access to important security log data after being criticized for locking detailed audit logs behind a Microsoft 365 enterprise plan that costs $57 per user per month. The logging updates will start rolling out “in September 2023 to all government and commercial customers,” the company said.

“Over the coming months, we will include access to wider cloud security logs for our worldwide customers at no additional cost. As these changes take effect, customers can use Microsoft Purview Audit to centrally visualize more types of cloud log data generated across their enterprise,” Microsoft announced yesterday.

Microsoft Purview Audit Premium is available on the $57-per-user Microsoft 365 E5 plan for businesses as well as the similar A5 education plan and G5 government plan. There’s also a Purview Audit Standard service that comes with a much wider range of plans, including the Microsoft 365 Business Basic tier that costs $6 per user per month.

Remember what it’s like to twirl a sparkler on a summer night? Hold it still and the fire crackles and sparks but twirl it around and the light blurs into a line tracing each whirl and jag you make.

A new patented software system developed at Sandia National Laboratories can find the curves of motion in streaming video and images from satellites, drones and far-range security cameras and turn them into signals to find and track moving objects as small as one pixel. The developers say this system can enhance the performance of any remote sensing application.

“Being able to track each pixel from a distance matters, and it is an ongoing and challenging problem,” said Tian Ma, a computer scientist and co-developer of the system. “For physical security surveillance systems, for example, the farther out you can detect a possible threat, the more time you have to prepare and respond. Often the biggest challenge is the simple fact that when objects are located far away from the sensors, their size naturally appears to be much smaller. Sensor sensitivity diminishes as the distance from the target increases.”
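
To illustrate the general idea of treating each pixel’s history as a signal, here is a minimal sketch, assuming a grayscale frame stack in NumPy, that flags pixels whose temporal variation stands out from the scene’s noise floor. It is a generic frame-stack heuristic, not Sandia’s patented algorithm.

```python
# Illustrative sketch only: flag "moving" pixels by how much their values vary
# over time relative to the scene's overall noise level.
import numpy as np


def detect_motion(frames: np.ndarray, k: float = 4.0) -> np.ndarray:
    """frames: (T, H, W) grayscale stack. Returns a boolean (H, W) motion mask.

    A pixel is flagged when its temporal standard deviation exceeds the scene's
    noise floor by a factor of k (an assumed threshold).
    """
    per_pixel_std = frames.std(axis=0)      # how much each pixel varies over time
    noise_floor = np.median(per_pixel_std)  # robust estimate of sensor noise
    return per_pixel_std > k * noise_floor


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    t, h, w = 30, 64, 64
    frames = rng.normal(100.0, 1.0, size=(t, h, w))  # static scene + sensor noise
    for i in range(t):                                # a single bright pixel drifting right
        frames[i, 32, 10 + i] += 40.0
    mask = detect_motion(frames)
    ys, xs = np.nonzero(mask)
    print(f"{mask.sum()} moving pixels flagged, spanning columns {xs.min()}-{xs.max()}")
```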